1209.2343
\section{I. Introduction} The interplay between physics and biology has recently not only illuminated important underlying biological mechanisms, but also opened new avenues for understanding physics, which in turn has led to interesting applications. A good example is the recent series of works exploring the dynamics of certain species of bacteria in relation to non-Brownian diffusive motion, which can be harnessed to perform work in spite of the limitations of the second law of thermodynamics. It has been known for quite some time that the motion of bacteria such as \textit{Escherichia coli} can be described as a random walk. Each bacterium self-propels in a straight line at an almost constant speed, called a "run", after which it "tumbles" by "unbundling" its flagellar motor and propels itself again in a straight line along a random direction \cite{Berg03}. It was recently found that a bath of such bacteria, without any external force imposed, exhibits a type of diffusive behavior called rectification when placed in an asymmetric ratchet system \cite{Galajda07}. This constitutes an intriguing diffusion phenomenon in the absence of detailed balance. In this work, the bacteria interact inelastically with the funnel ratchets: if they collide with the funnel walls at oblique angles, they slide along the walls until reaching either end of the walls or until their next "tumble" event. Subsequently, a series of works explored bacterial baths interacting with asymmetric saw-toothed rotors, finding similar rectification phenomena \cite{Angelani09,Leonardo10,Sokolov10}. Theoretically, this behavior was first modeled in a two-dimensional system \cite{Wan08,Cates09}. In these works, the bacteria are immersed in a two-dimensional box in which an asymmetric funnel ratchet system is placed, and are modeled as particles undergoing the "run-and-tumble" dynamics described above. 
Without imposing any external force, the bacteria are rectified to the upside of the funnels after a certain amount of time. By the Navier-Stokes equation, such "run-and-tumble" dynamics should be describable as a time-symmetric process because of the low Reynolds number of swimming bacteria \cite{Galajda08}. However, the randomness of the "bundling" and "unbundling" of the bacterial flagellar motors breaks time-reversal symmetry. A hydrodynamic approach is therefore inadequate to give a full picture of this type of dynamics. When the length of the "run" is reduced, the rectification is reduced \cite{Wan08}, even though the sliding behavior is maintained. This corresponds to the experimental finding of Ref.~\cite{Galajda07} that non-swimming or dead bacteria show no rectification. The ratio of the funnel opening to the run length thus affects the rectification magnitude: when this ratio is much larger than unity, the rectification vanishes. This is to be expected, since non-swimming or dead bacteria are subject only to thermal fluctuations, in which case thermal equilibrium and detailed balance are restored. On the other hand, when the sliding behavior is replaced by elastic collisions \cite{Cates09,Reichhardt11}, or by scattering, i.e., when the particles reflect off the funnel walls \cite{Reichhardt11}, the rectification also vanishes. It can thus be deduced that the long "run" lengths of the random walk, coupled with the time-reversal asymmetric interaction of the particles with the funnel walls, break detailed balance and induce the rectification. The fact that such matter is in a strongly non-equilibrium state is crucial to the possibility of generating directed motion with asymmetric geometries alone \cite{Leonardo10}. The novelty of this phenomenon stems from its contrast with previous observations, where external forces were needed to control the random motions of the bacteria. 
In these previous cases, external forces in the form of electric or magnetic fields \cite{Speer10} or optical fields \cite{Xiao10} were applied in order to generate the directed motion. The phenomenon of rectification with an asymmetric geometric ratchet system therefore opens up the possibility of building self-propelled machines driven solely by the intrinsic dynamics of a bacterial bath. In addition, the ordered behavior that emerges from the dynamics of the bacterial bath can be qualitatively and quantitatively controlled by varying the geometry of the ratchet system \cite{Galajda08,Wan08,Cates09} or simply by controlling the amount of oxygen fed to the bacteria \cite{Sokolov10}. So far, theoretical modeling has focused only on point particles, both for simplicity and because point particles reproduce the basic characteristics of the rectification phenomena seen in experiments. However, since bacteria are not point-like, the rectification may exhibit richer characteristics in the presence of the asymmetric ratchet systems. In addition, since different bacterial species or cells have non-negligible differences in their aspect ratios, it is possible that each of these species or cells rectifies differently. In this paper, we show that when the individual particles are replaced with elongated polymers consisting of particles linked to each other by spring forces, the rectification exhibits appreciable quantitative and qualitative differences compared to the individual-particle (single-monomer) case. We invoke a flux balance argument for the rectification of the single-monomer case and an entropic argument for the quantitative differences that arise in the polymer case in comparison with the single-monomer case. \section{II. 
Setup and Results} \begin{figure*} \subfigure[]{ \includegraphics[scale=0.5]{box.pdf} } \subfigure[]{ \includegraphics[scale=0.5]{boxa.pdf} } \caption{Simulation box for the six-monomer polymer case at time (a): $t=0$, and (b): $t=6\times 10^6$.} \label{fig:2box} \end{figure*} Our system consists of a two-dimensional box of dimensions $x\times y=L\times L$ with an array of funnel-shaped barriers placed along the $x=L/2$ line, separating the box into chambers $1$ and $2$. The array consists of $N_B$ barriers of length $L_B$ arranged in $V$ shapes with opening angles of $2\theta$ and distance $l_d=2L/N_B$ from each other. The box is populated with $N_{poly}$ polymers of $N_{mon}$ monomers each. Initially, $1200$ particles are distributed evenly between chambers $1$ and $2$. The dimensions of the box are fixed at $L=99$, in which we set up a system of $28$ barriers of length $5$, with the tilting angle from the vertical axis set uniformly at $\theta=30$ degrees (Fig.~\ref{fig:2box}). Following Ref.~\cite{Wan08}, the dynamics of the particles is given by an overdamped Langevin equation: \begin{equation} \label{eq:EOM} \eta\frac{d{\bf R}_l}{dt}={\bf F}_{Bl}+{\bf F}_k({\bf R}_{lj})+{\bf F}_{ml}(t)+{\bf F}_{Tl}, \end{equation} where $\eta=1$ is the phenomenological damping coefficient. No hydrodynamics is considered in our model. 
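As a concrete illustration, the overdamped dynamics of Eq.~\eqref{eq:EOM} can be integrated with a simple explicit Euler step. The following sketch is our own, not the authors' code; the position and force values are illustrative.

```python
import numpy as np

def langevin_step(R, F_barrier, F_spring, F_motor, F_thermal, dt=0.0005, eta=1.0):
    """One explicit Euler step of the overdamped Langevin equation
    eta * dR/dt = F_B + F_k + F_m + F_T, i.e.
    R(t + dt) = R(t) + (dt / eta) * (sum of forces)."""
    F_total = F_barrier + F_spring + F_motor + F_thermal
    return R + (dt / eta) * F_total

# Example: a single monomer driven only by a ballistic force of
# magnitude 2 (the single-monomer value used later in the text),
# running along +x from an arbitrary starting position.
R = np.array([10.0, 20.0])
zero = np.zeros(2)
F_m = 2.0 * np.array([1.0, 0.0])
R_new = langevin_step(R, zero, zero, F_m, zero)
```

With $dt=0.0005$ and $\eta=1$, each step advances the monomer by $0.001$, so $2000$ steps reproduce the effective run length of $2$ used in the simulations below.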
${\bf F}_{Bl}$ describes the force that repels the particles or polymers in the direction perpendicular to the barrier, as well as in radial directions from the ends of the barriers: \begin{equation} \label{eq:FB} {\bf F}_{Bl} = \sum\limits_{i=1}^{N_B}\Bigg[\frac{F_B r_1}{r_B}\delta(r_1){\bf\hat{R}}^{\pm}_{li}+\frac{F_B r_2}{r_B}\delta(r_2){\bf\hat{R}}^{\bot}_{li}\Bigg], \end{equation} where $F_B=30$, $r_B=0.05$, $r_1=r_B-R^{\pm}_{li}$, $R^{\pm}_{li}=|{\bf R}_{l}-{\bf R}_{Bi}\pm (L_B/2){\bf\hat{p}}_{\parallel i}|$, ${\bf\hat{R}}^{\pm}_{li}=({\bf R}_{l}-{\bf R}_{Bi}\pm (L_B/2) {\bf\hat{p}}_{\parallel i})/R^{\pm}_{li}$, $r_2=r_B-R^{\perp}_{li}$, $R^{\perp}_{li}=|({\bf R}_{l}-{\bf R}_{Bi})\cdot{\bf\hat{p}}_{\perp i}|$, ${\bf\hat{R}}^{\perp}_{li}=[({\bf R}_{l}-{\bf R}_{Bi})\cdot{\bf\hat{p}}_{\perp i}]\,{\bf\hat{p}}_{\perp i}/R^{\perp}_{li}$. Here, ${\bf R}_{l}$ is the position of the particle, and ${\bf R}_{Bi}$ is the position of the center of the barrier. ${\bf\hat{p}}_{\perp i}$ and ${\bf\hat{p}}_{\parallel i}$ are the unit vectors perpendicular and parallel to the barrier, respectively. The first term turns on when the radial distance of the particle from a tip of the barrier is less than $r_B$. Similarly, the second term in Eq.~\eqref{eq:FB} turns on when the perpendicular distance between the particle and the axis of the barrier is less than $r_B$. The first term therefore gives the repelling force in radial directions from the barrier tips, with a magnitude proportional to the distance the particle has penetrated past the radius $r_B$ at the barrier tips, i.e., $r_1$. The second term gives the repelling force in the direction perpendicular to the barrier axis, with a magnitude proportional to the distance the particle has penetrated past the half-thickness of the barrier, i.e., $r_2$. With this setup, the barrier force cancels the component of the particle's ballistic force perpendicular to the barrier as the particle approaches the barrier. 
Therefore, the particle slides along the barrier with a partial alignment. In addition, this barrier force cancels the component of the particle's ballistic force along a radial direction from the ends of the barrier, so that the two barriers are smoothly joined at the bottom tip of the funnel. Particles are also able to slide smoothly around the upper tips of the funnels. ${\bf F}_k({\bf R}_{lj})$ describes the force between the monomers in each polymer, modeled as a spring force: \begin{equation} \label{eq:Fk} {\bf F}_k({\bf R}_{lj}) = k(R-l){\bf\hat{R}}_{lj}, \end{equation} where $k$ is the spring constant, $R=|{\bf R}_l-{\bf R}_j|$ is the relative distance between the monomers, ${\bf\hat{R}}_{lj}=({\bf R}_l-{\bf R}_j)/|{\bf R}_l-{\bf R}_j|$ is the unit displacement vector between the monomers, and $l$ is the equilibrium distance between the monomers, which is set at $1$. The first monomer of each polymer is driven by a ballistic force of magnitude $|{\bf F}_m|$ that induces a run length of $l_r=ndt|{\bf F}_m|$ every $n$ time steps of size $dt$, before it reorients. The direction of the reorientation is random and mimics the rotational diffusion in this system. The whole polymer is driven by this ballistic force, with the spring forces pushing and pulling between the monomers. In our model, the polymers not only slide along the barriers but also bend around the barriers when encountering them. We adjust the spring constant of the polymers so that they bend naturally around the tips of the funnels. Over $n$ time steps of size $dt$ between tumbles, a ballistic force of magnitude $|{\bf F}_{m1}|$ induces a run length of $ndt|{\bf F}_{m1}|$ in the single-monomer case. As the number of monomers is increased, spring forces pull in the direction opposite to that of the ballistic force of the polymer. 
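The bond force of Eq.~\eqref{eq:Fk} can be transcribed directly. This is our sketch, not the authors' code; the numerical value of $k$ is an assumption, since the text only states that the spring constant is tuned so the polymers bend naturally around the funnel tips.

```python
import numpy as np

def spring_force(R_l, R_j, k=30.0, l=1.0):
    """Bond force on monomer l due to neighbour j, written exactly as
    F_k = k (R - l) R_hat_lj, with R_hat_lj = (R_l - R_j) / |R_l - R_j|
    and l the equilibrium bond length.
    k = 30.0 is an illustrative value, not taken from the paper."""
    d = R_l - R_j
    R = np.linalg.norm(d)
    return k * (R - l) * (d / R)

# A bond stretched to R = 2 with equilibrium length l = 1:
f = spring_force(np.array([2.0, 0.0]), np.array([0.0, 0.0]))
```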
The total magnitude of the spring forces scales with the number of monomers in the polymer. In order to induce the same effective run length as in the single-monomer case, the ballistic force therefore has to be increased proportionately with the number of monomers: we set $|{\bf F}_m|=|{\bf F}_{m1}|N_{mon}$. We estimate the time for the last monomer to reorient in the direction of the ballistic force of the first monomer as $\Delta t=(N_{mon}-1)l/|{\bf F}_m|$. For the polymers we consider below, $\Delta t$ is always less than $ndt$, the time period of each run. Also, the rotational diffusion of the polymer, which is characterized by the Rouse time, is shorter than $ndt$. As an illustration, take the longest polymer in our study, the six-monomer polymer, with an effective run length of $10$ resulting from $10000$ time steps of $dt=0.0005$. In order to induce this effective run length for the entire polymer, a ballistic force of $12$ is imposed on the first monomer. We estimate the Rouse time for this polymer to be $\tau\sim{l N_{mon}^2}/D=0.05$, which is smaller than the time period of each run, $ndt=5$. The correlation between consecutive runs of the polymers can thus be considered negligible. \begin{figure*} \begin{center} \includegraphics[scale=0.85]{recm.pdf} \caption{Evolution of the normalized particle densities, $\rho_1/\rho_{10}$ for chamber $1$ and $\rho_2/\rho_{20}$ for chamber $2$, comparing the single-monomer case with the six-monomer polymer case for the effective $l_r=2$. $\rho_{10}$ and $\rho_{20}$ are the initial densities in chambers $1$ and $2$, respectively. The dotted lines represent the single-monomer case, whereas the solid lines represent the six-monomer polymer case.} \label{fig:recm} \end{center} \end{figure*} As our first set of simulations, we set the temperature term, ${\bf F}_{Tl}$, to zero. 
Each single monomer or polymer moves for $2000$ time steps of $dt=0.0005$ before randomly reorienting. With these parameters, an effective run length of $2$, short enough to mimic Brownian motion, is induced for both the single-monomer and the polymer case. To quantify the rectification, we use the ratio $r=\rho_1/\rho_2$, where $\rho_1$ is the density of particles in chamber $1$ and $\rho_2$ is that in chamber $2$. The initial value of $r$ is $1$. The single-monomer case gives a rectification of magnitude $1.083$. For the six-monomer polymer case with $200$ polymers, the rectification is increased to $1.246$. In Fig.~\ref{fig:recm}, we show this enhancement of the rectification via the time evolution of the particle densities in the two chambers, normalized by the densities at time $t=0$. \begin{figure*} \begin{center} \includegraphics[scale=0.85]{rpolyerr.pdf} \caption{Rectification magnitude for systems with varying numbers of monomers, $N_{mon}$, for the effective $l_r=10$, with all other parameters fixed. The error bars are obtained from the mean squared error of the rectification magnitudes of a sample of $10$ simulations.} \label{fig:rpoly} \end{center} \end{figure*} Next, we set the run length at $10$ by increasing the number of steps taken during the run to $10000$, using the same ballistic force as above. We then observe the change in the magnitude of the rectification by varying the number of polymers and the corresponding number of monomers in each polymer according to $N_{poly}\times N_{mon}=1200$ (Fig.~\ref{fig:rpoly}). We find that the rectification increases almost monotonically as the number of monomers is increased, starting from the single-monomer case. 
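Since the two chambers have equal area, the rectification measure $r=\rho_1/\rho_2$ reduces to a simple count ratio. A sketch follows; labeling chamber $1$ as the $x>L/2$ half is our illustrative assumption, not specified in the text.

```python
import numpy as np

def rectification_ratio(x_positions, L=99.0):
    """Rectification r = rho_1 / rho_2. With chambers of equal area on
    either side of x = L/2, the density ratio equals the particle-count
    ratio. Which half is 'chamber 1' is an illustrative choice here."""
    n1 = np.sum(x_positions > L / 2)
    n2 = np.sum(x_positions < L / 2)
    return n1 / n2

# Four particles, three of them in the x > L/2 half:
r = rectification_ratio(np.array([60.0, 70.0, 80.0, 30.0]))
```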
\begin{figure*} \begin{center} \includegraphics[scale=0.85]{rbl.pdf} \caption{Rectification magnitude for the six-monomer polymer system with varying equilibrium distances between monomers, $l$, for the effective $l_r=2$, with all other parameters fixed.} \label{fig:rbl} \end{center} \end{figure*} Focusing on the six-monomer polymer case with run length $2$, we vary the equilibrium distance between the monomers, $l$, and observe the effect on the rectification (Fig.~\ref{fig:rbl}). We find that as the total equilibrium length of the polymers, $l_p = (N_{mon}-1)l$ ($N_{mon} = 6$ in this case), increases, the magnitude of the rectification increases as well. This is in line with the above observation that the rectification increases with the number of monomers in each polymer. \begin{figure*} \begin{center} \includegraphics[scale=0.85]{rang.pdf} \caption{Rectification magnitude for the six-monomer polymer system with varying funnel opening angles, $\theta$, for the effective $l_r=2$, with all other parameters fixed.} \label{fig:rang} \end{center} \end{figure*} Finally, we vary the opening angle $2\theta$ of the barriers for the same six-monomer polymer case with run length $2$ and $l=1$ (Fig.~\ref{fig:rang}). Similar to the single-monomer case, we find that as $\theta$ increases, i.e., as the size of the opening between the barriers, $l_o=l_d-2L_B\sin\theta$, decreases, the magnitude of the rectification increases. However, the rectification for the polymer case is enhanced up to $1.4$ times compared to the single-monomer case when the tilting angle of the barriers from the vertical is increased to $55$ degrees. \section{III. Analyses and Discussions} \subsection{Flux balance argument} \begin{figure*} \begin{center} \includegraphics[scale=0.3]{bddiag.pdf} \caption{Geometry of the space between two funnels depicting two possible trajectories of a particle colliding with one of the barriers. 
One trajectory is of an elastic collision and the other is of an inelastic collision, where the particle moves along the barrier after collision.} \label{fig:delr} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[scale=0.85]{figcomp.pdf} \caption{Comparison between values of the rectification magnitude obtained from simulation and from the analytic analysis for the monomer case with varying run lengths.} \label{fig:comp} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[scale=0.85]{figcompb.pdf} \caption{Comparison between values of the rectification magnitude obtained from simulation and from the analytic analysis for the monomer case with varying barrier tilting angles.} \label{fig:compb} \end{center} \end{figure*} In this subsection, we propose an explanation of the rectification seen in the previous section using a flux balance argument. In the regions $0\leq x<L/2-h$ and $L/2+h<x\leq L$, where $h = (L_B\cos\theta)/2$, particles do not undergo collisions with the barriers, so the evolution of the particle flux $J$ along the $x$ direction can be written, following Eq.~(5) in Ref.~\cite{Rivero89}, as: \begin{equation} \label{eq:convf1} \partial_t J = -Jr_{tot} - \frac{l_o}{l_d}v^2\nabla\rho(x), \end{equation} where $r_{tot} = r_+ + r_-$ is the sum of the rates at which particles move from chamber 1 into chamber 2, $r_+$, and vice versa, $r_-$; $v = |{\bf F}_m|$, which, since $\eta=1$, can also be interpreted as the particle speed; and $\rho(x)$ is the particle number density averaged over the $y$ direction, defined as $\rho(x) = \frac{1}{L}\int^L_0\rho(x,y)dy$. The factor $l_o/l_d$ in front of the second term ensures that the flux change in the $x$ direction is restricted by the size of the opening between the tips of the neighboring funnels, $l_o$. In the region $L/2-h\leq x\leq L/2+h$, particles collide with the barriers inelastically, generating a biased impulse in one direction. 
The gain in flux due to the inelastic collisions with the barriers is described by the difference between the trajectory resulting from an inelastic collision and that from an elastic collision. Defining this difference as $\Delta r = r_+ - r_-$, the flux balance equation above is modified, following Ref.~\cite{Rivero89}, as: \begin{equation} \label{eq:convf2} \partial_t J = -Jr_{tot} - \frac{l_o}{l_d}v^2\nabla\rho(x) + \frac{l_d-l_o}{l_d}v\rho(x)r_{bias}, \end{equation} where \begin{equation} \label{eq:convf3} r_{bias} = \frac{\Delta r\cos\theta}{2hndt}. \end{equation} The factor $(l_d-l_o)/l_d$ here indicates that the biased flux change in the $x$ direction is caused by the interaction of the particles with the funnels. We calculate $\Delta r$ when the run length $l_r$ is on a scale similar to the distance between the $V$-shaped funnels, $l_d$, as follows: \begin{equation} \label{eq:convf4} \Delta r = \frac{2 L_B\sin\theta}{l_d}\frac{1}{2\pi}\int^{l_r}_0 dy\int^{\theta_0}_0 (l_r-\frac{y}{\cos\theta_c})(1-\sin\theta_c)d\theta_c, \end{equation} where $\theta_0 = \cos^{-1}(y/l_r)$ and $\theta_c$ is the angle between the barrier and the trajectory of the particle heading toward the barrier. Here, the term $l_{rem} = l_r-\frac{y}{\cos\theta_c}$ is the remainder of the trajectory of a particle colliding with the barrier, and $l_{rem}(1 - \sin\theta_c)$ is the difference between the trajectory resulting from an inelastic collision and that from an elastic collision (Fig.~\ref{fig:delr}). In a steady state, $\partial_t J = J = 0$. Therefore, Eq.~\eqref{eq:convf2} becomes: \begin{equation} \nabla\rho(x) = \frac{(l_d-l_o)}{l_o}\frac{\rho(x)}{v}r_{bias}, \end{equation} which gives the solution for the averaged particle number density as follows: \begin{equation} \rho(x) = A\exp[\frac{(l_d-l_o)}{l_o}\frac{x}{v}r_{bias}], \end{equation} with $A$ being an integration constant. 
Designating $\rho_1 = \rho(L/2+h)$ and $\rho_2 = \rho(L/2-h)$, the rectification can then be obtained as: \begin{equation} \label{eq:ratio} \rho_1/\rho_2 = \exp[\frac{(l_d-l_o)}{l_o}\frac{2h}{v}r_{bias}]. \end{equation} We next compare the rectification magnitudes obtained from this analytic analysis with those obtained from the numerical simulations for the monomer case with varying run lengths as well as barrier tilting angles. Fig.~\ref{fig:comp} shows that, for the case with varying run lengths, in agreement with the simulation results, the rectification magnitude increases almost linearly as the run length increases. The integral in Eq.~\eqref{eq:convf4} is found to scale linearly with respect to $l_r$. Therefore, using Eq.~\eqref{eq:ratio}, the rectification magnitude scales like $\sim\exp(l_r)$. For the case with varying barrier tilting angles, Fig.~\ref{fig:compb} shows that the rectification magnitude at first increases slowly as the angle increases. However, when the angle approaches that at which the funnels completely block the monomers on one side of the box from the other side, the increase in the rectification magnitude is enhanced to the point of approaching an asymptote. The asymptote is determined by this angle of total blockage of the monomers by the funnel system, $\theta_d=\sin^{-1}(l_d/(2L_B))$. According to Eq.~\eqref{eq:ratio}, taking into account Eq.~\eqref{eq:convf3} and Eq.~\eqref{eq:convf4}, the rectification magnitude scales as $\sim\exp\theta$ when $\theta$ is small. When $(\theta_d-\theta)$ is small, the rectification magnitude scales as $\sim\exp(1/\sin(\theta_d-\theta))$. 
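The double integral in Eq.~\eqref{eq:convf4} has no elementary closed form, but it is straightforward to evaluate numerically. The following midpoint-rule sketch is our own construction, using the geometry values of Section II; the grid resolution is an implementation choice.

```python
import numpy as np

def delta_r(l_r, L_B=5.0, theta=np.pi / 6, l_d=2 * 99 / 28, n=400):
    """Midpoint-rule evaluation of
    Delta r = (2 L_B sin(theta) / l_d) * (1 / 2pi)
              * int_0^{l_r} dy int_0^{theta_0} (l_r - y/cos(tc)) (1 - sin(tc)) dtc,
    with theta_0 = arccos(y / l_r). n sets the grid resolution."""
    ys = (np.arange(n) + 0.5) * l_r / n      # midpoints in y
    total = 0.0
    for y in ys:
        theta0 = np.arccos(y / l_r)
        tcs = (np.arange(n) + 0.5) * theta0 / n   # midpoints in theta_c
        l_rem = l_r - y / np.cos(tcs)             # remaining run after impact
        total += np.sum(l_rem * (1.0 - np.sin(tcs))) * (theta0 / n) * (l_r / n)
    return (2 * L_B * np.sin(theta) / l_d) * total / (2 * np.pi)
```

The resulting $\Delta r$ feeds into $r_{bias}$ via Eq.~\eqref{eq:convf3} and hence into the density ratio of Eq.~\eqref{eq:ratio}.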
\subsection{Entropic argument} \begin{figure*} \begin{center} \includegraphics[scale=0.85]{rtherm.pdf} \caption{Rectification magnitude for six-monomer polymer systems subjected only to the thermal force with varying $\gamma^2$ values.} \label{fig:rtherm} \end{center} \end{figure*} \begin{figure*} \subfigure[]{ \includegraphics[scale=0.5]{elpol6a.pdf} } \subfigure[]{ \includegraphics[scale=0.5]{elpol6.pdf} } \caption{Simulation box for the six-monomer polymer case undergoing elastic collisions with funnel walls at time, (a): $t=0$, and (b): $t=6\times 10^6$.} \label{fig:2box6} \end{figure*} \begin{figure*} \begin{center} \includegraphics[scale=0.3]{bdelas.pdf} \caption{Geometry of the space between two funnels depicting the trajectory of a polymer colliding elastically with one of the barriers. The dashed lines represent the trajectories of the first particle of the polymer following Snell's Law, with the dotted lines being the normals to the barriers. The solid lines depict the biased trajectories of the first particle as it is pulled from behind by the spring forces of the following particles.} \label{fig:bdelas} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[scale=0.85]{recel.pdf} \caption{Evolution of normalized particle densities, $\rho_1/\rho_{10}$ for chamber $1$ and $\rho_2/\rho_{20}$ for chamber $2$ for the six-monomer polymer system with elastic collisions. The dotted line represents the particle density in chamber 2 whereas the solid line represents that in chamber 1.} \label{fig:recel} \end{center} \end{figure*} One significant difference between an elongated chain and an individual particle is that an entropic force is induced when the elongated chain interacts with the anisotropic funnel. As a chain crosses the opening, the part of the chain in chamber 1 has more conformational degrees of freedom than the part in chamber 2. As a result, the chain is pushed into chamber 1. 
The longer the chain, the higher the probability that the part of the chain in chamber 1 is the longer part, giving an increased entropic force. This increases the rectification as we increase the number of monomers in the chain or the bond length between the monomers. The dependence of the conformational entropy on the chain length goes as $(N_{mon}-1)l \sim l^2$. In the case of increasing the barrier tilting angle, similar to the monomer case, the entropic force of the polymers is enhanced owing to the increased probability of the polymers interacting with the barriers. The entropic force also ensures that the rectification magnitude is enhanced compared to the monomer case, owing to the larger number of conformational degrees of freedom in the chains. To demonstrate this entropic contribution as a force that pushes the chains to the upside of the funnels, we remove the ballistic force of the chains. Instead, the bath of polymers is subjected only to a diffusive thermal force ${\bf F}_{Tl}$, with $\langle F_{Tl} \rangle=0$ and $\langle F_{Tl}(t)F_{Tj}(t') \rangle=2\eta k_B T\delta_{lj}\delta(t-t')$, where $k_B$ is the Boltzmann constant. A bath of $200$ polymers with six monomers each is simulated over $6$ million time steps of $dt=0.0005$. A funnel system similar to the cases above is set up with $\theta=45$ degrees. Interestingly, we observe that even without the ballistic force, and in contrast with the single-monomer case, a thermal force with $\gamma^2=2\eta k_B T=1$ induces a rectification of $1.2074$ for these six-monomer polymers. Fig.~\ref{fig:rtherm} shows the magnitude of the rectification as $\gamma^2$ is increased from $1$ to $10$: a higher rectification is observed as the thermal force, i.e., the temperature, is increased. With $\theta=45$ degrees, $l_o$ is $1.18$, which is comparable with the total equilibrium length of the chain, $(N_{mon}-1)l=5$ in this case. 
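On a discrete time grid, the correlator $\langle F_{Tl}(t)F_{Tj}(t')\rangle=\gamma^2\delta_{lj}\delta(t-t')$ implies a per-step variance of $\gamma^2/dt$, because the Dirac delta contributes $1/dt$ on the grid. A sketch of how such a thermal kick might be sampled (our construction, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

def thermal_force(gamma2=1.0, dt=0.0005, size=2):
    """Sample a discrete-time thermal force with <F_T> = 0 and
    <F_T(t) F_T(t')> = gamma2 * delta(t - t'). On a grid of step dt the
    delta function becomes 1/dt, giving a per-step standard deviation
    of sqrt(gamma2 / dt)."""
    return rng.normal(0.0, np.sqrt(gamma2 / dt), size=size)

# Many samples should reproduce the variance gamma2 / dt = 2000.
kicks = thermal_force(gamma2=1.0, dt=0.0005, size=200000)
```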
It should be noted that the rectification disappears if $l_o$ is much larger than the chain length. On the other hand, when elastic collisions between the particles and the barrier walls are considered, a \textit{reversed} rectification is observed. Fig.~\ref{fig:2box6} shows the simulation box for a system of six-monomer polymers at the initial time and after 6 million time steps. The ballistic force given to the first particle of every polymer is $|{\bf F}_m|=2N_{mon}=12$, as in the system shown in Fig.~\ref{fig:2box}. The run length is likewise set at $l_r=10$ with a time step of $dt=0.0005$, and the barrier setup is the same as in the cases above. When the first particle of a polymer hits a barrier wall, it reflects from the wall following Snell's Law. However, its effective post-collision trajectory is biased such that the angle of reflection is always less than the angle of incidence (Fig.~\ref{fig:bdelas}). This bias is caused by the pulling spring force of the particles following behind the first particle. As a result, the probability for the polymers to cross from the downside to the upside of the funnels is reduced compared to the probability of crossing from the upside to the downside. Fig.~\ref{fig:recel} shows the evolution of the normalized particle densities in chambers 1 and 2 for this six-monomer polymer case. The magnitude of the reversed rectification is $1.653$, and it is reduced when the number of monomers is reduced. This is expected, as the magnitude of the pulling spring force decreases proportionately with the number of particles following behind the first one; when the pulling spring force decreases in magnitude, the bias in the post-collision trajectory of the first particle decreases as well. \section{IV. 
Conclusions} In this work, we studied the rectification behavior of a bath of polymers performing run-and-tumble dynamics in a two-dimensional box with a system of funnel-shaped barriers. Similar to the single-monomer case, the polymers accumulate on the side of the box toward which the funnels open. As we change the opening angle of the funnels, the rectification behavior of the polymers changes in a manner similar to the single-monomer case: the rectification increases as we increase the opening angle. However, we found that the rectification of the polymers over the asymmetric ratchet system is enhanced compared to the single-monomer case. The rectification magnitude increases proportionately with the number of particles composing the polymers and, correspondingly, with the bond length between the particles in the polymers. We proposed that the additional degrees of freedom intrinsic to the non-negligible aspect ratio of the elongated polymers play a role in this enhancement. We confirmed this explanation by performing similar simulations of the polymer systems with the ballistic forces removed, allowing the polymers to undergo only diffusive thermal fluctuations. In such a system, we observed that, in contrast with the single-monomer case, which exhibits no rectification, the polymer case shows significant rectification even with small thermal fluctuations, and the rectification increases as we increase the magnitude of the fluctuations. In a separate simulation with ballistic forces, we replaced the inelastic collisions with elastic ones. In this case, we observed a \textit{reversed} rectification: the elongated polymers attempting to cross the barrier system from the downside of the funnels are biased such that they tend to stay on the downside of the funnels. 
This contrasts with the single-monomer case of Refs.~\cite{Cates09,Reichhardt11}, where the rectification vanishes when the collisions of the particles with the barriers are elastic. This modification of the rectification dynamics by the non-negligible aspect ratio of the polymers implies that rectification, or work done by baths of soft matter in the presence of asymmetric ratchet systems, can be further controlled via the internal structure of the soft matter employed. It also implies that in real systems the aspect ratio of different species of bacteria may play a role in modifying their rectification magnitudes. \section{V. Acknowledgements} YSJ acknowledges the Max Planck Society (MPG), the Korea Ministry of Education, Science and Technology (MEST), Gyeongsangbuk-Do and Pohang City for the support of the Independent Junior Research Group at the Asia-Pacific Center for Theoretical Physics (APCTP), as well as the National Research Foundation of Korea under grants funded by the Korean government (MEST) (NRFC1ABA001-2011-0029960 and 2012R1A1A2009275). \bibliographystyle{prsty}
1612.09318
\section*{Acknowledgements} This work was supported by the National Science Foundation. \subsection{Signal Asymmetry} \label{sec:signal_asymmetry} As described in section~\ref{sec:Measurement_scheme}, the accumulated phase $\Phi$ was read out by resonantly addressing the $H \rightarrow C$ transition with linearly polarised light and monitoring the resulting fluorescence. The state readout laser was switched between orthogonal polarisations, $\hat{X}$ and $\hat{Y}$, at $100$~kHz (with $1.2~\upmu$s of dead time between polarisations) in order to normalize against molecular flux variations. By switching at a rate fast enough that each molecule experienced both polarisations, we achieved nearly photon-shot-noise-limited phase measurements \cite{Kirilov2013}. With a sufficiently wide laser beam, all molecules were completely optically pumped by both laser polarisations during their $\sim$20~$\upmu$s fly-through time. We induced approximately one fluorescence photon from each molecule by projecting the molecule state onto the two orthogonal spin states excited by laser beams with orthogonal polarisations. The rapid switching of the laser polarisation resulted in a modulated PMT signal, $S(t)$, as shown in figure~\ref{fig:modulation}. For the following discussion we consider the polarisation state to switch at a time $t=0$. Immediately after, there is a rapid increase in fluorescence as the molecules in the laser beam are quickly excited; while $\Omega_rt\ll1$, where $\Omega_r\sim2\pi\times1$~MHz is the Rabi frequency on the $H$ to $C$ transition, the fluorescence increases as $S(t)\propto\Omega_r^2\times t^2$. At later times, when $\Omega_rt\gtrsim1$, population is about evenly mixed between the $H$ and $C$ states (since $\Omega_r\gtrsim\gamma_C$); hence, $S(t)$ decays exponentially with a time constant of roughly $1/(2\gamma_C)\approx1~\upmu$s. Molecules that were not present at $t=0$ continue to enter the laser beam, causing $S(t)$ to approach a steady state. 
The laser is then turned off and the signal decays exponentially with time constant $1/\gamma_C\approx0.5~\upmu$s. The next laser pulse, with orthogonal polarisation, is turned on 1.2~$\upmu$s $\approx2.5/\gamma_C$ after the end of the previous one to prevent significant overlap of contributions to $S(t)$ induced by different polarisations. A low-pass filter in the PMT voltage amplifier with a cut-off frequency of $2\pi\times2$~MHz removed any short timescale dynamics from $S(t)$, and prevented aliasing of high frequency components in the signal given our fixed digitization rate of 5 MSa/s. To determine the fluorescence $F(t)$ produced by each polarisation state, we subtracted a time-dependent background, $B(t^{\prime})$, taken from data with no molecule fluorescence present, i.e. $F(t)=S(t)-B(t^{\prime})$. Examples of the extracted $F(t)$ and $B(t^{\prime})$ time series are shown in figure~\ref{fig:modulation}A and B, respectively. $B(t^{\prime})$ was modulated in time due to scattered light from the state readout laser beam and has a DC electronic offset intrinsic to the PMTs. The first millisecond of data, which contains no fluorescence, was used to determine $B(t^{\prime})$. We assumed that $B(t^{\prime})$ was periodic with the switching of the laser polarisation but did not depend on the polarisation; we inferred its value by averaging together the recorded PMT signal across all polarisation bins for ${\approx}1$~ms of data taken before the arrival of the molecule pulse. Since molecule beam velocity variations caused jitter in the temporal position of the molecule pulse within the trace, 9~ms of data were collected per pulse, despite the fact that only the ${\approx}$2~ms of strong signal with $F(t)\gg B(t^{\prime})$ and ${\approx}1$~ms of background contained useful information for the spin precession measurement. 
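The background estimation and subtraction described above can be illustrated with a short numerical sketch. This is a minimal model rather than the analysis code: it assumes a trace sampled at the 5 MSa/s digitization rate, a 10 us polarisation period (50 samples), and synthetic data; all function and array names are hypothetical.

```python
import numpy as np

PERIOD = 50  # samples per 10 us polarisation cycle at 5 MSa/s

def estimate_background(trace, n_bg_samples):
    """Fold the molecule-free leading portion of the trace into one
    polarisation period and average, giving a periodic B(t')."""
    folded = trace[:n_bg_samples].reshape(-1, PERIOD)
    return folded.mean(axis=0)

def subtract_background(trace, b_period):
    """Tile the one-period background across the trace: F(t) = S(t) - B(t')."""
    reps = int(np.ceil(len(trace) / len(b_period)))
    b_full = np.tile(b_period, reps)[:len(trace)]
    return trace - b_full

# synthetic check: purely periodic background plus a known flat fluorescence
t = np.arange(10 * PERIOD)
background = 100 + 20 * np.sin(2 * np.pi * (t % PERIOD) / PERIOD)
fluorescence = np.where(t >= 5 * PERIOD, 300.0, 0.0)  # molecules arrive late
trace = background + fluorescence

b_est = estimate_background(trace, 5 * PERIOD)  # first half: background only
f_est = subtract_background(trace, b_est)
```

Folding only the molecule-free leading portion of the trace mirrors the use of the first ${\approx}1$~ms of data to determine $B(t^{\prime})$.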
\begin{figure}[!htbp] \centering \includegraphics[width=12.7cm]{fluorescence_background_asymmetry_v2.pdf} \caption{(A) Molecule fluorescence signal $F(t)$ in photoelectrons/s induced by $\hat{X}$ (blue) and $\hat{Y}$ (red) readout laser polarisations. Lines show the raw data for a single trace consisting of an average of 25 molecule pulses. Shaded regions show the waveform averaged over 16 traces. (B) Background signal $B(t^{\prime})$ in photoelectrons/s obtained before the arrival of molecules in the state readout region. (C) Integrated fluorescence signals $F_X$ and $F_Y$ throughout the molecule pulse. Dashed lines denote the region with $F=(F_X+F_Y)/2>3\times10^5~\rm{s}^{-1}$, used as a typical cut for inclusion in eEDM data. Points are spaced by 5~$\upmu$s. (D) Computed asymmetry throughout the molecule pulse. In this example, 18 of the ungrouped asymmetry points are grouped together to compute the mean and uncertainty shown as the grouped asymmetry.}\label{fig:modulation} \end{figure} Integrating $F(t)$ over times associated with pairs of orthogonally polarised laser pulses resulted in signals $F_X, F_Y$. The integration was performed over a specified time window that we denoted as a `polarisation bin'. Figure~\ref{fig:cuts}B shows two typical choices of polarisation bin and illustrates that the extracted eEDM is not significantly affected by this choice. Figure \ref{fig:pixel_plot} shows that most of the extracted quantities did not vary linearly within the polarisation bin (Pol.\ Cycle Time Dependence column). After polarisation binning, the data displayed a fluorescence signal modulated by the envelope of the molecule pulse, as in figure~\ref{fig:modulation}C. Figure~\ref{fig:modulation}D shows the asymmetry, $\mathcal{A}$, computed from these data. The asymmetry is computed for each 10~$\upmu$s polarisation cycle, so that for the $i^{\rm{th}}$ cycle we have \begin{equation} \A_i=\frac{F_{X,i}-F_{Y,i}}{F_{X,i}+F_{Y,i}}. 
\label{eq:asym_bins} \end{equation} The molecule phase, and hence asymmetry (see equation~\ref{eq:asymmetry}), had a linear dependence on the time after ablation because the molecules precessed in a magnetic field over a fixed distance; the slower molecules, which arrived later, precessed more than the faster molecules, which arrived earlier. We applied a fluorescence signal threshold cut of around $F=(F_X+F_Y)/2\ge3\times10^5~{\rm s}^{-1}$, indicated by dashed lines in figure~\ref{fig:modulation}C,D. Section \ref{sec:data_cuts} describes the threshold choice in detail. To determine the statistical uncertainty in $\A$, $n\approx$ 20--30 adjacent asymmetry points were grouped together. For each group, $j$, centred around a time after ablation $t_j$, we calculated the mean, $\bar{\mathcal{A}}_j$, and the uncertainty in the mean, $\delta\bar{\mathcal{A}}_j$, depicted as red points and error bars in figure~\ref{fig:modulation}D. For smaller $n$, the variance of the sample variance of the mean grows; in that case, error propagation that utilises a weighted mean of the data ultimately leads to an underestimate of the final statistical uncertainty \cite{Kenny1951}. For larger $n$, the mean varies significantly within the group due to velocity dispersion, and the variance in the mean grows in a manner not determined by random statistical fluctuations. For the range $n=$ 20--30 we observed no significant change in any quantities deduced from the measured asymmetry. As described earlier in this section, the background, $B(t^{\prime})$, which we subtracted from the PMT signal, $S(t)$, was observed to be correlated with the fast switching of the readout laser beam polarisation. This can arise, for example, if the two polarisations have different laser beam intensities or pointings. We chose to use a polarisation-independent $B(t^{\prime})$ by averaging over the two polarisation states.
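The asymmetry computation of equation~\ref{eq:asym_bins} and the subsequent grouping can be sketched as follows; a minimal illustration with hypothetical counts and group size, not the actual analysis code.

```python
import numpy as np

def asymmetry(f_x, f_y):
    """Per-cycle asymmetry A_i = (F_X,i - F_Y,i) / (F_X,i + F_Y,i)."""
    f_x = np.asarray(f_x, dtype=float)
    f_y = np.asarray(f_y, dtype=float)
    return (f_x - f_y) / (f_x + f_y)

def group_asymmetry(a, n):
    """Group n adjacent asymmetry points and return the per-group mean
    and standard error of the mean."""
    m = len(a) // n
    g = np.asarray(a[:m * n]).reshape(m, n)
    return g.mean(axis=1), g.std(axis=1, ddof=1) / np.sqrt(n)

a = asymmetry([120, 80, 110], [80, 120, 90])   # illustrative counts
means, sems = group_asymmetry(np.tile(a, 10), n=5)
```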
This produced an asymmetry offset as per equation~\ref{eq:asym_bins} and hence a significant $\Phi^{\rm nr}$ associated with the polarisation-dependent background. However, we did not consider $\Phi^{\rm nr}$ to be a crucial physical or diagnostic quantity. We found that this methodology produced accurate estimates of the uncertainties of quantities computed from the measured asymmetry, as verified by $\chi^2$ analysis of measurements of $\Phi^{\N\E}$. We also found that none of the phase channels of interest changed significantly depending on whether a polarisation-dependent $B(t^{\prime})$ was used. \subsection{Computing Contrast and Phase} To compute the measured phase $\Phi$ we must also measure the fringe contrast $\mathcal{C}$ and relative laser polarisation angle $\theta=\theta_{\rm read}-\theta_{\rm prep}$, as described in section~\ref{sec:Measurement_scheme}. The $\hat{X}$ and $\hat{Y}$ laser polarisations were set by a $\lambda/2$ waveplate and were determined absolutely by auxiliary polarimetry measurements \cite{Hess2014}. The contrast, defined as either $2\mathcal{C}=-\partial\A/\partial\theta$ or $2\mathcal{C}=\partial\A/\partial\phi$\footnote{Recall that in practice we consider $\mathcal{C}$ as an unsigned quantity for the purposes of data analysis.}, can be determined by dithering either the accumulated phase $\phi$ (by varying $\mathcal{B}_z$) or the relative laser polarisation angle $\theta$. We chose the latter as it could be changed quickly ($<1$~s) by rotating a half-wave plate with a stepper-motor-driven rotation stage. Figure~\ref{fig:fringe} shows the asymmetry as a function of $\theta$, for a range of values of applied magnetic field.
We ran the experiment at the steepest part of the asymmetry fringe (where $\theta=\theta^{\rm nr}$) and measured the contrast, $\mathcal{C}_j$, for each asymmetry group, $\bar{\A}_j$, by switching $\theta$ between two angles, $\theta=\theta^{\rm nr}+\Delta\theta\tilde{\theta}$, for $\tilde{\theta}=\pm1$ and $\Delta\theta=0.05$~rad: \begin{equation} \label{eq:contrast_1} \mathcal{C}_j=-\frac{\bar{\A}_{j}(\tilde{\theta}=+1)-\bar{\A}_{j}(\tilde{\theta}=-1)}{4\Delta\theta}. \end{equation} \begin{figure}[!ht] \centering \includegraphics[width=12cm]{pol_fringe.pdf} \caption{Asymmetry vs.\ relative laser polarisation angle $\theta=\theta_{\rm read}-\theta_{\rm prep}$ for several magnetic field values. The value of $\theta$ was dithered about the value $\theta^{\rm nr}$ by $\pm\Delta\theta=\pm0.05$~rad to measure fringe contrast, $\mathcal{C}$. To stay on the steepest part of the fringe, we chose $\theta^{\rm nr}=0$ rad for $\B=\pm20$~mG and $\theta^{\rm nr}=\pi/4$ rad for $\B=\pm1,\pm40$~mG. For these data $|\mathcal{C}|<90\%$ due to low preparation laser power; typically, however, $|\mathcal{C}|\approx 95\%$. Solid lines represent the expected behaviour for a given magnetic field and contrast.} \label{fig:fringe} \end{figure} Because the fringe contrast was fairly constant over the duration of the molecule pulse (figure~\ref{fig:contrast}A), we used a weighted average\footnote{Each $\mathcal{C}_j$ measurement is weighted by its computed uncertainty.} of all $\mathcal{C}_j$ measurements within the cut region for that trace to extract the accumulated phase. We also performed the analysis by fitting $\mathcal{C}_j$ to a 2nd-order polynomial as a function of time after ablation; this led to a better fit to the data, but had no significant effect on the results. We typically found $|\mathcal{C}|\approx95$\%.
We believe that this was limited by a number of effects including imperfect state preparation/readout, decay from the $C$ state back to the $H$ state, and dispersion in the spin precession. We also observed that this value was constant over a $\pm2\pi\times1$~MHz detuning range of the state preparation laser (figure~\ref{fig:contrast}B), indicating complete optical pumping over this frequency range. Recall that, as defined, $\C$ can be positive or negative, depending on the sign of the asymmetry fringe slope (see figure~\ref{fig:fringe}, or equation~\ref{eq:contrast_1}). Given that we worked near zero asymmetry where the fringe slope was steepest, and that $\theta^{\rm nr}$ was always chosen to be 0 or $\pi/4$, we computed the total accumulated phase as \begin{equation} \label{eq:phase_1} \Phi_j=\frac{\bar{\A}_j(\tilde{\theta}=+1)+\bar{\A}_j(\tilde{\theta}=-1)}{4\C}+q\frac{\pi}{4}. \end{equation} Here, $q=0,\pm1$ or $\pm2$, corresponds to applied magnetic fields of $\pm1$, $\pm20$, and $\pm40$~mG, respectively. We chose to apply a small magnetic field, $\B=1$~mG, when operating at $q=0$, rather than turning off the magnetic field completely, so that we would not need to change the experimental switch sequence or data analysis routine for data taken under this condition. Figure~\ref{fig:fringe} illustrates the correspondence between $\theta^{\rm nr}$ and applied magnetic field needed to remain on the steepest part of the asymmetry fringe. \begin{figure}[!htbp] \centering \includegraphics[trim=7mm 0mm 0mm 0mm,scale=0.6]{contrast_combined.pdf} \caption{(A) Contrast vs time after ablation, averaged over 64~traces. The signal threshold window is indicated by dashed lines (cf.\ figure~\ref{fig:modulation}). (B) Contrast vs preparation laser detuning. Error bars were computed as the standard error associated with 64 averaged traces.
The solid line is a fit of the form $\mathcal{C}=a\times{\rm tanh}(b\gamma_C^2/(4\Delta_{\rm prep}^2+\gamma_C^2))$, motivated by solution of a classical rate equation.} \label{fig:contrast} \end{figure} \subsubsection{Accounting for Correlated Contrast} \label{sec:corr_contrast} \hspace*{\fill} \\ It was possible for the magnitude of the contrast $|\C|$ to vary between different experimental states. For example, if the state preparation laser detuning or fluorescence signal background were correlated with any of the block switches $\Nsw$, $\Esw$, or $\Bsw$, then contrast would also be correlated with those switches. As described in section~\ref{sec:systematics}, we observed both $\Nsw$- and $\Nsw\Esw$-correlated contrast. The latter was particularly troubling since it could lead to a systematic offset in the measured eEDM if not properly accounted for: since $\A=-\C\cos[2(\phi-\theta)]$, a nonzero $\A^{\mathcal{NE}}$ could occur due to either $\C^{\N\E}$ or $\phi^{\N\E}$. We accounted for contrast correlations by calculating $\C$ separately for each combination of $\Nsw$, $\Esw$, and $\Bsw$ experimental states (`state-averaged' contrast\footnote{Since there were $2^3=8$ different $\Nsw$, $\Esw$, and $\Bsw$ states in each 64-trace block, 64/8 = 8 traces were averaged together to determine the contrast for each experimental state.}): \begin{equation} \label{eq:contrast_2} \C_j(\Nsw,\Esw,\Bsw)=-\frac{\bar{\A}_j(\tilde{\theta}=+1,\Nsw,\Esw,\Bsw)-\bar{\A}_j(\tilde{\theta}=-1,\Nsw,\Esw,\Bsw)}{4\Delta \theta}. \end{equation} As previously discussed, we averaged or applied a quadratic fit to all $\C_j(\Nsw,\Esw,\Bsw)$ within a molecule pulse to compute $\bar{\C}_j(\Nsw,\Esw,\Bsw)$.
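The state-averaged contrast bookkeeping of equation~\ref{eq:contrast_2} can be sketched as a table keyed by switch state. The sketch below is illustrative: the input layout, the injected $\Nsw\Esw$-correlated contrast, and the numerical values are all hypothetical.

```python
from itertools import product

DTHETA = 0.05  # rad, dither amplitude

def state_averaged_contrast(a, dtheta=DTHETA):
    """Contrast computed separately for each (N, E, B) switch state;
    a[(n, e, b, tt)] is the grouped asymmetry measured in that state
    with theta dither tt = +1 or -1."""
    return {
        (n, e, b): -(a[(n, e, b, +1)] - a[(n, e, b, -1)]) / (4 * dtheta)
        for n, e, b in product((+1, -1), repeat=3)
    }

# hypothetical input: a small N*E-correlated contrast, the case the text
# flags as potentially leaking into the eEDM channel if ignored
a = {}
for n, e, b in product((+1, -1), repeat=3):
    c = 0.95 + 0.01 * n * e
    for tt in (+1, -1):
        # construct asymmetries so the dither formula recovers c exactly
        a[(n, e, b, tt)] = -2 * DTHETA * c * tt

contrasts = state_averaged_contrast(a)
```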
The precession phase was calculated from each state-specific asymmetry and contrast measurement (cf.\ equation~\ref{eq:phase_1}): \begin{equation} \label{eq:phase_2} \Phi_j(\Nsw,\Esw,\Bsw)=\frac{\bar{\A}_j(\Nsw,\Esw,\Bsw)}{2\bar{\C}_j(\Nsw,\Esw,\Bsw)}+q \frac{\pi}{4}, \end{equation} where \begin{equation} \label{eq:asymmetry_avg} \bar{\A}_j(\Nsw,\Esw,\Bsw)=\frac{\bar{\A}_j(\tilde{\theta}=+1,\Nsw,\Esw,\Bsw)+\bar{\A}_j(\tilde{\theta}=-1,\Nsw,\Esw,\Bsw)}{2} \end{equation} is the average asymmetry over the two $\tilde{\theta}$ states in a data block that share identical values of $\Nsw$, $\Esw$ and $\Bsw$. By construction, phases computed from state-averaged contrast are immune to contrast correlations. We also computed phases by ignoring contrast correlations (i.e. treating contrast as independent of $\Nsw,\, \Esw,\, \Bsw$) and the result did not change significantly. \subsubsection{Computing Phase and Frequency Correlations} \label{sec:compute_phase} \hspace*{\fill} \\ After extracting the measured phase $\Phi_j(\Nsw,\Esw,\Bsw)$, we performed the basis change described in equation \ref{eq:general_parity}, from this experiment switch \emph{state} basis to the experiment switch \emph{parity} basis, denoted by $\Phi^p_j$, where $p$ is a placeholder for a given experiment switch parity. We observed that the molecule beam forward velocity, and hence the spin precession time $\tau_j$, fluctuated by up to 10\% over a 10 minute time period. Since $\B_z$ and $g_1$ are known from auxiliary measurements to a precision of around 1\%, we were able to extract $\tau_j$ from each block from the Zeeman precession phase measurement, $\Phi^\B_j=-\mu_{\rm B}g_1\B_z\tau_j$ (see section~\ref{sec:Measurement_scheme_more_detail}). Velocity dispersion caused $\tau_j$ to vary across the molecule pulse with a nominally linear dependence on time after ablation, $t$; however, we observed significant deviations from linearity.
Thus, we fit $\tau_j$ to a 3rd-order polynomial in $t$ in order to evaluate $\bar{\tau}_j$. Then, we evaluated the measured spin precession frequencies defined as \begin{equation} \omega^p_j=\Phi^p_j/\bar{\tau}_j, \label{eq:omega_def} \end{equation} for all phase channels $p$ (see equation \ref{eq:phase_parity} for definition). We extracted the eEDM from $\omega_j^{\N\E}$, which in the absence of systematic errors would be given by $\omega^{\N\E}=-d_e\Eeff$ independent of $j$. From here on we will drop the $j$ subscript that denotes a grouping of $n$ adjacent asymmetry points about a particular time after ablation $t_j$; it is implicit that independent phase measurements were computed from many separate groups of data, each with different values of $t_j$ across the duration of the molecule pulse. At the end of the analysis, and whenever it was convenient to do so, we implicitly performed weighted averaging across the $j$ subscript. \begin{figure}[htbp] \centering \includegraphics[scale=0.9]{omega_nb_vs_eb.pdf} \caption{The difference between magnetic moments of the two $\Omega$-doublet levels as measured by $\omega^{\N\B}$. As expected, this phase component scales linearly with $\E$ and $\B_z$. The constant of proportionality is $\eta \mu_{\rm B}$. Reproduced with permission from \cite{Petrov2014}.} \label{fig:delta_g} \end{figure} Other phase channels could be used to search for and monitor systematic errors, discussed in detail in section~\ref{sec:systematics}, or to measure properties of ThO, as is the case with $\omega^{\N\B}$. We discuss the latter case here. This channel provided a measure of $\Delta g$, the magnetic moment difference between upper and lower $\Nsw$-levels, arising from perturbations due to other electronic and rotational states \cite{Bickman2009,Petrov2014}.
Because this difference limits the extent to which the $\Nsw$ reversal can suppress certain systematic errors \cite{Vutha2010}, it is an important quantity both in our experiment and in other experiments measuring eEDMs in molecules with $\Omega$-doublet structure \cite{JILAEDM}. Figure~\ref{fig:delta_g} illustrates an observed linear dependence $\Delta g/2=\eta \E$, as predicted \cite{Bickman2009,Petrov2014}. Since $\E$ and $\B_z$ are precisely known from auxiliary measurements, the constant $\eta$ can be directly calculated from our angular frequency measurements: \begin{equation} \label{eq:eta_2} \eta=-\frac{\omega^{\N\B}}{\mu_{\rm B}\E\B_z}. \end{equation} Our measured value of $\eta=-0.79\pm0.01~\rm{nm}/\rm{V}$ was approximately half of what one would compute using the methods developed to understand the effect in the PbO molecule \cite{Hamilton2010,Bickman2009}. This discrepancy was subsequently understood as being primarily due to coupling to other fine-structure components in the $^3\Delta$ manifold \cite{Petrov2014,HutzlerThesis}. The $\omega^{\N\B}$ channel illustrates the importance of understanding phase channels besides that corresponding to the eEDM. \subsection{Data Cuts} \label{sec:data_cuts} Three data cuts were applied as part of the analysis: fluorescence rate threshold (see section~\ref{sec:signal_asymmetry}), polarisation bin (see below), and contrast threshold (see below). These cuts ensured that we only used data taken under appropriate experimental conditions (e.g.\ only when lasers remained locked) and thus guaranteed a high signal-to-noise ratio for the data used to extract the eEDM value. We thoroughly investigated how each of these cuts affected the calculated eEDM mean and uncertainty. As previously mentioned, a fluorescence threshold cut of about $F_{\rm{cut}}=3\times10^5$~s$^{-1}$ was applied to each trace (average of 25 molecule pulses) to ensure that the fluorescence rate would always be larger than the background rate.
This threshold was chosen to include the maximum number of asymmetry points in our measurement while also excluding low signal-to-noise asymmetry measurements that would increase the overall eEDM uncertainty, as described below. We also removed entire blocks (complete sets of $\Nsw$,$\Esw$,$\Bsw$,$\tilde{\theta}$) of data from the analysis if any of the block's experiment states had $\lesssim0.5$~ms of fluorescence data above $F_{\rm{cut}}$. The count rates of uncorrelated fluorescence photoelectrons exhibit Poissonian statistics. In each block we averaged together four traces with the same experimental configuration. After such averaging, the number of detected photoelectrons within a pair of laser polarisation bins was ${\gtrsim}50$, which was large enough that the photoelectron number distribution closely resembled a normal distribution. Because the asymmetry was defined as a ratio of two approximately normally distributed random variables ($F_X-F_Y$ and $F_X+F_Y$), its distribution was not necessarily normal. Rather, it approached a normal distribution in the limit of large $F_X+F_Y$ \cite{HutzlerThesis}. The same followed for all quantities computed from the asymmetry, including the eEDM. The fluorescence threshold cut therefore ensured that the distribution of eEDM measurements was very nearly normally distributed. Including low-signal data would have caused the distribution to deviate from normal and increase the overall uncertainty. To check that this signal size cut did not lead to a systematic error in our determination of $d_e$, the eEDM mean and uncertainty were calculated for multiple $F_{\rm{cut}}$ values, as shown in figure~\ref{fig:cuts}. If the cut was increased above $6\times10^5$~s$^{-1}$ the mean value was seen to move slightly (but within the computed uncertainties), and the uncertainty to increase. However, for all plausible values of the cuts the resulting value of $d_e$ was consistent, within uncertainties, with our final stated value. 
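The claim that the asymmetry, a ratio of two approximately normal variables, approaches a normal distribution for large $F_X+F_Y$ can be checked with a short simulation. The parameters below (means of 30 and 20 counts, i.e.\ ${\sim}50$ per polarisation pair) are illustrative only.

```python
import numpy as np

def simulate_asymmetry(mean_x, mean_y, n_trials, rng):
    """Draw Poissonian photoelectron counts F_X, F_Y and form the
    asymmetry ratio (F_X - F_Y) / (F_X + F_Y)."""
    fx = rng.poisson(mean_x, n_trials).astype(float)
    fy = rng.poisson(mean_y, n_trials).astype(float)
    # at ~50 counts per pair, fx + fy = 0 is vanishingly unlikely
    return (fx - fy) / (fx + fy)

rng = np.random.default_rng(0)
a = simulate_asymmetry(30, 20, 200_000, rng)

# sample skewness: a rough gauge of how close to normal the ratio is
skew = np.mean(((a - a.mean()) / a.std()) ** 3)
```

For these count levels the skewness is already small, consistent with the statement that the distribution of eEDM measurements above the threshold cut was very nearly normal.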
\begin{figure}[!htbp] \centering \includegraphics[width=14cm]{edm_vs_cut_new.pdf} \caption{Measured eEDM mean and uncertainty as a function of (A) fluorescence signal threshold and (B) polarisation bin size and position. For the former, a value of $3\times10^5$~s$^{-1}$ was used for the final result. For the latter, the two leftmost data points correspond to the polarisation bins used.} \label{fig:cuts} \end{figure} As described in section~\ref{sec:signal_asymmetry}, data points within a polarisation bin were averaged together when calculating the asymmetry (cf. figure~\ref{fig:modulation}A). These data points were separated by 200~ns. Numbering these points from when the readout laser beam polarisation is switched, we binned points 5--20 or points 0--25, depending on the analysis routine (see section~\ref{ssec:differences_between_data_analysis_routines} below) when reporting our final result. The former choice was made to cut out background signal and overlapping fluorescence between polarisation states while retaining as much of the fluorescence signal as possible whereas the latter was chosen to minimize the statistical uncertainty given the lack of evidence for systematic errors that depended on time within the polarisation switching cycle. As shown in figure~\ref{fig:cuts}, we checked for systematic errors associated with this choice by also using several different polarisation bins to compute the eEDM. The eEDM uncertainty increased, as expected, for polarisation bins that cut out data with significant fluorescence levels, but the mean values were all consistent with each other within their respective uncertainties. In order for a block of data to be included in our final measurement, we also required that each of the 8 $(\Nsw,\Esw,\Bsw)$ experiment states had a measured fringe contrast above 80\%. The primary cause of blocks failing to meet this requirement was the state preparation laser becoming unlocked. 
This cut resulted in less than 1\% of blocks being discarded. If the contrast cut was lowered, or not applied at all, the eEDM mean and uncertainty changed by less than 3\% of our statistical uncertainty. As with the signal threshold, if this cut threshold was increased to 90\%, close to the average value of contrast, $\mathcal{C}$, then a larger fraction of data was neglected and the eEDM uncertainty was seen to increase. For all the cuts discussed, we significantly varied the associated cut and in some cases removed it entirely. The eEDM mean and uncertainty were very robust against significant variation of each of these cuts, and the cuts were chosen before the blind offset applied to the eEDM channel was removed. \subsection{Differences Between Data Analysis Routines} \label{ssec:differences_between_data_analysis_routines} As a systematic error check, we performed three independent analyses of the data. Each routine followed the general analysis method described above, but varied in many small details such as background subtraction method, cut thresholds, numbers of points grouped together to compute asymmetry, polarisation bin choice, etc. The analyses differed in the polynomial order of the fits applied to both the contrast $\mathcal{C}$ and the precession time $\tau$ vs.\ time after ablation $t$. The analyses also differed in the inclusion of a subset of the eEDM data that featured a particularly large unexplained signal in the $\omega^\N$ channel. Each of the three analyses independently computed the eEDM channel and the systematic error in the eEDM channel. The uncertainties for all three routines were nearly identical, and the means agreed to within $\Delta\omega^{\N\E}<3~\rm{mrad/s}$, which is within the statistical uncertainty of the measurement $\delta\omega^{\mathcal{NE}}=\pm4.8$~mrad/s. The eEDM mean and uncertainty were averaged over the three analyses to produce the final result.
\subsection{EDM Mean and Statistical Uncertainty} \begin{figure}[!htbp] \centering \includegraphics[width=15.5cm]{EDM_statistics.pdf} \caption{The data set associated with our reported eEDM limit. \textbf{(A)} Variations in the extracted eEDM as a function of position within the molecular pulse. \textbf{(B)} Over 10,000 blocks of data were taken over a combined period of about two weeks. \textbf{(C)}-\textbf{(D)} The distribution of $\sim$200,000 separate eEDM measurements (black) matches very well with a Gaussian fit (red). The same data are plotted on both linear and log scales. In these histograms the mean of each individual measurement was normalized to its corresponding error bar.} \label{fig:statistics} \end{figure} The final data set used to report our result is shown in figure~\ref{fig:statistics}. It consisted of ${\sim}10^4$ blocks of data taken over the course of $\sim$2~weeks (figure~\ref{fig:statistics}B); each block contained ${\approx}20$ separate eEDM measurements distributed over the duration of the molecule pulse (figure~\ref{fig:statistics}A). All ${\approx}2\times10^5$~measurements were combined with standard Gaussian error propagation to obtain the reported mean and uncertainty. Figure~\ref{fig:statistics}C,D shows histograms of all measurements on a linear (C) and log (D) scale, showing that the distribution agrees extremely well with a Gaussian fit. The resulting uncertainty was about 1.2 times that expected from the photoelectron shot-noise limit, taking into account the photoelectron rate from molecule fluorescence, background light, and PMT dark current. When the eEDM measurements were fit to a constant value, the reduced $\chi^2$ was $0.996\pm0.006$, where this uncertainty represents the $1\sigma$ width of the $\chi^2$ distribution for the appropriate number of degrees of freedom.
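The combination of the individual measurements and the consistency check just described amount to standard inverse-variance weighting plus a reduced-$\chi^2$ test against a constant. A minimal sketch on synthetic data (the uncertainty distribution is illustrative):

```python
import numpy as np

def weighted_mean(x, sigma):
    """Inverse-variance weighted mean and its propagated uncertainty."""
    w = 1.0 / np.asarray(sigma) ** 2
    mean = np.sum(w * np.asarray(x)) / np.sum(w)
    return mean, 1.0 / np.sqrt(np.sum(w))

def reduced_chi2(x, sigma, mu):
    """Reduced chi^2 of the measurements against a constant value mu."""
    r = (np.asarray(x) - mu) / np.asarray(sigma)
    return np.sum(r ** 2) / (len(r) - 1)

rng = np.random.default_rng(1)
sigma = rng.uniform(0.5, 2.0, 200_000)   # per-measurement uncertainties
x = rng.normal(0.0, sigma)               # data consistent with a null result
mu, dmu = weighted_mean(x, sigma)
chi2 = reduced_chi2(x, sigma, mu)
```

With ${\sim}2\times10^5$ consistent measurements, the reduced $\chi^2$ comes out close to unity, as in the reported $0.996\pm0.006$.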
\begin{figure}[htbp] \centering \includegraphics[width=10cm]{edm_stats_v3.pdf} \caption{Measured $\omega^{\mathcal{NE}}$ values grouped by the states of $\left|\mathcal{B}_{z}\right|,$~$\left|\mathcal{E}_{z}\right|$, $\hat{k}\cdot\hat{z}$, and each superblock switch, before systematic error corrections. Reproduced with permission from \cite{Baron2014}.} \label{fig:edm_vs_superblock} \end{figure} When computing the eEDM result, data from superblocks were averaged together. The mean could be either weighted or unweighted by the statistical uncertainty in each superblock state. Weighted averaging minimized the resulting statistical uncertainty, but unweighted averaging could suppress systematic errors that have well-defined superblock parity from entering into the extracted value for $\omega^{\N\E}$. Due to molecule number fluctuations, each block of data had a different associated uncertainty. However, roughly equal amounts of data were gathered for the $2^4$ superblock states defined by the state readout parity $\Psw$, field plate lead configuration $\Lsw$, state readout laser polarisation $\Rsw$, and global laser polarisation $\Gsw$. For the reported eEDM value, unweighted averaging (or to be precise, performing the basis change prescribed by equation \ref{eq:general_parity}) was used to combine data from the different $\Psw$, $\Rsw$, $\Lsw$, $\Gsw$ experiment states, since there were known systematic errors with well-defined superblock parity that were suppressed by these switches (see, for example, sections \ref{sssec:stark_interference_between_E1_and_M1_transition_amplitudes} and \ref{ssec:asymmetry_effects}). Note, however, that figure~\ref{fig:edm_vs_superblock} shows that these systematic errors produced no significant eEDM shift, and that the overall uncertainty was comparable (within 10\%) when the data was combined with weighted or unweighted averaging. 
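The basis change from switch states to switch parities (equation~\ref{eq:general_parity}), equivalent here to unweighted averaging over the superblock states, can be sketched as follows. The data layout and injected correlation are hypothetical; only the four superblock switches are used as an example.

```python
import numpy as np
from itertools import product, combinations

SWITCHES = ("P", "L", "R", "G")  # superblock switches

def parity_components(values):
    """Transform measurements indexed by switch state into the switch
    *parity* basis: the component for a parity subset S is the average
    over all 2^4 states of (product of switch signs in S) * value."""
    states = list(product((+1, -1), repeat=len(SWITCHES)))
    comps = {}
    for r in range(len(SWITCHES) + 1):
        for subset in combinations(range(len(SWITCHES)), r):
            comps[subset] = np.mean(
                [np.prod([s[i] for i in subset]) * values[s] for s in states])
    return comps

# a quantity that is purely P*L-correlated should appear only in the
# (P, L) parity component and cancel everywhere else
values = {s: 5.0 * s[0] * s[1] for s in product((+1, -1), repeat=4)}
comps = parity_components(values)
```

This illustrates why a systematic error with well-defined superblock parity is suppressed in the non-matching channels by the unweighted combination.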
Unequal amounts of data were collected for the $\B_z$, $\E$, and $\hat{k}\cdot\hat{z}$ experimental states. For example, 40\% (60\%) of data were gathered with the state preparation and readout laser beams pointing east (west), $\hat{k}\cdot\hat{z}=-1 (+1)$. To account for this, we performed state-by-state analysis of the systematic errors: the primary systematic errors (described in section \ref{sssec:correlated_laser_parameters}) were allowed to depend on the magnitude of the magnetic field (though the $\B_z=1$ and $40$~mG data were grouped together) and on the pointing direction, and separate systematic error subtractions were performed for each ($\B_z$, $\hat{k}\cdot\hat{z}$) state. After this subtraction, the systematic uncertainties were added in quadrature with the statistical uncertainties for each state, and the data from each state were averaged together, weighted by the resulting combined statistical and systematic uncertainties. The reported statistical uncertainty was obtained via the method above assuming no systematic uncertainty. The reported systematic uncertainty was defined such that the quadrature sum of the reported statistical and systematic uncertainties gave the same value as when incorporating the state-by-state analysis. A description of the methods used to evaluate the systematic error and the systematic uncertainty in the measurement is provided in section \ref{ssec:total_systematic_error_budget}. To prevent experimental bias we performed a blind analysis by adding an unknown offset to the mean of the eEDM channel, $\omega^{\mathcal{NE}}$. The offset was randomly generated in software from a Gaussian distribution with standard deviation $\sigma=150$~mrad/s and mean zero. The mean, statistical error, procedure for calculating the systematic error, and procedure for computing the reported confidence interval were all determined before revealing and subtracting the blind offset.
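The blinding procedure can be sketched as below: a single hidden offset is generated once, added to every reported value of the channel, and subtracted only after all procedures are frozen. The seed and measured value are hypothetical placeholders.

```python
import numpy as np

BLIND_SIGMA = 0.150  # rad/s, standard deviation of the blind offset

def make_blind(seed):
    """Draw the blind offset once from N(0, BLIND_SIGMA); in the real
    analysis this value stayed hidden until unblinding."""
    return np.random.default_rng(seed).normal(0.0, BLIND_SIGMA)

blind = make_blind(seed=1234)              # hypothetical seed
omega_ne_true = 0.0026                     # hypothetical value, rad/s
omega_ne_blinded = omega_ne_true + blind   # what the analysts see
omega_ne_final = omega_ne_blinded - blind  # unblinding at the very end
```

Because the same offset is added and later subtracted, the final value is unaffected, while all intermediate decisions are made without knowledge of the true mean.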
\subsection{Confidence Intervals} \label{ssec:confidence_intervals} A classical (i.e.\ frequentist) confidence interval \cite{Riley2006} is a natural choice for reporting the result of an eEDM measurement. For repeated and possibly different experiments measuring the eEDM, the frequency with which the confidence intervals include or exclude the value $\de=0$ suggests whether the results are consistent or inconsistent, respectively, with the Standard Model. Furthermore, the confidence level (C.L.) represents an objective measure of the \emph{a priori} probability that the confidence interval assigned to any one of these measurements, selected at random, includes the unknown true value of the eEDM $d_{e,{\rm true}}$. Since no statistically significant eEDM has yet been observed, the recent custom has been for eEDM experiments to report an upper limit at the 90\% C.L. \cite{Regan2002,Hudson2011}. The proper interpretation of such limits is that if the experiment were performed a large number of times, and the confidence interval were \emph{computed in the same way} for each experimental trial, $d_{e,\rm true}$ would fall within the interval 90\% of the time. Feldman and Cousins pointed out that in order for this interpretation to be valid, the confidence interval construction must be independent of the result of the measurement \cite{Feldman1998}. If the procedure for constructing 90\% confidence intervals is chosen contingent upon the measurement outcome, the resulting intervals may `undercover', i.e. fail to include the true value more than 10\% of the time. This happens, for example, if an upper bound is reported whenever the measured result falls within a few standard deviations of zero, and a two-sided confidence interval is reported whenever the measured result is significant at more than a few-sigma level. Feldman and Cousins termed this inconsistent approach `flip-flopping'.
In order to avoid flip-flopping, we chose a confidence interval construction, the Feldman-Cousins method described in reference~\cite{Feldman1998}, that consistently unifies these two limits. We applied this method to a model with Gaussian statistics, in which the measured magnitude of the eEDM channel, $x=|\omega^{\N\E}_{T,{\rm meas}}|$, is sampled from a folded Gaussian distribution \begin{equation}\label{eq:foldednormal} P(x|\mu)=\frac{1}{\sigma\sqrt{2\pi}}\left(\exp\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]+\exp\left[-\frac{(x+\mu)^2}{2\sigma^2}\right]\right), \end{equation} where the location parameter is the unknown true magnitude of the eEDM channel, $\mu=|\omega^{\N\E}_{T,{\rm true}}|$, and the scale parameter $\sigma$ is equal to the quadrature sum of the statistical and systematic uncertainties given in equation~\ref{eq:wNEt_num_err_comb} and at the bottom of table~\ref{tbl:syst_error}. The central idea of the Feldman-Cousins approach is to use an ordering principle which, for each possible value of the parameter of interest $\mu$, ranks each possible measurement outcome $x$ by the `strength' of the evidence it provides that $\mu$ is the true value. The values of $x$ that provide the strongest evidence for each value of $\mu$ are included in the confidence band for that value. In the Feldman-Cousins method, the metric for the strength of evidence is the likelihood of $\mu$ given that $x$ is measured [i.e. $\mathcal{L}(\mu|x) = P(x|\mu)$], divided by the largest probability $x$ can possibly achieve for any value of $\mu$. The denominator in this prescription takes into account the fact that an experimental result that is somewhat improbable under a particular hypothesis can still provide good evidence for that hypothesis if the result is similarly improbable under even the most favorable hypothesis. This approach has its theoretical roots in likelihood ratio testing \cite{Stuart1999}. 
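Equation~\ref{eq:foldednormal} is simple to evaluate numerically; as a sanity check (a sketch, not part of our analysis code), the folded Gaussian should integrate to unity on $x\in[0,\infty)$ for any $\mu$:

```python
import math

def folded_gauss(x, mu, sigma=1.0):
    """P(x|mu) for the folded Gaussian, defined on x >= 0."""
    norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return norm * (math.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))
                   + math.exp(-(x + mu) ** 2 / (2.0 * sigma ** 2)))

# Trapezoidal normalisation check on [0, 10*sigma], with mu = 0.46*sigma
mu, dx = 0.46, 1e-3
total = sum(folded_gauss(i * dx, mu) for i in range(1, 10000)) * dx
total += 0.5 * (folded_gauss(0.0, mu) + folded_gauss(10.0, mu)) * dx
```

The tail beyond $10\sigma$ is negligible, so `total` is unity to well within the quadrature error.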
Our specific procedure for computing confidence intervals was a numerical calculation performed using the following recipe (cf.\ figure~\ref{fig:fc_conf_int}): \begin{enumerate} \item{Construct the confidence bands on a Cartesian plane, of which the horizontal axis represents the possible values of $x$ and the vertical axis the possible values of $\mu$. Divide the plane into a fine grid with $x$-intervals of width $\Delta_x$ and $\mu$-intervals of height $\Delta_{\mu}$. We will consider only the discrete possible values $x_i = i \Delta_x$ and $\mu_j = j \Delta_{\mu}$, where the index $i$($j$) runs from $0$ to $n_x$($n_{\mu}$).}\label{it:setup} \item{For all values of $i$, maximize $P(x_i|\mu_j)$ with respect to $\mu_j$. Label the maximum points $\mu^{\mathrm{max},i}$.}\label{it:ymax} \item{For some value of $j$, say $j=0$, compute the likelihood ratio $R(x_i) = P(x_i|\mu_j)/P(x_i|\mu^{\mathrm{max},i})$ for every value of $i$.}\label{it:R} \item{Construct the `horizontal acceptance band' at $\mu_j$ by including values of $x_i$ in descending order of $R(x_i)$. Stop adding values when the cumulative probability reaches the desired C.L. of 90\%, i.e., $\displaystyle \sum_{x_i}P(x_i|\mu_j)\Delta_x = 0.9$.}\label{it:horiz} \item{Repeat steps (\ref{it:R})--(\ref{it:horiz}) for all values of $j$.} \item{To determine the reported confidence interval, draw a vertical line on the plot at $x = |\omega^{\N\E}_{T,{\rm meas}}|$. The 90\% confidence interval is the region where the line intersects the constructed confidence band.} \end{enumerate} \begin{figure}[!ht] \centering \includegraphics[width=0.49\textwidth]{FC_conf_int.pdf} \includegraphics[width=0.49\textwidth]{Conf_int_compare.pdf} \caption{Left: Feldman-Cousins confidence bands for a folded Gaussian distribution, constructed as described in the text, for a variety of confidence levels. Each pair of lines indicates the upper and lower bounds of the confidence band associated with each C.L. 
To the left of the $x$-intercepts, the lower bounds are zero. Confidence bands are plotted as a function of the possible measured central values $x$ scaled by the standard deviation $\sigma$, and our result is plotted as a vertical dot-dashed line. The $\mu$-value of the point at which our result line intersects with each of the colored lines gives the upper limit of our measurement at different C.L.'s. Right: Comparison between 90\% confidence intervals computed using three different methods, described in the text. Confidence bands are plotted as a function of the possible measured central values of a quantity $x$ scaled by the standard deviation $\sigma$. Our result, $|\omega^{\N\E}_{T,{\rm meas}}|/\sigma=0.46$, is plotted as a vertical dot-dashed line. The $\mu$-values of the points at which our result line intersects the upper and lower line for each method give the upper and lower bounds of three possible 90\% confidence intervals for our measurement. To avoid invalidating the confidence interval by flip-flopping, our result should be interpreted using the Feldman-Cousins method, which we chose before unblinding.} \label{fig:fc_conf_int} \end{figure} The left-hand plot in figure~\ref{fig:fc_conf_int} was generated using the prescription above at several different C.L.'s. Note that the 90\% confidence intervals switch from upper bounds to two-sided confidence intervals when the value of $|\omega^{\N\E}_{T,{\rm meas}}|$ becomes larger than $1.64 \sigma$. This is the level of statistical significance required to exclude the value $\de = 0$ from a 90\% C.L.\ central Gaussian confidence band. From equation~(\ref{eq:wNEt_num_err_comb}), we find $|\omega^{\N\E}_{T,{\rm meas}}|=0.46\sigma$ with $\sigma=5.79~\rm{mrad}/\rm{s}$. In our confidence interval construction, this corresponds to an upper bound of $|\wNEt|<1.9\sigma=11~\rm{mrad}/\rm{s}$ (90\% C.L.). 
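The recipe above can be implemented in a few dozen lines. The following sketch (grid ranges and spacings are illustrative choices, not those used in our analysis) reproduces the behaviour described in the text: for a measured value $x=0.46\sigma$, the lower limit is zero and the upper limit comes out close to $1.9\sigma$, in units where $\sigma=1$:

```python
import math

def pdf(x, mu, s=1.0):
    """Folded Gaussian P(x|mu) for x >= 0 (equation in the text)."""
    n = 1.0 / (s * math.sqrt(2.0 * math.pi))
    return n * (math.exp(-(x - mu) ** 2 / (2 * s * s))
                + math.exp(-(x + mu) ** 2 / (2 * s * s)))

dx, dmu = 0.02, 0.02                  # step 1: grid spacings (illustrative)
xs = [i * dx for i in range(400)]     # x in [0, 8)
mus = [j * dmu for j in range(250)]   # mu in [0, 5)

# Step 2: for each x, the likelihood maximised over mu on the grid
pmax = [max(pdf(x, mu) for mu in mus) for x in xs]

def acceptance(mu):
    """Steps 3-4: 90% acceptance region in x at fixed mu, filled in
    descending order of the likelihood ratio R = P(x|mu)/P(x|mu_max)."""
    ranked = sorted(range(len(xs)),
                    key=lambda i: pdf(xs[i], mu) / pmax[i], reverse=True)
    prob, accepted = 0.0, []
    for i in ranked:
        accepted.append(i)
        prob += pdf(xs[i], mu) * dx
        if prob >= 0.9:
            break
    return min(accepted), max(accepted)

# Step 6: the confidence interval is the set of mu whose band contains x_meas
x_meas = 0.46                         # |omega^NE_T,meas| / sigma
i_meas = round(x_meas / dx)
band = [mu for mu in mus
        if acceptance(mu)[0] <= i_meas <= acceptance(mu)[1]]
lo, hi = min(band), max(band)         # expect lo = 0, hi near 1.9
```

Increasing `x_meas` past $1.64\sigma$ makes `lo` lift off zero, reproducing the transition from upper bounds to two-sided intervals noted above.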
A comparison between three different 90\% confidence interval constructions for small values of $\mu$ is shown in the right-hand plot of figure~\ref{fig:fc_conf_int}. The black dashed lines represent the central confidence band for the signed values (rather than the magnitude) of $\mu$ and $x$, where $\mu$ is the mean of a Gaussian probability distribution in $x$. The blue lines give an upper bound constructed by computing the value of $\mu$ such that the cumulative distribution function for the folded Gaussian in equation~\ref{eq:foldednormal} is equal to $0.9$ for each value of $x$. It should be noted that this upper bound is more conservative than a true classical 90\% confidence band, as it overcovers for small values of $\mu$ (e.g., if the true value were $\mu_{\rm true} < 1.64 \sigma$, the confidence intervals of 100\% of experimental results would include $\mu_{\rm true}$). We nevertheless include this construction for comparison because we believe that previous experiments have reported EDM upper bounds using this method \cite{Hudson2011,Griffith2009,Regan2002}. These intervals have a valid interpretation as Bayesian `credible intervals' conditioned on a uniform prior for $\mu$ \cite{Feldman1998}. Finally, the red lines represent the Feldman-Cousins approach described here, which unifies upper limits and two-sided intervals. For our measurement outcome, indicated by the vertical dot-dashed line, the Feldman-Cousins intervals yield a $7\%$ larger eEDM limit than the folded Gaussian upper bound would have. \subsection{Physical Quantities} \label{ssec:physical_quantities} Under the most general interpretation, our experiment is sensitive to any $P$- and $T$-violating interaction that produces an energy shift $\wNEt$. 
The eEDM is not the only such predicted interaction for diatomic molecules \cite{Kozlov1995}, and in particular a $P$- and $T$-odd nucleon-electron scalar-pseudoscalar interaction would also manifest as a $\Nsw\Esw$-odd phase in our experiment. Thus, we write \begin{equation} \wNEt=-d_e\Eeff + W_{\rm S}C_{\rm S},\footnote{Note that the sign of the $C_{\rm S}$ term is opposite to that used, incorrectly, in our original paper \cite{Baron2014}. In addition, here $W_{\rm S}$ differs in magnitude from the related quantity $W_{\rm T,P}$ given explicitly in \cite{Denis2016,Skripnikov2016}. A detailed discussion of the sign and notational conventions for this Hamiltonian is provided in \ref{sec:sign_conventions}.} \end{equation} where $W_{\rm S}$ is a (calculated) energy scale specific to the species of study \cite{Skripnikov2013,Dzuba2011a,DzubaErratum2012,Denis2016,Skripnikov2016} and $C_{\rm S}$ is a dimensionless constant characterizing the strength of the $T$-violating nucleon-electron scalar-pseudoscalar coupling relative to the ordinary weak interaction. We can use our measurement to set an upper limit on $\de$ by assuming that $C_{\rm S}=0$ and that $\wNEt$ is therefore entirely attributable to the eEDM. Taking the effective electric field to be the unweighted mean of the two most recent calculations of this quantity \cite{Denis2016,Skripnikov2016}, $\Eeff=78~{\rm GV/cm}$, we can interpret our result in equation~(\ref{eq:wNEt_num_err_comb}) as: \begin{align} \de&=(-2.2\pm4.8)\times10^{-29}~e\cdot{\rm cm}\\ \Rightarrow|\de|&<9.3\times10^{-29}~e\cdot{\rm cm}~(90\%\,{\rm C.L.}), \end{align} where the second line is obtained by appropriately scaling the upper bound on $\wNEt$ derived in section \ref{ssec:confidence_intervals}. If, instead, we assume that $d_e=0$, our measurement of $\wNEt$ in ThO can be restated as a measurement of $C_{\rm S}$. 
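The eEDM conversion above follows from $\hbar\,\wNEt=-\de\Eeff$ in the $C_{\rm S}=0$ limit; a quick numerical check with CODATA constants (a sketch, not our analysis code) reproduces the quoted $1\sigma$ uncertainty to within rounding:

```python
hbar = 1.054571817e-34      # J s (CODATA)
e_charge = 1.602176634e-19  # C  (CODATA)
E_eff = 78e9 * 100.0        # 78 GV/cm expressed in V/m

def omega_to_de(omega):
    """Map a T-odd precession frequency (rad/s) to an eEDM in e*cm,
    assuming C_S = 0 so that hbar * omega^NE_T = -d_e * E_eff."""
    return -hbar * omega / E_eff / e_charge * 100.0   # C*m -> e*cm

sigma_de = abs(omega_to_de(5.79e-3))   # sigma_omega = 5.79 mrad/s from the text
```

With $\sigma_\omega=5.79$~mrad/s this gives $\sigma_{d_e}\approx4.9\times10^{-29}~e\cdot$cm, consistent with the $\pm4.8\times10^{-29}$ quoted above.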
Using an unweighted mean of the most recent calculations of the interaction coefficient, $W_{\rm S} = -2\pi\times 282~{\rm kHz}$ \cite{Denis2016,Skripnikov2016}, we obtain: \begin{align} C_{\rm S}&=(-1.5\pm3.2)\times10^{-9}\\ \Rightarrow|C_{\rm S}|&<6.2\times10^{-9} \:(90\%\,{\rm C.L.}), \end{align} which, at the time, was an order of magnitude smaller than the existing best limit set by the ${}^{199}$Hg EDM experiment \cite{Swallows2013}, and is still a factor of 2 smaller than the recently improved limit from the same group \cite{Heckel2016}. \subsection{Determining Systematic Errors and Uncertainties} \label{ssec:determining_systematic_uncertainty} \begin{table}[tbp] \caption{\label{tbl:syst_check}Parameters varied during our systematic error search. Left: Category I Parameters --- These were ideally zero under normal experimental running conditions and we were able to vary them significantly from zero. For each of these parameters direct measurements or limits were placed on possible systematic errors. Right: Category II Parameters --- These had no single ideal value. Although direct limits on these systematic errors could not be derived, they served as checks for the presence of unanticipated systematic errors. 
See the main text for more details on all the systematic errors referenced.} \begin{minipage}[t]{0.5\textwidth} \begin{tabular}[t]{l} \br Category I Parameters\\ \mr \textbf{Magnetic Fields}\\ - Non-reversing $\B$-field: $\B_{z}^{\rm{nr}}$\\ - Transverse $\B$-fields: $\B_{x},\B_{y}$\\ (both even and odd under $\Bsw$)\\ - $\B$-field gradients: \\ $\frac{\partial\B_{x}}{\partial x},\frac{\partial\B_{y}}{\partial x},\frac{\partial\B_{y}}{\partial y},\frac{\partial\B_{y}}{\partial z},\frac{\partial\B_{z}}{\partial x},\frac{\partial\B_{z}}{\partial z}$\\ (both even and odd under $\Bsw$)\\ - $\Esw$ correlated $\B$-field: $\B^{\E}$ (to simulate\\ $\vec{v}\times\vecE$/geometric phase/leakage current)\\ \textbf{Electric Fields}\\ - Non-reversing $\E$-field: $\E^{\rm{nr}}$\\ - $\E$-field ground offset\\ \textbf{Laser Detunings}\\ - State preparation/readout lasers: $\Delta_{{\rm prep}}^{\rm nr}$, $\Delta_{\rm{read}}^{\rm nr}$\\ - $\Psw$ correlated detuning, $\Delta^{\mathcal{P}}$\\ - $\Nsw$ correlated detunings: $\Delta^{\N}$\\ \textbf{Laser Pointings}\\ - Change in pointing of prep./read lasers\\ - State readout laser $\hat{X}/\hat{Y}$ dependent pointing\\ - $\Nsw$ correlated laser pointing\\ - $\Nsw$ and $\hat{X}/\hat{Y}$ dependent laser pointing\\ \textbf{Laser Powers}\\ - $\Nsw\Esw$ correlated power $\Omega_{\rm r}^{\N\E}$\\ - $\Nsw$ correlated power $\Omega_{\rm r}^{\N}$\\ - $\hat{X}/\hat{Y}$ dependent state readout laser power, $\Omega_{\rm r}^{XY}$\\ \textbf{Laser Polarisation}\\ - Preparation laser ellipticity, $S_{{\rm prep}}$\\ \textbf{Molecular Beam Clipping}\\ - Molecule beam clipping along $\hat{y}$ and $\hat{z}$\\ (changes $\left\langle v_{y}\right\rangle $,$\left\langle v_{z}\right\rangle $,$\left\langle y\right\rangle $,$\left\langle z\right\rangle $ of molecule beam)\\ \br \end{tabular} \end{minipage} \begin{minipage}[t]{0.5\textwidth} \begin{tabular}[t]{l} \br Category II Parameters \\ \mr \textbf{Laser Powers}\\ - Power of prep./read lasers \\ 
\textbf{Experiment Timing}\\ - $\hat{X}$/$\hat{Y}$ polarisation switching rate\\ - Number of molecule pulses averaged \\ per experiment trace\\ \textbf{Analysis}\\ - Signal size cuts, asymmetry size cuts,\\ contrast cuts\\ - Difference between two PMT detectors\\ - Variation with time within molecule pulse\\ (serves to check $v_{x}$ dependence)\\ - Variation with time within polarisation \\ switching cycle\\ - Variation with time throughout the \\ full data set (autocorrelation)\\ - Search for correlations between all channels \\ of phase, contrast and fluorescence signal \\ - Correlations with auxiliary measurements\\ of $\B$-fields, laser powers, vacuum pressure\\ and temperature\\ - 3 independent data analysis routines\\ \br \end{tabular} \end{minipage} \end{table} In total, we varied more than 40 separate parameters during our search for systematic errors (see Table \ref{tbl:syst_check}). These fall into two categories. Category I contains parameters $P$ which are optimally zero; $P\neq0$ represents an experimental imperfection. We were able to use experimental data to put a direct limit on the size of possible systematic errors proportional to these parameters. Category II contains parameters that have no optimum value and which we could vary significantly without affecting the nature of the spin precession measurement. The variation of these parameters could reveal systematic errors and serve as a check that we understood the response of our system to those parameters, but no quantitative bounds on the associated systematic errors were derived. For each Category I parameter $P$, we exaggerated the size of the imperfection by a factor greater than $10$, if possible, relative to the maximum size of the imperfection under normal operating conditions, $\bar{P}$, which was obtained from auxiliary measurements. 
Following previous work \cite{Regan2002,Griffith2009,Hudson2011}, we assumed a linear relationship between $\omega^{\mathcal{NE}}$ and $P$, and extracted the sensitivity of $\omega^{\mathcal{NE}}$ to the parameter $P$, $\partial\omega^{\mathcal{NE}}/\partial P$. The systematic error under normal operating conditions was computed as $\omega_P^{\N\E}=(\partial\omega^{\N\E}/\partial P)\bar{P}$. The statistical uncertainty in the systematic error (henceforth referred to as the systematic uncertainty) $\delta\omega_P^{\N\E}$ was obtained from linear error propagation of uncorrelated random variables, \begin{equation} \delta\omega_P^{\N\E}= \sqrt{\left(\frac{\partial\omega^{\N\E}}{\partial P}^{\:}\delta\bar{P}\right)^{2}+\left(\bar{P}^{\:}\delta\frac{\partial\omega^{\N\E}}{\partial P}\right)^{2}}, \end{equation} where $\delta\bar{P}$ is the uncertainty in $\bar{P}$ and $\delta\partial\omega^{\N\E}/\partial P$ is the uncertainty in $\partial\omega^{\N\E}/\partial P$. For parameters that had been observed to produce statistically significant shifts in $\omega^{\mathcal{NE}}$, such as the non-reversing electric field, $\E^{\rm{nr}}$, we monitored the size of the systematic error throughout the reported data set during \textit{Intentional Parameter Variations} (described in section \ref{sec:Measurement_scheme_more_detail}) and subtracted this quantity from $\omega^{\mathcal{NE}}$ to give a value of the spin precession frequency due to $T$-odd interactions in the H state of ThO, $\wNEt=\omega^{\N\E}-\sum_{P}\omega_P^{\N\E}$. Most Category I parameters did not cause a statistically significant $\omega^{\N\E}_P$ and were not monitored. 
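The propagation of uncertainty above is standard for a product of uncorrelated quantities; a minimal sketch (the slope and imperfection values below are illustrative, not entries from our error budget):

```python
import math

def systematic_shift(slope, slope_err, Pbar, Pbar_err):
    """Systematic shift omega_P = slope * Pbar and its uncertainty,
    propagated assuming the slope and Pbar are uncorrelated."""
    shift = slope * Pbar
    err = math.hypot(slope * Pbar_err, Pbar * slope_err)
    return shift, err

# Illustrative numbers: slope of omega^NE vs P, and P under normal running
shift, err = systematic_shift(slope=0.5, slope_err=0.1, Pbar=2.0, Pbar_err=0.4)
```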
For these parameters, we did not subtract $\omega_P^{\N\E}$ from $\omega^{\mathcal{NE}}$, but rather included an upper limit of $\left[(\omega_P^{\N\E})^{2}+(\delta\omega_P^{\N\E})^{2}\right]^{1/2}$ in the systematic uncertainty on $\wNEt$, or chose to omit this parameter from the systematic error budget altogether based on the criteria described in section \ref{ssec:total_systematic_error_budget}. Where applicable, we also fit higher-order polynomial functions to $\omega^{\mathcal{NE}}$ with respect to $P$ during the systematic error searches. No significant increase in the systematic uncertainty was observed using such fits and hence the contributions to the systematic error budget in Table \ref{tbl:syst_error} were all estimated from linear fits. We note, however, that certain non-linear dependences of $\omega^{\mathcal{NE}}$ on $P$ could lead to underestimates of the systematic uncertainty, for example if $\omega^{\mathcal{NE}}$ has a small (large) nonzero value for large (small) values of $P$. In an effort to avoid this, data were taken over as wide a range of $P$ as possible; it is, however, always possible that such non-linear dependence is present between the parameter values at which we took data. We had no models in which non-linear dependence would arise from variation of the parameters investigated, so we believe the procedure outlined above produced accurate estimates of the systematic errors. \subsection{Systematic Errors Due to Imperfect Laser Polarisations} \label{ssec:systematic_errors_due_to_imperfect_laser_polarizations} The dominant systematic errors in our experiment were due to imperfections in the laser beams used to prepare and read out the molecular state. Non-ideal laser polarisations combined with laser parameters correlated with the expected eEDM signal resulted in three distinct systematic errors which we refer to as the $\E^{\rm{nr}}$, $\Omega_{\rm r}^{\N\E}$, and Stark Interference (S.I.) systematic errors. 
In this section, we model the effects of several types of polarisation imperfections on the measured phase $\Phi$ (sections~\ref{sssec:stark_interference_between_E1_and_M1_transition_amplitudes} and \ref{sssec:AC_stark_shift_phases}) and discuss the correlated laser parameters that couple to these polarisation imperfections to result in systematic errors (section~\ref{sssec:correlated_laser_parameters}). We then discuss how we were able to suppress and quantify the residual systematic errors in the eEDM experiment (sections~\ref{sssec:suppression_of_the_AC_stark_shift_phases} and \ref{sssec:correlated_laser_parameters}). \subsubsection{Idealized Measurement Scheme with Polarisation Offsets} \label{sssec:idealized_measurement_scheme_with_polarization_offsets} As described in section \ref{sec:Measurement_scheme}, the molecules initially enter the state preparation laser beam in an incoherent mixture of the two states $|\pm,\Nsw\rangle$. The bright state $|B(\hat{\epsilon}_{{\rm prep}},\Nsw,\Psw_{{\rm prep}})\rangle$ is then optically pumped away through $|C,\Psw_{{\rm prep}}\rangle$ leaving behind the dark state $|D(\hat{\epsilon}_{{\rm prep}},\Nsw,\Psw_{{\rm prep}})\rangle$ as the initial state for the spin precession. The molecules then undergo spin precession by angle $\phi$ evolving to a final state $|\psi_{f}\rangle=U(\phi)|D(\hat{\epsilon}_{{\rm prep}},\Nsw,\Psw_{{\rm prep}})\rangle$ where $U(\phi)=\sum_{\pm}e^{\mp i\phi}|\pm,\Nsw\rangle\langle\pm,\Nsw|$ is the spin precession operator. The molecules then enter the state readout laser that optically pumps the molecules with alternating polarisations $\hat{\epsilon}_{X}$ and $\hat{\epsilon}_{Y}$ (which are nominally linearly polarised and orthogonal) between $|\pm,\Nsw\rangle$ and $|C,\Psw_{\rm{read}}\rangle$. 
For each polarisation, the optical pumping results in a fluorescence count rate proportional to the projection of the state onto the bright state, $F_{X,Y}=fN_{0}|\langle B(\hat{\epsilon}_{X,Y},\Nsw,\Psw_{\rm{read}})|\psi_{f}\rangle|^{2}$ where $f$ is the photon detection efficiency, and $N_{0}$ is the number of molecules in the addressed $\Nsw$ level. We then compute the asymmetry, $\mathcal{A}=(F_{X}-F_{Y})/(F_{X}+F_{Y})$, dither the linear polarisation angles in the state readout laser beams to evaluate the fringe contrast, $\mathcal{C}=(\partial\mathcal{A}/\partial\phi)/2\approx-(\partial\mathcal{A}/\partial\theta_{\rm{read}})/2$, and extract the measured phase, $\Phi=\mathcal{A}/(2\mathcal{C})+q\pi/4$.\footnote{Recall $q$ is chosen to be an integer which depends on the size of the applied magnetic field.} We then report the result of the measurement in terms of an equivalent phase precession frequency $\omega=\Phi/\tau$ where $\tau\approx1^{\:}\mathrm{ms}$ is the spin precession time, which was measured for each block as described in section~\ref{sec:compute_phase}. Let us first consider the idealized case in which all laser polarisations are exactly linear, $\Theta_{i}=\pi/4$ for each laser $i\in\left\{{\rm prep},X,Y \right\}$, the angle between the state preparation laser polarisation (${\rm prep}$) and state readout basis ($X,Y$) is $\pi/4$, $\theta_{\rm{read}}-\theta_{{\rm prep}}=-\pi/4$, and the accumulated phase is small, $\left|\phi\right|\ll1$ (i.e. no magnetic field is applied). Under these conditions, the measured phase $\Phi$ is equal to the accumulated phase $\phi$. Now consider the effect of adding polarisation offsets $d\vec{\epsilon}_{i}$ to each of the three laser beams such that $\hat{\epsilon}_{i}\rightarrow\hat{\epsilon}_{i}+\kappa d\vec{\epsilon}_{i}$, where $\kappa = 1$ is a perturbation parameter. 
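The idealized phase extraction described above can be sketched with a simple Malus-law signal model. The functional forms below are a simplified stand-in for the full bright/dark-state projections, not our full signal model; they nevertheless reproduce the chain $F_{X,Y}\rightarrow\mathcal{A}\rightarrow\mathcal{C}\rightarrow\Phi$:

```python
import math

def simulate_read(phi, theta_X, f_N0=1000.0):
    """Fluorescence for the two readout polarisations (Malus-law model)."""
    theta_Y = theta_X + math.pi / 2        # nominally orthogonal polarisations
    F_X = f_N0 * math.cos(phi - theta_X) ** 2
    F_Y = f_N0 * math.cos(phi - theta_Y) ** 2
    return F_X, F_Y

def extract_phase(phi, theta_X, dtheta=1e-4):
    """Asymmetry, dithered contrast, and the measured phase A/(2C)."""
    F_X, F_Y = simulate_read(phi, theta_X)
    A = (F_X - F_Y) / (F_X + F_Y)
    Fxp, Fyp = simulate_read(phi, theta_X + dtheta)   # dither the readout angle
    Ap = (Fxp - Fyp) / (Fxp + Fyp)
    C = -(Ap - A) / dtheta / 2             # C ~ -(dA/dtheta_read)/2
    return A / (2 * C)

phi_true = 0.01                  # small accumulated precession phase (rad)
theta_X = math.pi / 4            # readout basis pi/4 from the prepared state
phi_meas = extract_phase(phi_true, theta_X)           # ~ phi_true
```

At this working point the asymmetry is linear in the small phase, so the extracted $\Phi$ tracks the true precession angle.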
It is useful to cast the polarisation imperfections in terms of linear angle imperfections, $\theta_{i}\rightarrow\theta_{i}+\kappa d\theta_{i}$ and ellipticity imperfections, $\Theta_{i}\rightarrow\Theta_{i}+\kappa d\Theta_{i}$ where $S_{i}=-2d\Theta_{i}$ is the laser ellipticity Stokes parameter; these are related by \begin{equation} \frac{\hat{z}\cdot(\hat{\epsilon}_{i}\times d\vec{\epsilon}_{i})}{\hat{\epsilon}_{i}\cdot\hat{\epsilon}_{i}}=d\theta_{i}-id\Theta_{i}.\label{eq:extracting_polarization_imperfection_components} \end{equation} Note that laser polarisations can have a nonzero projection in the $\hat{z}$ direction, but we assume in the discussion above that $\hat{\epsilon}_{i}$ represents a normalized projection of the laser polarisation onto the $xy$ plane.\footnote{The $z$-component of the polarisation can only drive $\Delta M=0$ transitions, which are far off resonance from the state preparation/readout lasers.} With these polarisation imperfections in place, the measured phase $\Phi$ gains additional terms: \begin{equation} \Phi=\phi+\kappa(d\theta_{{\rm prep}}-\frac{1}{2}(d\theta_{X}+d\theta_{Y}))-\kappa^{2}\Psw_{{\rm prep}}\Psw_{\rm{read}}d\Theta_{{\rm prep}}(d\Theta_{X}-d\Theta_{Y})+O\left(\kappa^{3}\right),\label{eq:Measured_Phase_with_Polarization_Imperfections} \end{equation} up to second order in $\kappa$. In the eEDM measurement, we switch between two values of $\Psw\equiv\Psw_{\rm{read}}$, the parity of the excited state addressed during state readout, and we set $\Psw_{{\rm prep}}=+1$, the parity of the excited state addressed during state preparation. It is worth dwelling on equation \ref{eq:Measured_Phase_with_Polarization_Imperfections} for a moment. A rotation of all polarisations by the same angle leaves the measured phase unchanged: $d\theta_{i}\rightarrow d\theta_{i}+d\theta\implies\Phi\rightarrow\Phi$, as expected. 
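The symmetry properties of equation~\ref{eq:Measured_Phase_with_Polarization_Imperfections} are easy to check by encoding it directly (the imperfection values below are illustrative):

```python
def measured_phase(phi, dth_prep, dth_X, dth_Y, dTh_prep, dTh_X, dTh_Y,
                   P_prep=+1, P_read=+1, kappa=1.0):
    """Measured phase Phi with polarisation imperfections, to O(kappa^2)."""
    return (phi
            + kappa * (dth_prep - 0.5 * (dth_X + dth_Y))
            - kappa ** 2 * P_prep * P_read * dTh_prep * (dTh_X - dTh_Y))

base = dict(dth_prep=2e-3, dth_X=-1e-3, dth_Y=3e-3,
            dTh_prep=5e-4, dTh_X=1e-4, dTh_Y=-2e-4)
phi0 = measured_phase(0.01, **base)

# Rotate all linear polarisation angles by a common dtheta
rot = 7e-3
rotated = dict(base, dth_prep=base["dth_prep"] + rot,
               dth_X=base["dth_X"] + rot, dth_Y=base["dth_Y"] + rot)
phi1 = measured_phase(0.01, **rotated)   # equals phi0: global rotations cancel
```

Flipping `P_read` reverses only the ellipticity term, which is how that contribution can be isolated experimentally.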
A deviation in the relative angle between the state preparation and readout beams, $d\theta_{{\rm prep}}\rightarrow d\theta_{{\rm prep}}+d\theta$ and $d\theta_{X,Y}\rightarrow d\theta_{X,Y}-d\theta$, enters into the phase measurement as $\Phi\rightarrow\Phi+2d\theta$, but is benign so long as $d\theta$ is uncorrelated with the expected eEDM signal. The laser ellipticities affect the phase measurement only when the state readout beams differ in ellipticity, and this contribution to the phase can be distinguished from the others by switching the excited state parity, $\Psw$. This last term is particularly interesting because it allows for multiplicative couplings between polarisation imperfections in the state preparation and state readout beams to contribute to the measured phase. Although the polarisation imperfection terms in equation \ref{eq:Measured_Phase_with_Polarization_Imperfections} are uncorrelated with the $\Nsw\Esw$ switch and hence do not contribute to the systematic error, we will see in later sections that additional imperfections can lead to changes in the prepared or read-out molecule state that are equivalent to correlations $d\theta_{i}^{\N\E}$ and $d\Theta_{i}^{\N\E}$. The framework of equation \ref{eq:Measured_Phase_with_Polarization_Imperfections} is useful for understanding how these correlations result in systematic errors in the eEDM measurement extracted from $\Phi^{\N\E}$. \subsubsection{Stark interference between E1 and M1 transition amplitudes} \label{sssec:stark_interference_between_E1_and_M1_transition_amplitudes} In this section we describe in detail how interference between multipole transition amplitudes can lead to a measured phase that mimics an eEDM spin precession phase. We develop a general framework illustrating how such phases depend on laser polarisation and pointing. In an applied electric field, opposite parity levels are mixed, allowing both odd parity (E1, M2,...) and even parity (M1, E2,...) 
electromagnetic multipole amplitudes to contribute when driving an optical transition. These amplitudes depend on the orientation of the electric field relative to the light polarisation $\hat{\epsilon}$ and the laser pointing direction $\hat{k}$. This Stark interference (S.I.) effect forms the basis of precise measurements of weak interactions through parity non-conserving amplitudes in atoms and molecules \cite{Bouchiat1974,Demille2008,Wood1997}. However, it can also generate a systematic error in searches for permanent electric dipole moments which look for spin precession correlated with the orientation of an applied electric field. These Stark interference amplitudes have been calculated and measured for optical transitions in Rb \cite{Chen1994,Hodgdon1991} and Hg \cite{Lamoreaux1992,Loftus2011}, and have been included in the systematic error analysis in the Hg EDM experiment \cite{Griffith2009,Swallows2013}. In this section, we consider Stark interference as a source of systematic errors in the ACME experiment. There are two important differences between molecular and atomic systems. First, molecular states such as the $H^{3}\Delta_{1}$ state in ThO can be highly polarisable and opposite parity states can be completely mixed by the application of a modest laboratory electric field. Second, molecular selection rules can be much weaker than atomic selection rules: the $H^{3}\Delta_{1}\rightarrow C^{3}\Pi_{1}$ transition that we drive is nominally an E1 forbidden spin-flip transition ($\Delta\Sigma=1$, where $\Sigma$ is the projection of the total electron spin $S=1$ onto the internuclear axis), but these states have significant subdominant contributions from other spin-orbit terms \cite{Paulovic2003}, between some of which the E1 transition is allowed. Both of these effects significantly amplify the effect of Stark interference in molecules relative to atoms. In this section we will derive the effect of Stark interference on the measured phase $\Phi$. 
Consider a plane wave vector potential $\vec{A}$ with real amplitude $A_{0}$, oscillating at frequency $\omega$, that is resonant with a molecular optical transition $\left|g\right\rangle \rightarrow\left|e\right\rangle$, with wave vector $\vec{k}=\left(\omega/c\right)\hat{k}$, and complex polarisation $\hat{\epsilon}$: \begin{align} \vec{A}\left(\vec{r},t\right)= & A_{0}\hat{\epsilon}e^{i\vec{k}\cdot\vec{r}-i\omega t}+\mathrm{c.c}. \end{align} The interaction Hamiltonian $H_{\mathrm{int}}$ between this classical light field and the molecular system is given by: \begin{align} H_{\mathrm{int}}\left(t\right)= & -\sum_{a}\frac{e^{a}}{m^{a}}\vec{A}\left(\vec{r}^{\,a},t\right)\cdot\vec{p}^{a} \end{align} where $a$ indexes a sum over all of the particles in the system with charge $e^{a}$, mass $m^{a}$, position $\vec{r}^{\,a}$ and momentum $\vec{p}^{a}$. Typically we apply the multipole expansion on the transition matrix element between states $\left|g\right\rangle $ and $\left|e\right\rangle$; the matrix element can then be written as \begin{equation} \mathcal{M}\equiv\langle e|H_{\mathrm{int}}|g\rangle=iA_{0}\omega_{eg}\sum_{\lambda=1}^{\infty}\langle e|\hat{\epsilon}\cdot\vec{E}_{\lambda}+(\hat{k}\times\hat{\epsilon})\cdot\vec{M}_{\lambda}|g\rangle, \end{equation} where $\vec{E}_{\lambda}$ describes the electric interaction of order $O((\vec{k}\cdot\vec{r})^{\lambda-1})$ and $\vec{M}_{\lambda}$ describes the magnetic interaction of order $O(\alpha(\vec{k}\cdot\vec{r})^{\lambda-1})$ (where $\alpha$ is the fine structure constant) such that \begin{align} \vec{E}_{\lambda}= & \frac{\left(i\right)^{\lambda-1}}{\lambda!}\sum_{a}e^{a}\vec{r}^{\,a}\left(\vec{k}\cdot\vec{r}^{\,a}\right)^{\lambda-1},\\ \vec{M}_{\lambda}= & 
\frac{\left(i\right)^{\lambda-1}}{\left(\lambda-1\right)!}\sum_{a}\left(\frac{e^{a}}{2m^{a}}\right)\left[\left(\vec{k}\cdot\vec{r}^{\,a}\right)^{\lambda-1}\left(\frac{1}{\lambda+1}\vec{L}^{a}+\frac{1}{2}g^{a}\vec{S}^{a}\right)+\left(\frac{1}{\lambda+1}\vec{L}^{a}+\frac{1}{2}g^{a}\vec{S}^{a}\right)\left(\vec{k}\cdot\vec{r}^{\,a}\right)^{\lambda-1}\right],\nonumber \end{align} where $L^{a}$ is the orbital angular momentum, $S^{a}$ is the spin angular momentum, and $g^{a}$ is the spin g-factor for particle of index $a$ (see e.g.\ \cite{Sachs1987}). For typical atomic or molecular optical transitions, if all moments are allowed, we expect the dominant corrections to the leading order E1 transition moment to be on the order of M1/E1 $\sim\alpha\sim10^{-2}$--$10^{-3}$ and E2/E1$\sim ka_{0}\sim10^{-3}$--$10^{-4}$, where $a_0$ is the Bohr radius. In this work we neglect the higher order contributions beyond E2, though the effects may be evaluated by using the expansion above. \begin{center} \begin{table} \centering \begin{tabular}{cccc} \hline \multicolumn{4}{c}{}\tabularnewline \multicolumn{4}{c}{$\left\langle e\left|H_{\mathrm{int}}\left(O^{\lambda}\right)\right|g\right\rangle =iA_{0}\omega_{eg}\left[\hat{\epsilon}_{+1}^{*}\left\langle e\left|T_{+1}^{\lambda}\left(O^{\lambda}\right)\right|g\right\rangle +\hat{\epsilon}_{-1}^{*}\left\langle e\left|T_{-1}^{\lambda}\left(O^{\lambda}\right)\right|g\right\rangle \right]\cdot\vec{V}\left(O^{\lambda}\right)+\dots$}\tabularnewline \tabularnewline \hline \multirow{2}{*}{Term} & Tensor & \multirow{2}{*}{Molecular Operator, $O^{\lambda}$} & \multirow{2}{*}{Light Vector, $\vec{V}\left(O^{\lambda}\right)$}\tabularnewline & rank, $\lambda$ & & \tabularnewline \hline E1 & 1 & $\Sigma_{a}e^{a}r_{i}^{a}$ & $\hat{\epsilon}$\tabularnewline M1 & 1 & $\Sigma_{a}\frac{e^{a}}{2m^{a}}\left(L_{i}^{a}+g^{a}S_{i}^{a}\right)$ & $\hat{k}\times\hat{\epsilon}$\tabularnewline E2 & 2 & $\frac{\omega}{2c}\sum_{a}e^{a}r_{i}^{a}r_{j}^{a}$ & 
$\frac{i}{\sqrt{2}}\left[\hat{\epsilon}(\hat{k}\cdot\hat{z})+\hat{k}(\hat{\epsilon}\cdot\hat{z})\right]$\tabularnewline M2 & 2 & $\frac{\omega}{c}\Sigma_{a}\frac{e^{a}}{2m^{a}}\left\{ r_{i}^{a},\frac{1}{3}L_{j}^{a}+\frac{1}{2}g^{a}S_{j}\right\} $ & $\frac{i}{\sqrt{2}}\left[\hat{k}((\hat{k}\times\hat{\epsilon})\cdot\hat{z})+(\hat{k}\times\hat{\epsilon})(\hat{k}\cdot\hat{z})\right]$\tabularnewline \hline \end{tabular} \par \protect\caption{Only spherical tensor operators $T_{q}^{\lambda}$ with projection $q=\pm1$ contribute to the $\left|H\right\rangle \rightarrow\left|C\right\rangle$ transition amplitude. With this simplifying assumption, we can write the matrix element for each multipole operator in the form shown at the top of this table, which factors the molecule properties and the light properties (where $\hat{\epsilon}_{\pm}=\mp\left(\hat{x}\pm i\hat{y}\right)/\sqrt{2}$ are the spherical basis vectors, and $\hat{z}$ is the direction of the electric field). Here, the molecular operators $O^{\lambda}$ and the corresponding light vectors $\vec{V}\left(O^{\lambda}\right)$ are listed for the E1, M1, E2, and M2 operators.} \label{tab:spher_tens} \end{table} \par\end{center} During the state preparation and readout of the molecule state, transitions are driven between the state $|g\rangle=\sum_{\pm}d_{\pm}|\pm,\Nsw\rangle$ and $|e\rangle=|C,\Psw\rangle$, where $d_{\pm}$ are state amplitudes that denote the particular superposition in $\left|H\right\rangle $ that is being interrogated. The particular $d_{\pm}$ combination that results in $\mathcal{M}=0$ describes the state that is dark, and the orthogonal state is bright and is optically pumped away. It is convenient to expand the Hamiltonian $H_{\mathrm{int}}$ in terms of spherical tensor operators. Furthermore, the laser is only resonant with $\Delta M=\pm1$ transitions, so the spherical tensor operators with angular momentum projections other than $\pm1$ can be reasonably omitted. 
In table \ref{tab:spher_tens}, we factor the first four multipole operators into products of molecule and light field operators and express the molecular operators in terms of spherical tensors $T_{\pm1}^{\lambda}$ of rank $\lambda=1,2$. The E1 and M1 terms consist of vector operators with $\lambda=1$. The E2 and M2 operators are rank-2 Cartesian operators, which can have spherical tensor operator contributions for $\lambda=0,1,2$. The rank $\lambda=0$ components of the E2 and M2 operators, and the $\lambda=1$ component of the E2 operator, vanish. The rank $\lambda=1$ component of the M2 operator does not vanish, but the light field angular dependence of this operator is equivalent to E1, so we may treat it as such. Using well-known properties of angular momentum matrix elements \cite{Brown2003}, we may write the transition matrix element in the following form, \begin{align} \mathcal{M}= & iA_{0}\omega_{eg}c_{\rm E1}\frac{1}{\sqrt{2}}\left[\left(-1\right)^{J+1}\Psw\right]^{(1-\Nsw\Esw)/2}\left(\hat{\epsilon}_{-1}^{*}d_{+}+\Psw\left(-1\right)^{J'}\hat{\epsilon}_{+1}^{*}d_{-}\right)\cdot\vec{\varepsilon}_{\mathrm{eff}}, \end{align} where $\vec{\varepsilon}_{\mathrm{eff}}$ is the `effective E1 polarisation' (i.e. 
an E1 transition with this effective polarisation reproduces the amplitude, including the effects of interference between the multipole transition matrix elements) with the form \begin{align} \vec{\varepsilon}_{\mathrm{eff}}=& \hat{\epsilon}-a_{\rm M1}i\hat{n}\times(\hat{k}\times\hat{\epsilon})+a_{\rm E2}(\Psw)i(\hat{k}(\hat{\epsilon}\cdot\hat{n})+\hat{\epsilon}(\hat{k}\cdot\hat{n}))+\dots\label{eq:effective E1 polarization} \end{align} where $\hat{n}=\Nsw\Esw\hat{z}$ is the orientation of the internuclear axis in the laboratory frame, $a_{\rm E2}(\Psw)=c_{\rm E2}(\Psw)/(\sqrt{2}c_{\rm E1})$ and $a_{\rm M1}=c_{\rm M1}/c_{\rm E1}$ are real dimensionless ratios describing the strengths of the E2 and M1 matrix elements relative to E1, and the $c$ coefficients are matrix elements, \begin{align} c_{\rm E1}= & \left\langle C,J,0,1\left|\mathrm{E1}\right|H,J',1,1\right\rangle \\ c_{\rm M1}= & \left\langle C,J,0,1\left|\mathrm{M1}\right|H,J',1,1\right\rangle \\ c_{\rm E2}(\Psw)= & \left\langle C,J,0,1\left|\mathrm{E2}\right|H,J',1,1\right\rangle +\nonumber \\ & \Psw\left(-1\right)^{J}\left\langle C,J,0,1\left|\mathrm{E2}\right|H,J',1,-1\right\rangle , \end{align} which are defined using the state notation $\left|A,J,M,\Omega\right\rangle $ for electronic state $A$, and `E1, M1, E2' refer to the corresponding molecular operators in table \ref{tab:spher_tens}. 
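A minimal numerical sketch of equation \ref{eq:effective E1 polarization}; the ratios $a_{\rm M1}$ and $a_{\rm E2}$ below are illustrative placeholders, not measured values:

```python
import numpy as np

def eps_eff(eps, k, n, a_M1, a_E2):
    """Effective E1 polarisation including M1 and E2 interference terms.
    a_M1, a_E2 are the real dimensionless multipole amplitude ratios."""
    return (eps
            - a_M1 * 1j * np.cross(n, np.cross(k, eps))
            + a_E2 * 1j * (k * np.dot(eps, n) + eps * np.dot(k, n)))

x, y, z = np.eye(3)
a_M1, a_E2 = 0.1, 0.01   # placeholder magnitudes
# Ideal geometry: k along z (parallel to the internuclear axis n),
# linear polarisation along x
v = eps_eff(x.astype(complex), z, z, a_M1, a_E2)
# Both corrections reduce to a 90-degree-phased component along x:
# eps_eff = (1 + i(a_M1 + a_E2)) x
assert np.allclose(v, (1 + 1j * (a_M1 + a_E2)) * x)
```

In this ideal geometry the M1 and E2 terms only rescale and rephase the E1 amplitude; pointing or polarisation imperfections are needed before they produce an effective rotation or ellipticity, consistent with the $\vartheta_{k}^{2}$ scaling found below.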
It is useful to define the Rabi frequency $\Omega_{\rm r}=|\mathcal{M}|$ as the magnitude of the amplitude connecting to the bright state, and the unit vector $\hat{\varepsilon}_{\mathrm{eff}}$ corresponding to the projection of $\vec{\varepsilon}_{\mathrm{eff}}$ onto the $xy$ plane, \begin{align} \hat{\varepsilon}_{\mathrm{eff}}= & \frac{\vec{\varepsilon}_{\mathrm{eff}}-(\vec{\varepsilon}_{\mathrm{eff}}\cdot\hat{z})\hat{z}}{\sqrt{|\vec{\varepsilon}_{\mathrm{eff}}|^{2}-|\vec{\varepsilon}_{\mathrm{eff}}\cdot\hat{z}|^{2}}}.\label{eq:unit vector} \end{align} This completely determines the bright and dark states, which have been previously defined in equations \ref{eq:bright_state} and \ref{eq:dark_state} for solely E1 transition matrix elements. The odd parity E1 and even parity M1 and E2 contributions to the effective polarisation differ by a factor of $\Nsw\Esw$, which is correlated with the expected eEDM signal. Expanding the effective E1 polarisation in terms of switch parity components, $\hat{\varepsilon}_{\mathrm{eff}}=\hat{\varepsilon}_{\mathrm{eff}}^{\rm{nr}}+\Nsw\Esw d\vec{\varepsilon}_{\mathrm{eff}}^{\N\E}$, and evaluating the effective $\Nsw\Esw$ correlated polarisation imperfections using equation \ref{eq:extracting_polarization_imperfection_components}, we find that the bright and dark states have effective polarisation correlations given by: \begin{align} \frac{\hat{z}\cdot(\hat{\varepsilon}_{\mathrm{eff}}^{\rm{nr}}\times d\vec{\varepsilon}_{\mathrm{eff}}^{\N\E})}{\hat{\varepsilon}_{\mathrm{eff}}^{\rm{nr}}\cdot\hat{\varepsilon}_{\mathrm{eff}}^{\rm{nr}}}\approx & \, d\theta_{\mathrm{eff}}^{\N\E}-id\Theta_{\mathrm{eff}}^{\N\E}\\ \approx & -i(a_{M1}-a_{E2}(\Psw))(\hat{\epsilon}\cdot\hat{z})((\hat{k}\times\hat{\epsilon})\cdot\hat{z}). 
\label{eq:state_correlations} \end{align} It is useful to use a particular parameterization of the laser pointing $\hat{k}$ and polarisation $\hat{\epsilon}$ to expand the expression in equation \ref{eq:state_correlations} in terms of pointing and polarisation imperfections. The state preparation laser $\hat{k}$-vector is aligned along (or against) the $\hat{z}$ direction in the laboratory, so it is convenient to parameterize the pointing deviation from normal by the polar angle $\vartheta_{k}$, and the direction of this pointing imperfection by the azimuthal angle $\varphi_{k}$ in the $xy$ plane, such that: \begin{align} \hat{k}= & \cos\varphi_{k}\sin\vartheta_{k}\hat{x}+\sin\varphi_{k}\sin\vartheta_{k}\hat{y}+\cos\vartheta_{k}\hat{z}. \label{eq:pointing_imperfection} \end{align} We may use a parameterization for the polarisation $\hat{\epsilon}$ that is similar to that in equation \ref{eq:polarization_parametrization}, but a slight modification is required to ensure that $\hat{k}\cdot\hat{\epsilon}=0$: \begin{align} \hat{\epsilon}= & N_{\epsilon}\left(-e^{-i\theta}\cos\Theta\hat{\epsilon}_{+1}+e^{i\theta}\sin\Theta\hat{\epsilon}_{-1}+\epsilon_{z}\hat{z}\right)\\ \epsilon_{z}= & -\frac{1}{\sqrt{2}}\tan\vartheta_{k}\left(e^{-i\left(\theta-\varphi_{k}\right)}\cos\Theta+e^{i\left(\theta-\varphi_{k}\right)}\sin\Theta\right) \end{align} where $N_{\epsilon}$ is a normalization constant that ensures that $\hat{\epsilon}^{*}\cdot\hat{\epsilon}=1$. 
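The transversality constraint built into this parameterization can be verified directly; a short numerical sketch with arbitrary illustrative angle values:

```python
import numpy as np

def khat(vth, vph):
    """Pointing unit vector (eq. pointing_imperfection)."""
    return np.array([np.cos(vph) * np.sin(vth),
                     np.sin(vph) * np.sin(vth),
                     np.cos(vth)])

def ehat(theta, Theta, vth, vph):
    """Polarisation with the transversality correction epsilon_z."""
    ep = -np.array([1, 1j, 0]) / np.sqrt(2)   # spherical basis e_{+1}
    em = np.array([1, -1j, 0]) / np.sqrt(2)   # spherical basis e_{-1}
    ez = -(np.tan(vth) / np.sqrt(2)) * (np.exp(-1j * (theta - vph)) * np.cos(Theta)
                                        + np.exp(1j * (theta - vph)) * np.sin(Theta))
    e = (-np.exp(-1j * theta) * np.cos(Theta) * ep
         + np.exp(1j * theta) * np.sin(Theta) * em
         + ez * np.array([0, 0, 1]))
    return e / np.sqrt(np.vdot(e, e).real)    # apply N_epsilon

# Arbitrary illustrative angles (radians)
k = khat(0.01, 0.3)
e = ehat(0.7, np.pi / 4 + 0.02, 0.01, 0.3)
assert abs(np.dot(k, e)) < 1e-12              # k . eps = 0 (transversality)
assert np.isclose(np.vdot(e, e).real, 1.0)    # unit norm
```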
With these parameterizations in place, and expanding about small ellipticities $d\Theta$ such that $\Theta=\pi/4+d\Theta$, and small laser pointing deviation, $\vartheta_{k}\ll1$, we find that the $\Nsw\Esw$-correlated effective laser polarisation imperfections are given by: \begin{align} d\theta_{\mathrm{eff}}^{\N\E}\approx & -\frac{1}{2}(a_{M1}-a_{E2}(\Psw))\vartheta_{k}^{2\:}S^{\:}\cos(2(\theta-\varphi_{k}))\\ d\Theta_{\mathrm{eff}}^{\N\E}\approx & -\frac{1}{2}(a_{M1}-a_{E2}(\Psw))\vartheta_{k}^{2\:}\sin(2(\theta-\varphi_{k})) \end{align} where $S_{i}=-2d\Theta_{i}$ describe the laser ellipticities. Hence, following equation \ref{eq:Measured_Phase_with_Polarization_Imperfections}, there is a systematic error in $\omega^{\mathcal{NE}}$: \begin{align} \omega_{\mathrm{S.I.}}^{\N\E}=&\frac{1}{\tau}\frac{1}{4}\left(a_{M1}-a_{E2}\left(\Psw\right)\right)\times\\&\left[\vartheta_{k,{\rm prep}}^{2}\left(-2S_{{\rm prep}}c_{{\rm prep}}+\Psw s_{{\rm prep}}\left(S_{X}-S_{Y}\right)\right)+\right.\\&\left.\vartheta_{k,X}^{2}\left(S_{X}c_{X}+\Psw S_{{\rm prep}}s_{X}\right)+\vartheta_{k,Y}^{2}\left(S_{Y}c_{Y}-\Psw S_{{\rm prep}}s_{Y}\right)\right] \end{align} where $c_{i}\equiv\cos\left(2(\theta_{i}-\varphi_{i,k})\right)$ and $s_{i}\equiv\sin\left(2(\theta_{i}-\varphi_{i,k})\right)$ describe the dependence of the systematic error on the difference between the linear polarisation angle $\theta_{i}$ and the pointing angle $\varphi_{i,k}$ in the $xy$ plane. There is another contribution to this systematic error that arises when the coupling to the off-resonant opposite parity excited state $|C,-\Psw\rangle$ is also taken into account. This additional contribution becomes significant when the ellipticities are comparable to or smaller than $\gamma_{C}/\Delta_{\Omega,C,J=1}\approx0.5\%$. 
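The leading-order expressions for $d\theta_{\mathrm{eff}}^{\N\E}$ and $d\Theta_{\mathrm{eff}}^{\N\E}$ above can be checked against the exact correlation in equation \ref{eq:state_correlations} by direct numerical evaluation; here $a$ stands in for $(a_{M1}-a_{E2}(\Psw))$ with a placeholder value:

```python
import numpy as np

# Numeric check of the leading-order NE-correlated effective polarisation
# imperfections against the exact correlation formula.
a = 0.1                          # placeholder for a_M1 - a_E2(P)
vth, vph = 1e-2, 0.4             # small pointing imperfection (rad)
theta, dTheta = 0.7, 1e-2        # linear polarisation angle, ellipticity
Theta = np.pi / 4 + dTheta
S = -2 * dTheta                  # ellipticity parameter, S = -2 dTheta

# k-hat and eps-hat from the parameterizations above
k = np.array([np.cos(vph) * np.sin(vth), np.sin(vph) * np.sin(vth), np.cos(vth)])
ep = -np.array([1, 1j, 0]) / np.sqrt(2)
em = np.array([1, -1j, 0]) / np.sqrt(2)
ez = -(np.tan(vth) / np.sqrt(2)) * (np.exp(-1j * (theta - vph)) * np.cos(Theta)
                                    + np.exp(1j * (theta - vph)) * np.sin(Theta))
e = (-np.exp(-1j * theta) * np.cos(Theta) * ep
     + np.exp(1j * theta) * np.sin(Theta) * em
     + ez * np.array([0, 0, 1]))
e /= np.sqrt(np.vdot(e, e).real)

# Exact correlation (eq. state_correlations): R = d(theta)_eff - i d(Theta)_eff
R = -1j * a * e[2] * np.cross(k, e)[2]

# Leading-order expansions quoted in the text
dtheta_eff = -0.5 * a * vth**2 * S * np.cos(2 * (theta - vph))
dTheta_eff = -0.5 * a * vth**2 * np.sin(2 * (theta - vph))
assert np.isclose(R.real, dtheta_eff, rtol=1e-3, atol=0)
assert np.isclose(-R.imag, dTheta_eff, rtol=1e-3, atol=0)
```

The residual disagreement is of relative order $\vartheta_{k}^{2}$ and $d\Theta^{2}$, as expected for a leading-order expansion.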
The eEDM channel, $\omega^{\mathcal{NE}}$, was defined to be even under the superblock switches (including $\Psw$), hence those terms proportional to $\Psw$ in the equation above do not contribute to our reported result. Additionally, the $\Gsw$ and $\Rsw$ switches rotate the polarisation angles for each laser by roughly $\theta_{i}\rightarrow\theta_{i}+\pi/2$ periodically and the resulting $\omega^{\mathcal{NE}}$ signal is averaged over these states. Provided that the pointing drift is much slower than the timescale of these switches, and to the extent that the laser polarisations constituting the $\Rsw$ and $\Gsw$ states are orthogonal, then these systematic errors should dominantly contribute to the $\omega^{\mathcal{NEG}}$ and $\omega^{\mathcal{NER}}$ channels which were found to be consistent with zero (see Figure~\ref{fig:pixel_plot}). An indirect limit on the size of the systematic error due to Stark interference, $\omega^{\mathcal{NE}}_{\mathrm{S.I.}}$, may be estimated by assuming a reasonable suppression factor by which the effects in $\omega^{\N\E\R}$ and $\omega^{\N\E\G}$ may `leak' into $\omega^{\mathcal{NE}}$. We monitored the pointing drift on a beam profiler and observed pointing drifts up to $d\vartheta_k\sim 50~\upmu\rm{rad}$ throughout a full set of superblock states. The absolute pointing misalignment angle was not well known but was estimated to be larger than $\vartheta_k\gtrsim0.5~\rm{mrad}$. Hence we may estimate a conservative suppression factor $d\vartheta_k/\vartheta_k\lesssim1/10$ by which pointing drift may contaminate $\omega^{\mathcal{NE}}$ from $\omega^{\N\E\R}$ and $\omega^{\N\E\G}$. 
The two $\Rsw$ states are very nearly orthogonal, but the $\Gsw$ states deviate sufficiently from orthogonality (see section~\ref{sssec:suppression_of_the_AC_stark_shift_phases}) that the leakage from $\omega^{\N\E\G}\rightarrow\omega^{\N\E}$ will dominate the systematic error; we estimate a suppression factor of about $c_p^{\rm{nr}}/c_p^{\mathcal{G}}\sim s_p^{\rm{nr}}/s_p^{\mathcal{G}}\sim1/5$. Based on the upper limits on the measured values for $\omega^{\N\E\R}$ and $\omega^{\N\E\G}$, combined with leakage from $\omega^{\N\E\R}$ and $\omega^{\N\E\G}$ into $\omega^{\N\E}$ due to pointing drift, and leakage from $\omega^{\N\E\G}$ into $\omega^{\N\E}$ due to non-orthogonality of the two $\Gsw$ states, we estimate the possible size of the systematic error to be $\omega_{\mathrm{S.I.}}^{\N\E}\lesssim 1\,\mathrm{mrad}/\mathrm{s}$. Note that the mechanism for this systematic error was not discovered until after the publication of our result \cite{Baron2014} and hence was not included in the systematic error analysis there. Furthermore, since we did not observe this effect, it does not match any of the inclusion criteria outlined in section \ref{ssec:total_systematic_error_budget} and hence is not included in the systematic error budget in this paper. Because we did not understand the mechanism while running the apparatus, we could not place direct limits on the size of this systematic error. We estimate that the absolute pointing deviation from ideal was at most $5\,\mathrm{mrad}$ and the ellipticity of each laser was no more than $S_{i}\approx5\%$. The E1/M1 interference coefficient is $a_{M1}\approx0.1$ for the $H\rightarrow C$ transition. This gives an estimate of $\omega_{\mathrm{S.I.}}^{\N\E}\sim0.1\,\mathrm{mrad}/\mathrm{s}$ before suppression due to the $\Rsw$ and $\Gsw$ switches. Hence, we do not believe that this systematic error significantly shifted the result of our measurement. 
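The $\sim0.1\,\mathrm{mrad/s}$ figure quoted above follows from simple arithmetic on a single-laser term of the $\omega_{\mathrm{S.I.}}^{\N\E}$ expression; in this sketch the precession time $\tau$ is an assumed round number (of order $1\,\mathrm{ms}$), so the output is an order-of-magnitude figure only:

```python
# Order-of-magnitude estimate of the Stark-interference systematic,
# before R/G-switch suppression, from the bounds quoted in the text.
a_M1 = 0.1       # E1/M1 interference ratio for H -> C
vth_k = 5e-3     # bound on absolute pointing deviation (rad)
S = 0.05         # bound on laser ellipticity
tau = 1e-3       # ASSUMED precession time (s), order-of-magnitude placeholder

# Single-laser term, taking the trigonometric factors to be of order unity
omega_SI = (1 / tau) * 0.25 * a_M1 * vth_k**2 * 2 * S   # rad/s
print(f"omega_SI ~ {omega_SI * 1e3:.2f} mrad/s")        # ~0.06 mrad/s
```

This is consistent with the $\sim0.1\,\mathrm{mrad/s}$ scale quoted in the text, given the order-unity trigonometric factors that were dropped.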
\subsubsection{AC Stark shift phases} \label{sssec:AC_stark_shift_phases} In this section we describe contributions to the measured phase $\Phi$ that depend on the AC Stark shifts induced by the state preparation and readout lasers. We describe mechanisms by which such phase contributions may arise, and by which $\Nsw\Esw$-correlated experimental imperfections may couple to these phases to produce eEDM-mimicking phases. Concise descriptions of some of the effects described here can be found in \cite{Hess2014,SpaunThesis,HutzlerThesis}. During our search for systematic errors as described in section \ref{ssec:determining_systematic_uncertainty}, we empirically found a contribution to the measured phase, $d\Phi(\Delta,\Omega_{\rm r})$, with an unexpected linear dependence on the laser detuning $\Delta$; a quadratic dependence on $\Delta$ in the presence of a nonzero magnetic field; and, also in the presence of a nonzero magnetic field, a linear dependence on small fractional changes in the Rabi frequency, $d\Omega_{\rm r}/\Omega_{\rm r}$, \begin{align} d\Phi\left(\Delta,\Omega_{\rm r}\right)=&\sum_{i}\left[\alpha_{\Delta,i}\Delta_{i}+\alpha_{\Delta^{2},i}\Delta_{i}^{2}+\beta_{d\Omega_{{\rm r}, i}}(d\Omega_{{\rm r}, i}/\Omega_{{\rm r}, i})+\dots\right], \label{eq:Empirical_AC_Stark_Shift_Phase_Result} \end{align} where $i\in\left\{ {\rm prep},X,Y \right\}$ indexes the state preparation and readout lasers. The coefficients we measured were $\alpha_{\Delta}\sim1\,\mathrm{mrad}/(2\pi\times\mathrm{MHz})$, $\alpha_{\Delta^{2}}\sim1\,\mathrm{mrad}/\left(2\pi\times\mathrm{MHz}\right)^{2}$ and $\beta_{d\Omega_{\rm r}}\sim10^{-3}$. We performed these measurements by using AOMs to independently vary the laser detunings $\Delta_i$ across resonance or to modulate the laser power, with the set-up depicted in figure \ref{fig:HC_transitions_setup}, and extracting the measured phase $\Phi$. 
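A sketch of how coefficients such as $\alpha_{\Delta}$ and $\alpha_{\Delta^{2}}$ can be extracted from phase-versus-detuning data; the data here are synthetic, generated from the empirical model with illustrative values, not our measured values:

```python
import numpy as np

# Generate synthetic phase-vs-detuning data from the empirical model
# and recover the coefficients with a polynomial least-squares fit.
rng = np.random.default_rng(0)
alpha1, alpha2 = 1.0e-3, 1.0e-3          # rad/(2pi MHz), rad/(2pi MHz)^2
detuning = np.linspace(-3, 3, 61)        # in units of 2pi x MHz
noise = rng.normal(0, 1e-5, detuning.size)
phase = alpha1 * detuning + alpha2 * detuning**2 + noise

# Quadratic fit; np.polyfit returns highest order first
a2, a1, a0 = np.polyfit(detuning, phase, 2)
assert np.isclose(a1, alpha1, rtol=0.05)
assert np.isclose(a2, alpha2, rtol=0.05)
```

In the experiment the analogous fit was performed on measured phases while each laser parameter was varied in turn (and, as noted below, higher-order polynomials were sometimes needed over the full detuning ranges).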
Examples of such measurements are given in figure~\ref{fig:phase_vs_detuning}. We determined that this behaviour can be caused by mixing between bright and dark states, due to a small non-adiabatic laser polarisation rotation or Zeeman interaction present during the optical pumping used to prepare and read out the spin state. The mixed bright and dark states differ in energy by the AC Stark shift, which leads to a relative phase accumulation between the bright and dark state components that depends on the laser parameters $\Delta$ and $\Omega_{\rm r}$. We shall now derive the AC Stark shift phase that results in equation \ref{eq:Empirical_AC_Stark_Shift_Phase_Result}, under simplifying assumptions amenable to analytic calculations. Consider a three level system consisting of the bright $|B(\hat{\varepsilon},\Nsw,\Psw)\rangle$ and dark $|D(\hat{\varepsilon},\Nsw,\Psw)\rangle$ states and the lossy excited state $|C,\Psw\rangle$ with decay rate $\gamma_{C}$. For simplicity, assume that there is no applied magnetic field for the time being. In this system, the instantaneous eigenvectors (depicted in figure \ref{fig:bases}C) are \begin{align} |B_{\pm}\rangle\equiv & \pm\kappa_{\pm}|C,\Psw\rangle+\kappa_{\mp}|B(\hat{\varepsilon},\Nsw,\Psw)\rangle,^{\:\:}|D\rangle\equiv|D(\hat{\varepsilon},\Nsw,\Psw)\rangle, \label{eq:inst_eigv} \end{align} and the instantaneous eigenvalues are \begin{align} E_{B\pm}= & \frac{1}{2}\left(\Delta\pm\sqrt{\Delta^{2}+\Omega_{\rm r}^{2}}\right),^{\:}E_{D}=0, \label{eq:inst_eig} \end{align} such that the mixing amplitudes $\kappa_{\pm}$ are given by \begin{align} \kappa_{\pm}= &\frac{1}{\sqrt{2}} \sqrt{1\pm\frac{\Delta}{\sqrt{\Delta^{2}+\Omega_{\rm r}^{2}}}}. 
\end{align} The effect of the decay of the excited state (which occurs almost entirely to states outside of the three-level system) may be taken into account by adding an anti-Hermitian operator term in the Schr\"odinger equation, $|\dot{\psi}\rangle=-i(H-i\frac{1}{2}\Gamma)|\psi\rangle$, where $\Gamma=\gamma_{C}|C,\Psw\rangle\langle C,\Psw|$ is the decay operator. This formulation is equivalent to the Lindblad master equation, \begin{align} \dot{\rho}= & -i\left[H,\rho\right]-\frac{1}{2}\left\{ \Gamma,\rho\right\} , \end{align} that governs the time evolution of the density matrix $\rho=|\psi\rangle\langle\psi|$. In practice, we implement this decay term by calculating the time evolution of the system according to $H$, and then making the substitution $\Delta\rightarrow\Delta-i\gamma_C/2$ before calculating squares of amplitudes. It is useful to work in the dressed state basis, $|D\rangle$, $|B_\pm\rangle$, (basis C in figure \ref{fig:bases}) because these are nearly stationary states and have simple time evolution in the case that laser polarisation and Rabi frequency are stationary. If we allow the laser polarisation to vary in time, then the dressed state basis varies in time, and the system evolves according to the Hamiltonian, \begin{align} \tilde{H}= & UHU^{\dagger}-iU\dot{U}^{\dagger}, \end{align} where $U$ is the transformation from time independent basis A to time dependent basis C (from figure \ref{fig:bases}), $UHU^{\dagger}$ is diagonal, and $-iU\dot{U}^{\dagger}$ is a fictitious force term that arises because we are working in a non-inertial frame when the laser polarisation is time dependent \cite{Budker2008}. 
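The equivalence between the anti-Hermitian term and the master equation (for decay entirely out of the system, so no recycling term is needed) can be demonstrated numerically; a two-level (bright/excited) sketch in arbitrary units:

```python
import numpy as np

# Evolving |psi> under H - (i/2)Gamma reproduces the Lindblad-type
# evolution of rho when all decay leaves the system.
Omega_r, Delta, gamma = 1.0, 0.3, 0.5
H = np.array([[0, Omega_r / 2], [Omega_r / 2, Delta]], dtype=complex)
Gamma = np.diag([0, gamma]).astype(complex)
Heff = H - 0.5j * Gamma

t = 2.0
psi0 = np.array([1, 0], dtype=complex)
# exp(-i Heff t) via eigendecomposition (Heff is not Hermitian)
w, V = np.linalg.eig(Heff)
psi_t = V @ (np.exp(-1j * w * t) * np.linalg.solve(V, psi0))
rho_nonherm = np.outer(psi_t, psi_t.conj())

# Direct RK4 integration of d(rho)/dt = -i[H,rho] - (1/2){Gamma,rho}
def drho(r):
    return -1j * (H @ r - r @ H) - 0.5 * (Gamma @ r + r @ Gamma)

rho = np.outer(psi0, psi0.conj())
steps = 2000
dt = t / steps
for _ in range(steps):
    k1 = drho(rho); k2 = drho(rho + 0.5 * dt * k1)
    k3 = drho(rho + 0.5 * dt * k2); k4 = drho(rho + dt * k3)
    rho = rho + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

assert np.max(np.abs(rho - rho_nonherm)) < 1e-7
assert np.trace(rho).real < 1.0   # population leaves the two-level system
```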
Assuming that the polarisation is nearly linear, $\Theta\approx\pi/4$, but allowing the polarisation to rotate slightly, and allowing for a nonzero two-photon detuning due to the Zeeman shift $\delta=-g_1\mu_{\rm B}\B_z\Bsw$, the Hamiltonian in the dressed state picture is: \begin{align} \tilde{H}= & \left(\begin{array}{ccc} 0 & -i\dot{\chi}^{*}\kappa_{+} & -i\dot{\chi}^{*}\kappa_{-}\\ i\dot{\chi}\kappa_{+} & E_{B-} & -\frac{i}{2}\frac{\dot{\Omega}_{\rm r}\Delta-\Omega_{\rm r}\dot{\Delta}}{\Delta^{2}+\Omega_{\rm r}^{2}}\\ i\dot{\chi}\kappa_{-} & \frac{i}{2}\frac{\dot{\Omega}_{\rm r}\Delta-\Omega_{\rm r}\dot{\Delta}}{\Delta^{2}+\Omega_{\rm r}^{2}} & E_{B+} \end{array}\right)\begin{array}{c} \ket{D} \\ \ket{B_{-}} \\ \ket{B_{+}} \end{array} \end{align} where $\dot{\chi}=\dot{\Theta}-i(\dot{\theta}+\delta)$ can be considered to be a complex polarisation rotation rate, $\dot{\Omega}_{\rm r}$ is the rate of change of the Rabi frequency, and $\dot{\Delta}$ is the rate of change of the detuning. Note that this Hamiltonian implies that the effect of a two-photon detuning arising from the Zeeman shift is equivalent to that of a linear polarisation rotating at a constant rate. \begin{figure}[htbp] \centering \includegraphics[width=15cm]{various_bases_used.pdf} \caption{Energy level diagrams depicting the Hamiltonian when the three-level $H\rightarrow C$ transition is addressed by the state preparation or readout lasers in three different bases. Solid double-sided blue arrows denote strong laser couplings between $H$ and $C$. Wiggly red arrows denote spontaneous emission from $C$ to states outside of the three-level system. Dashed orange lines denote weak couplings induced by laser polarisation rotation. Basis A is useful for describing the spin precession phase induced by the Zeeman and eEDM Hamiltonians. Basis B is useful for describing the states that are prepared and read out in the spin precession measurement. 
Basis C is useful for evaluating the AC Stark Shift phases induced by laser polarisation rotations.} \label{fig:bases} \end{figure} We may then apply first order time-dependent perturbation theory in this picture to determine the extent of bright/dark state mixing due to $\dot{\chi}$ in the time evolution of the system. If we parameterize the time-dependent state as \begin{align} \left|\psi\left(t\right)\right\rangle = & c_{D}\left(t\right)\left|D\right\rangle +c_{B+}\left(t\right)\left|B_{+}\right\rangle +c_{B-}\left(t\right)\left|B_{-}\right\rangle, \end{align} then in the case of a uniform laser field $\dot{\Omega}_{\rm r}=0$, of duration $t$ and with a constant detuning $\dot{\Delta}=0$, the time evolution of the coefficients is given at first order by: \begin{align} c_{D}\left(t\right)= & c_{D}\left(0\right)-\sum_{\pm}\int_{0}^{t}\dot{\chi}^{*}\left(t'\right)\kappa_{\mp}\left(t'\right)e^{-iE_{B\pm}t'}c_{B\pm}\left(0\right)\enspace dt'\label{eq:first order perturbation theory dark state}\\ c_{B\pm}\left(t\right)= & e^{-iE_{B\pm}t}c_{B\pm}\left(0\right)+e^{-iE_{B\pm}t}\int_{0}^{t}\dot{\chi}\left(t'\right)\kappa_{\mp}\left(t'\right)e^{iE_{B\pm}t'}c_{D}\left(0\right)\enspace dt'.\label{eq:first order perturbation theory bright state} \end{align} In the state preparation region, the molecules begin in an incoherent mixture of the states $|B(\hat{\varepsilon}_{{\rm prep}},\Nsw,\Psw)\rangle$ and $|D(\hat{\varepsilon}_{{\rm prep}},\Nsw,\Psw)\rangle$ and then enter the state preparation laser beam. In the ideal case of uniform laser polarisation, molecules that were in the bright state are optically pumped out of the three level system, and molecules that are in the dark state remain there; this results in a pure state, $|D(\hat{\varepsilon}_{{\rm prep}},\Nsw,\Psw)\rangle$. 
However, if there is a small polarisation rotation by amount $d\chi\equiv\int_{0}^{t}\dot{\chi}\left(t'\right)dt'\equiv d\Theta-i(d\theta-g_1\mu_{\rm B}\B_z\Bsw t)$, such that $\left|d\chi\right|\ll1$, then the dark state obtains a bright state admixture that may not be completely optically pumped away before leaving the laser beam.\footnote{This is most liable to occur just before a molecule leaves the laser beam, such that complete optical pumping does not occur.} In this case, the final state can be written as \begin{align} |D(\hat{\varepsilon}'_{{\rm prep}},\Nsw,\Psw)\rangle= & |D(\hat{\varepsilon}_{{\rm prep}},\Nsw,\Psw)\rangle+d\chi\Pi|B(\hat{\varepsilon}_{{\rm prep}},\Nsw,\Psw)\rangle \end{align} where $\hat{\varepsilon}'_{{\rm prep}}$ is the effective polarisation that parameterizes the initial state in the spin precession region \begin{align} \hat{\varepsilon}'_{{\rm prep}}= & \hat{\varepsilon}_{{\rm prep}}+d\chi\Pi i\hat{z}\times\hat{\varepsilon}_{{\rm prep}}^{*}, \end{align} and $\Pi$ is an amplitude that accounts for the AC Stark shift phase and the time dependent dynamics of the non-adiabatic mixing due to the polarisation rotation, \begin{equation} \Pi=\sum_{\pm}(\kappa_{\mp})^{2}e^{-iE_{B\pm}t}\int_{0}^{t}dt'^{\:}\frac{\dot{\chi}\left(t'\right)}{d\chi}e^{iE_{B\pm}t'}.\label{eq:Pi_def} \end{equation} The deviations between the effective polarisation and the actual laser polarisation can be viewed as effective polarisation imperfections, \begin{align} d\theta_{{\rm prep},\mathrm{eff}}= & -d\Theta_{{\rm prep}}\text{Im}\Pi+(d\theta_{{\rm prep}}-g_1\mu_{\rm B}\B_z\Bsw t)\text{Re}\Pi,\\ d\Theta_{{\rm prep},^{\:}\mathrm{eff}}= & -d\Theta_{{\rm prep}}\text{Re}\Pi-(d\theta_{{\rm prep}}-g_1\mu_{\rm B}\B_z\Bsw t)\text{Im}\Pi, \end{align} that lead to shifts in the measured phase $\Phi$ as described in equation \ref{eq:Measured_Phase_with_Polarization_Imperfections}. 
For definiteness, consider the case in which the polarisation rotation rate $\dot{\chi}(t')=d\chi/t$ is constant for the duration of the optical pumping pulse $t$. In this case, \begin{align} \Pi= & \sum_{\pm}(\kappa_{\mp})^{2}e^{-iE_{B\pm}t/2}\mathrm{sinc}(E_{B\pm}t/2). \end{align} This function has the property that $\text{Im}\Pi$ is an odd function of $\Delta$ that can take on values up to order unity across resonance (a frequency range on the order of $\gamma_{C}$) and is exactly zero on resonance. $\text{Re}\Pi$ is an even function, quadratic in $\Delta$ about resonance, and, on resonance, it depends on the Rabi frequency. If the laser beam intensity reduces quickly as the molecule leaves it, then most of the AC Stark shift phase arises from the last Rabi flopping period before the molecule exits the laser beam (provided $\dot{\chi}$ is nonzero during that time). If the intensity reduces slowly, the AC Stark shift phase can be exacerbated since the bright state amplitude is not as effectively optically pumped away while $\Omega_{\rm r}<\gamma_C$. Nevertheless, beam-shaping tests shown in figure \ref{fig:phase_vs_detuning} and numerical simulations indicate that $\Pi$ is not very sensitive to the shape of the spatial intensity profile of the laser beam or the shape of the spatial variation of the polarisation. 
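The stated symmetry properties follow from $E_{B\pm}(-\Delta)=-E_{B\mp}(\Delta)$ and $\kappa_{\pm}(-\Delta)=\kappa_{\mp}(\Delta)$, which together imply $\Pi(-\Delta)=\Pi^{*}(\Delta)$; a numerical check in units where $\Omega_{\rm r}=1$:

```python
import numpy as np

def Pi(Delta, Omega, t):
    """Pi for a constant polarisation rotation rate (equation above).
    Note np.sinc(x) = sin(pi x)/(pi x), hence the /np.pi rescaling."""
    root = np.sqrt(Delta**2 + Omega**2)
    out = 0
    for sign in (+1, -1):
        E = 0.5 * (Delta + sign * root)            # E_{B +/-}
        kappa_mp_sq = 0.5 * (1 - sign * Delta / root)  # (kappa_{-/+})^2
        out += kappa_mp_sq * np.exp(-1j * E * t / 2) * np.sinc(E * t / 2 / np.pi)
    return out

Omega, t = 1.0, 20.0
# Im(Pi) vanishes exactly on resonance ...
assert abs(Pi(0.0, Omega, t).imag) < 1e-12
# ... and is odd in Delta, while Re(Pi) is even: Pi(-D) = conj(Pi(D))
for D in (0.1, 0.5, 2.0):
    assert np.isclose(Pi(-D, Omega, t), np.conj(Pi(D, Omega, t)))
```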
If we consider only the first order contribution to the shift in the measured phase in equation \ref{eq:Measured_Phase_with_Polarization_Imperfections}, $d\theta_{{\rm prep},\mathrm{eff}}$, and neglect the second order shift that arises due to $d\Theta_{{\rm prep},\mathrm{eff}}$, then we can relate the parameters in equation \ref{eq:Empirical_AC_Stark_Shift_Phase_Result} to the amplitude $\Pi$ accounting for the AC Stark shift phase and the complex polarisation rotation $d\chi$, by \begin{align} \alpha_{\Delta,{\rm prep}}\approx&-\frac{\partial\text{Im}\Pi}{\partial\Delta_{{\rm prep}}}d\Theta_{{\rm prep}}\\ \alpha_{\Delta^{2},{\rm prep}}\approx&\frac{\partial^{2}\text{Re}\Pi}{\partial\Delta_{{\rm prep}}^{2}}\left(d\theta_{{\rm prep}}-g_1\mu_{\rm B}\B_{z}\Bsw t\right)\\ \beta_{d\Omega_{\rm r},{\rm prep}}\approx&\Omega_{\rm r}\frac{\partial\text{Re}\Pi}{\partial\Omega_{\rm r}}\left(d\theta_{{\rm prep}}-g_1\mu_{\rm B}\B_{z}\Bsw t\right).\label{eq:bdOmega} \end{align} We can interpret these results as follows. The linear dependence of the measured phase on detuning, $\alpha_{\Delta,{\rm prep}}$, comes from a spatially varying ellipticity in the $x$ direction coupling to the AC Stark shift phase. Similarly, the quadratic dependence of $\Phi$ on $\Delta$, $\alpha_{\Delta^{2},{\rm prep}}$, and the dependence of $\Phi$ on a relative change in $\Omega_{\rm r}$, $\beta_{d\Omega_{\rm r}, {\rm prep}}$, come from either a spatially varying linear polarisation in the $x$ direction or a Zeeman shift, each coupling to the AC Stark shift phase. Here, we only analyzed the phase shift that results from AC Stark shift effects in the state preparation laser beam, but there is an analogous phase shift in the state readout beam. There are several other subdominant effects that also contribute to the AC Stark shift phase behavior described in equation \ref{eq:Measured_Phase_with_Polarization_Imperfections} in the presence of polarisation imperfections. 
The opposite parity excited state $|C,-\Psw\rangle$ couples strongly to the dark state, but the mixing between these two states is weak because the transition frequency is off-resonant by a detuning $\Delta_{\Omega,C,J=1}\approx2\pi\times51~\mathrm{MHz}\gg\gamma_{C}$. In the case that an optical pumping laser has nonzero ellipticity, the bright state gains a weak coupling to the opposite-parity excited state proportional to this ellipticity. Then, two-photon bright-dark state mixing ensues in such a way that the mixing amplitude, and hence the measured phase, depends on the laser detuning. The rapid polarisation switching of the state readout beam can also introduce AC Stark shift-induced phases in the absence of a polarisation gradient, if the average ellipticity between the two polarisations is nonzero. Suppose a particular molecule is first excited by the $\hat{\epsilon}_{X}$ polarised beam. The two bright eigenstates $\ket{B_{\pm}}$ are mostly optically pumped away, resulting in a fluorescence signal $F_{X}$. The population remaining in the bright eigenstates acquires a phase relative to the dark state, due to the AC Stark shift. Then the molecules are optically pumped by the $\hat{\epsilon}_{Y}$ polarised beam. If there is a nonzero average ellipticity, $\hat{\epsilon}_{Y}$ is not quite orthogonal to $\hat{\epsilon}_{X}$ and the new bright eigenstates that give rise to the fluorescence signal $F_Y$ are superpositions of the former bright and dark states that acquired a relative AC Stark shift phase. This results in a fluorescence signal, and hence measured phase component, that depends linearly on laser detuning $\Delta$. 
\subsubsection{Polarisation Gradients from Thermal Stress-Induced Birefringence} \label{sssec:polarization_gradients_from_thermal_stress_induced_birefringence} \hspace*{\fill} \\ The AC Stark shift phases described in the previous section can be induced by polarisation gradients in $\hat{x}$ across the state preparation and readout laser beams. In this section we describe a known mechanism by which these arose. Recall that these laser beams passed through transparent, ITO-coated electric field plates. For an absorbance $\alpha$ and laser intensity $I$, the rate of heat deposition into the plates is $\dot{Q}\left(x,y\right)=\alpha\, I\left(x,y\right)$. The laser beam profile is stretched in the $y$ direction to ensure that all molecules are addressed. For simplicity we assume that the heating distribution, $\dot{Q}\left(x,y\right)=\dot{Q}\left(x\right)$, is completely uniform in the $y$ direction. We also assume that there are no shear stresses, i.e.\ local expansion of the glass is isotropic. Under these assumptions, the relationship between the heating rate, $\dot{Q}$, and the internal stress tensor $\sigma_{ij}$ (where $i,j$ are Cartesian indices) is \begin{equation} \frac{\partial^{2}\sigma_{yy}}{\partial x^{2}}= \frac{E\alpha_{V}}{\kappa}\dot{Q}\left(x\right), \end{equation} where $E$, $\alpha_{V}$ and $\kappa$ are the Young's modulus, coefficient of thermal expansion, and thermal conductivity, respectively \cite{Barber2010}. Unit vectors $\hat{x}$ and $\hat{y}$ correspond to the principal axes of the stress tensor due to the symmetry of the heating function, hence the off-diagonal (shear) elements are zero, $\sigma_{xy}=0$. The other diagonal component, $\sigma_{xx}$, is uniform across the plates, and equal to $\sigma_{yy}$ far away from the laser. The stress-optical law states that the birefringence and stress are linearly proportional along the principal axes of the stress tensor \cite{Dally1991}. 
The difference between the indices of refraction in the $x$ and $y$ directions is then $\Delta n=K\left(\sigma_{xx}-\sigma_{yy}\right)$, where $K\approx4\times10^{-6}\,\mbox{MPa}^{-1}$ is the stress-optical coefficient for Borofloat glass \cite{Schott2013b}. The retardance of the incident laser beam with index $i$ is $\Gamma_i=2\pi\Delta n\left(t/\lambda\right)$, where $t$ is the thickness of the field plates (in the $z$ direction), and $\lambda$ is the wavelength of light. Hence, in this limit, the retardance due to thermal stress-induced birefringence is related to the laser intensity by: \begin{equation} \frac{\partial^{2}\Gamma}{\partial x^{2}}=\eta\frac{t}{\lambda}I\left(x\right), \label{eq:retarddiff} \end{equation} where $\eta=2\pi KE\alpha_{V}\alpha/\kappa\approx26\times10^{-6}$~W$^{-1}$ is a material constant of Borofloat glass \cite{Schott2013b}. The ellipticity imprinted on the nominally linearly polarised laser beam is given by \begin{equation} S_i=\Gamma_i(x)\sin\left(2(\theta_i-\phi_{\Gamma,i})\right), \label{eq:s3} \end{equation} where $\theta_i$ is the linear polarisation angle and $\phi_{\Gamma,i}$ is the orientation of the fast axis of the birefringent material (nominally $\hat{x}$ in our case). Assuming the laser has total power $P$, a Gaussian profile in $x$ with standard deviation $w_{x}$, and a top-hat profile in $y$ with half width $w_{y}$, the intensity is given by \begin{equation} I\left(x\right)= \frac{P}{\sqrt{8\pi}w_{x}w_{y}}e^{-\frac{x^{2}}{2w_{x}^{2}}} \end{equation} where $2w_y \gg w_x$. There is then an analytic solution to equation~\ref{eq:retarddiff} from which we extracted a retardance gradient in the laser tail, $x=w_{x}$, of \begin{equation} \frac{\partial\Gamma}{\partial x}\approx\frac{{\rm erf}(1/\sqrt{2})P\eta t}{4w_{y}\lambda}\approx0.03~\mathrm{rad}/{\rm mm} \label{eq:retardance_gradient} \end{equation} for a nominal laser power of ${\approx}2$~W. Similar results were obtained from numerical finite element analysis. 
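The analytic gradient estimate can be reproduced by integrating equation \ref{eq:retarddiff} once, from the beam centre (where $\partial\Gamma/\partial x=0$ by symmetry) out to $x=w_{x}$. In the sketch below, $\eta$ and $P$ follow the text, while the geometry ($t$, $\lambda$, $w_{x}$, $w_{y}$) uses assumed placeholder values, so the printed number is an order-of-magnitude figure only:

```python
import numpy as np
from math import erf, sqrt

eta = 26e-6          # W^-1, Borofloat material constant (from the text)
P = 2.0              # W, nominal laser power (from the text)
t_plate = 1.25e-2    # m, ASSUMED field-plate thickness (placeholder)
lam = 1.090e-6       # m, ASSUMED laser wavelength (placeholder)
w_x, w_y = 1.0e-3, 1.0e-2   # m, ASSUMED beam widths (placeholders)

# Gaussian-in-x / top-hat-in-y intensity profile, integrated 0 -> w_x
x = np.linspace(0.0, w_x, 20001)
I = P / (np.sqrt(8 * np.pi) * w_x * w_y) * np.exp(-x**2 / (2 * w_x**2))
integral = np.sum(0.5 * (I[1:] + I[:-1]) * np.diff(x))   # trapezoid rule

# One integration of d^2(Gamma)/dx^2 = eta*(t/lambda)*I(x)
dGamma_dx = eta * (t_plate / lam) * integral
analytic = erf(1 / sqrt(2)) * P * eta * t_plate / (4 * w_y * lam)
assert abs(dGamma_dx - analytic) / analytic < 1e-5
print(f"dGamma/dx ~ {dGamma_dx / 1e3:.3f} rad/mm")
```

With these placeholder dimensions the gradient comes out at the $10^{-2}\,\mathrm{rad/mm}$ scale, the same order as the value quoted in equation \ref{eq:retardance_gradient}; note that the result is independent of $w_{x}$, as the analytic form shows.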
Thermal stress-induced birefringence has been observed in similar systems such as in UHV vacuum windows \cite{Solmeyer2011}, laser output windows \cite{Eisenbach1992}, and Nd:YAG rods \cite{Koechner1970}. \begin{figure} \begin{centering} \includegraphics[width=10cm]{Ellipticity_and_Thermo_elastic_model.pdf} \par\end{centering} \caption{Measurement of the ellipticity, $S$, as a function of position along $x$ within the state readout laser beam. A fit to the thermo-elastic model, which assumes a Gaussian laser profile and has the amplitude and offset in $S$ as free parameters, is overlaid.} \label{ite:birefringence} \end{figure} The estimates of the ellipticity gradient agree well with measurements of the polarisation of the beam, as shown in figure~\ref{ite:birefringence}. These polarimetry measurements were adapted from the procedure described in \cite{Berry1977}; a polarimeter was constructed consisting of a rotating quarter-wave plate, fixed polariser, and fast photodiode. The use of a fast photodetector allows for polarimetry of the probe beam during the 100~kHz polarisation switching. The resolution of the system was such that we could quickly measure the normalized circular Stokes parameter, $S$, to a few percent, which is sufficient to measure typical birefringence gradients of ${\sim}10\%$ across the beam. \subsubsection{Suppression of AC Stark Shift Phases} \label{sssec:suppression_of_the_AC_stark_shift_phases} \hspace*{\fill} \\ We were able to suppress the magnitude of the AC Stark shift phases in several different ways that are illustrated in figure~\ref{fig:phase_vs_detuning}. The ellipticity gradient across the state preparation laser beam was suppressed by tuning the linear polarisation angle: as per equation~\ref{eq:s3}, the ellipticity gradient is proportional to $\sin(2\theta_{{\rm prep}}-2\phi_{\Gamma,{\rm prep}})$, which vanishes when the polarisation is aligned along a birefringence axis, i.e. 
$\theta_{{\rm prep}}=\phi_{\Gamma,{\rm prep}},\phi_{\Gamma,{\rm prep}}+\pi/2$. To determine $\phi_{\Gamma,{\rm prep}}$ we measured the total accumulated phase as a function of laser detuning for various $\theta_{{\rm prep}}$ and then extracted the slope $\alpha_{\Delta,{\rm prep}}^{\rm{nr}}=\partial\Phi^{\rm{nr}}/\partial\Delta_{{\rm prep}}$ for small detuning values. Note that when fitting the phase vs.\ detuning data we found that cubic functions provided significantly better fits over the detuning ranges used (see Figure~\ref{fig:phase_vs_detuning}(B)). We then selected $\theta_{{\rm prep}}$ to minimize $\alpha_{\Delta,{\rm prep}}^{\rm{nr}}$. This suppressed $\alpha_{\Delta,{\rm prep}}^{\rm{nr}}$ by about a factor of 50 relative to its original value, to $\alpha_{\Delta,{\rm prep}}^{\rm{nr}}\lesssim0.1~\mathrm{mrad}/(2\pi\times\mathrm{MHz})$. Another method implemented to suppress AC Stark shift phases was to reduce the time-averaged power of the state preparation laser incident on the field plates. We used a chopper wheel to modulate the laser at 50~Hz, synchronous with the molecular beam pulses, with a 50\% duty cycle. We estimated the time scale for thermal changes to be on the order of $Q/\dot{Q}\sim2\rho Cw_{x}^{2}/\kappa\sim10\:\mathrm{s}$, where $\rho$ and $C$ are the density and heat capacity of Borofloat respectively, and so did not anticipate any significant transient effects to be introduced. This modification reduced the retardance gradient, and hence the value of $\alpha_{\Delta,{\rm prep}}^{\rm{nr}}$, by about a factor of two, as shown in Figure~\ref{fig:phase_vs_detuning}(C). Finally, $\alpha_{\Delta,{\rm prep}}^{\rm{nr}}$ was suppressed by shaping the laser beam intensity profile. AC Stark shift phases were most significant at the downstream edge of the state preparation laser beam. Here, the intensity is such that bright-dark state mixing is still occurring but the bright state is not efficiently optically pumped away. 
By making the spatial intensity profile drop off more rapidly, we reduced the time that molecules spent in this intermediate intensity regime. This was achieved by taking advantage of the aspherical distortion introduced by misaligning a telescope immediately before the laser beam entered the spin-precession region. This suppressed $\alpha_{\Delta,{\rm prep}}$ and $\beta_{\Delta^{2},{\rm prep}}$ by a factor of ${\approx}2$, as shown in Figure~\ref{fig:phase_vs_detuning}(C). In addition to a phase suppression, we noticed that the optimal laser polarisation angle changed after implementing the steps described, as can be seen in Figure~\ref{fig:phase_vs_detuning}(C). The reason for this change is not definitively known, but we suspect that as we suppressed the birefringent contribution to the AC Stark shift phase, the non-birefringent contributions (i.e. the phase due to nonzero ellipticity causing bright-dark state mixing via the off-resonant opposite parity excited state) became fractionally larger, and we needed to tune the polarisation angle to obtain cancellation between these two classes of effects. \begin{figure}[htbp] \centering \includegraphics[width=15cm]{phase_vs_detuning.pdf} \caption{(A) Measured molecule phase as a function of preparation laser detuning. The slope agrees with the originally observed $\Phi^{\N\E}$ dependence on $\Delta^{\N\E}$. (B) Phase dependence on detuning for multiple preparation laser polarisation angles. (C) $\partial \Phi/\partial \Delta^{\rm{nr}}$ shows a clear sinusoidal dependence on preparation laser polarisation. The magnitude of $\partial \Phi/\partial \Delta^{\rm{nr}}$ decreases for all polarisation angles when the Gaussian beam tails are clipped (blue) and when the average laser power is reduced with a chopper wheel (red).} \label{fig:phase_vs_detuning} \end{figure} We observed much smaller AC Stark shift phases in the state readout laser beam than in the state preparation laser beam. 
This is not surprising since the effect is largely birefringent; the contributions to the effective polarisation imperfections for the $\hat{\epsilon}_{X}$ and $\hat{\epsilon}_{Y}$ polarised lasers should be opposite in sign, $d\theta_{X}\propto\sin(2(\theta_{\rm{read}}-\phi_{\Gamma,\rm{read}}))$, $d\theta_{Y}\propto\sin(2(\theta_{\rm{read}}-\phi_{\Gamma,\rm{read}}+\pi/2))$, such that they cancel each other in the measured phase (cf.\ equation~\ref{eq:Measured_Phase_with_Polarization_Imperfections}). The residual AC Stark shift phases measured in the state readout beam gave $\alpha_{\Delta,\rm{read}}^{\rm{nr}}\approx0.5\:\mathrm{mrad}/(2\pi\times\mathrm{MHz})$. This was sufficiently small that the methods of suppression described above were only implemented in the state preparation region. \subsubsection{Systematic Errors due to Correlated Laser Parameters} \label{sssec:correlated_laser_parameters} In the discussion above, we described how polarisation imperfections can lead to contributions to the measured phase that depend on the AC Stark shifts and hence on the laser detunings $\Delta_{i}$ and Rabi frequencies $\Omega_{{\rm r}, i}$. However, these phases only produce a systematic error in $\omega^{\mathcal{NE}}$ if there is a nonzero correlation $\Delta_{i}^{\N\E}$ or $\Omega_{{\rm r}, i}^{\N\E}$ of the laser detuning or Rabi frequency. We observed such correlations and discuss them in this section. We will also describe how we evaluated the associated systematic errors. In section~\ref{sec:state_prep_read} (see figure~\ref{fig:Enr_wNE}) we discussed how a non-reversing component of the applied electric field, $\E^{\rm{nr}}$, could produce a $\Delta^{\N\E}$. 
In an entirely analogous manner, the Rabi frequency magnitude $\Omega_{\rm r}$ of the $H\rightarrow C$ transition can exhibit the following correlations: \begin{align} \Omega_{{\rm r}, i}=&\Omega_{{\rm r}, i}^{\rm{nr}}+\Nsw\Omega_{{\rm r}, i}^{\N}+\Nsw\Psw\Omega_{{\rm r}, i}^{\N\P}+\Nsw\Esw\Omega_{{\rm r}, i}^{\N\E}+\dots \end{align} Here, $\Omega_{{\rm r}, i}^{\rm{nr}}$ is the dominant component of the Rabi frequency for laser $i\in\left\{ {\rm prep},X,Y\right\} $, which could fluctuate in time on the order of 5\% due to laser power instability. $\Omega_{{\rm r}, i}^{\N}$ is generated by a laser power difference between the $\tilde{\N}$ states. This arose because we routed the laser light along different paths through a series of AOMs for each state. We measured this effect with photodiodes and found that the largest fractional power correlation was $\Omega_{\rm r}^{\N}/\Omega_{\rm r}^{\rm{nr}}\approx2.5\times10^{-3}$. An additional contribution to $\Omega_{{\rm r}, i}^{\N}$ and a contribution to $\Omega_{{\rm r}, i}^{\N\P}$ on the same order arises due to Stark mixing between rotational levels in $H$ and $C$, leading to $\Nsw$- and $\Nsw\Psw$-correlated transition amplitudes on the $H\rightarrow C$ transition. Although we did not observe a laser power correlation with $\Nsw\Esw$, we did observe signals consistent with a Rabi frequency correlation, $\Omega_{\rm r}^{\N\E}$. A nonzero $\Nsw\Esw$-correlated fluorescence signal (as defined in section \ref{sec:signal_asymmetry}) that also reversed with the laser propagation direction $\hat{k}\cdot\hat{z}$, $F^{\N\E}/F^{\rm{nr}}\approx-(2.4\times10^{-3})(\hat{k}\cdot\hat{z})$, together with a nonzero $\omega^{\N\E\B}\approx(2.5\:\mathrm{mrad}/\mbox{s})(\B_{z}/\mathrm{mG})(\hat{k}\cdot\hat{z})$, provided the first evidence that a nonzero $\Omega_{\rm r}^{\N\E}$ existed in our system. 
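The correlated components in expansions such as the one above were extracted by signed averaging of a measured quantity over the binary switch states. The following sketch illustrates the idea on synthetic data; the component magnitudes are arbitrary, chosen only to echo the scales quoted in this section.

```python
import math
from itertools import combinations, product

def correlated_components(value, switches=("N", "E")):
    """Decompose value(state) into switch-correlated components: the component
    correlated with a subset S of the switches is the average, over all 2^n
    switch states, of value(state) times the product of the signs in S."""
    states = [dict(zip(switches, signs))
              for signs in product((+1, -1), repeat=len(switches))]
    comps = {}
    for r in range(len(switches) + 1):
        for subset in combinations(switches, r):
            total = sum(value(st) * math.prod(st[s] for s in subset)
                        for st in states)
            comps[subset] = total / len(states)
    return comps

# Synthetic Rabi frequency with known correlated parts (illustrative numbers):
def omega_r(st):
    return 1.0 + 2.5e-3 * st["N"] - 8.0e-3 * st["N"] * st["E"]

c = correlated_components(omega_r)
print(c[()], c[("N",)], c[("N", "E")])  # recovers 1.0, 2.5e-3, -8.0e-3 (up to rounding)
```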
We believe that this fluorescence correlation arises from a linear dependence of the fluorescence signal size on Rabi frequency, $F^{\N\E}=(\partial F/\partial\Omega_{\rm r}^{\rm{nr}})\Omega_{\rm r}^{\N\E}$, which is nonzero since the state readout transitions were not fully saturated. We believe that the signal in $\omega^{\N\E\B}$ was caused by a coupling between the Rabi-frequency correlation and the $\B$-odd AC Stark shift phase, $\omega^{\N\E\B}=\frac{1}{\tau}\beta_{d\Omega_{\rm r}}^{\B}\B_{z}(\Omega_{\rm r}^{\N\E}/\Omega_{\rm r}^{\rm{nr}})$. We were able to verify a linear dependence of both of these channels on $\Omega_{\rm r}^{\N\E}$ by intentionally correlating the laser intensity with $\Nsw\Esw$ using AOMs; this is shown for the $\Phi^{\N\E\B}$ channel in Figure~\ref{fig:Omega_NE}. Varying the size of this artificial $\Omega_{\rm r}^{\N\E}$ allowed us to measure the value present in the experiment under normal operating conditions, $\Omega_{\rm r}^{\N\E}/\Omega_{\rm r}^{\rm{nr}}=(-8.0\pm0.8)\times10^{-3}(\hat{k}\cdot\hat{z})$. $\Omega_{\rm r}^{\N\E}$ can couple to $\beta_{d\Omega_{\rm r},i}^{\rm{nr}}$ as per equations~\ref{eq:Empirical_AC_Stark_Shift_Phase_Result} and \ref{eq:bdOmega} to result in a systematic error in $\omega^{\mathcal{NE}}$. A nonzero $\beta_{d\Omega_{\rm r},i}^{\rm{nr}}$ can be produced by a linear polarisation angle gradient (not observed in the experiment) or by a non-reversing Zeeman shift component $g_1\mu_{\rm B}\B_{z}^{\rm{nr}}$. While searching for a model to explain the intrinsic $\Omega_{\rm r}^{\N\E}$, we developed the Stark interference model presented in section \ref{sssec:stark_interference_between_E1_and_M1_transition_amplitudes}. 
For unnormalized effective polarisation $\vec{\varepsilon}_{\mathrm{eff}}=\vec{\varepsilon}_{\mathrm{eff}}^{\rm{nr}}+\Nsw\Esw d\vec{\varepsilon}_{\mathrm{eff}}^{\N\E}$, this model predicts $\Omega_{\rm r}^{\N\E}/\Omega_{\rm r}^{\rm{nr}}\approx\text{Re}(\vec{\varepsilon}_{\mathrm{eff}}^{\rm{nr} *}\cdot d\vec{\varepsilon}_{\mathrm{eff}}^{\N\E})\approx-\text{Im}\left[(a_{\rm M1}+a_{\rm E2})\right](\hat{k}\cdot\hat{z})$, which correctly reproduces the observed dependence of $\Omega_{\rm r}^{\N\E}$ on the laser propagation direction $\hat{k}\cdot\hat{z}$. However, the factors $a_{\rm M1}$ and $a_{\rm E2}$, which correspond to the ratios of the M1 and E2 amplitudes to the E1 amplitude, must be real for a plane wave, so $\text{Im}\left[(a_{\rm M1}+a_{\rm E2})\right]=0$. Hence this model fails to explain this Rabi frequency correlation unless there is some additional effect that introduces a phase shift between the E1 and M1 amplitudes. For example, interference between the E1 amplitude due to the incident laser beam and a phase-shifted M1 amplitude due to a (low intensity) reflected beam can lead to a nonzero $\Omega_{\rm r}^{\N\E}$ in this model. However, this phase factor oscillates spatially on the scale of the light wavelength, which is very small compared to the size of the molecule cloud, and hence the effect should average out over the cloud. The origin of the intrinsic $\Omega_{\rm r}^{\N\E}$ is still not fully understood, and we are continuing to explore models to understand this effect. \begin{figure}[htbp] \centering \includegraphics[width=12cm]{P_NE.pdf} \caption{$\Phi^{\N\E\B}$ as a function of applied $\Nsw\Esw$-correlated laser power, $P^{\N\E}$, for both directions of laser pointing, $\hat{k}\cdot\hat{z}$. The artificial $\Omega_{\rm r}^{\N\E}$ resulting from correlated power $P^{\N\E}$ systematically shifts $\omega^{\N\E\B}$ in accordance with equation~\ref{eq:Empirical_AC_Stark_Shift_Phase_Result}. 
$\Phi^{\N\E\B}$ is zero when the applied $P^{\N\E}$ is such that there is no net $\Nsw\Esw$-correlated Rabi frequency. The intrinsic $\Omega_{\rm r}^{\N\E}$ (i.e. that inferred when $P^{\N\E}=0$) changed sign with $\hat{k}\cdot\hat{z}$ within the resolution of the measurement. The slopes between the two measurements differ due to differences in the AC Stark shift phase, believed to be due to differences in the spatial intensity profile and polarisation structure between the two measurements.} \label{fig:Omega_NE} \end{figure} Given the empirical AC Stark shift phase model in equation~\ref{eq:Empirical_AC_Stark_Shift_Phase_Result}, the resulting systematic errors in the frequency measurement are given by \begin{align} \omega^{\N\E}_{\E^{\rm{nr}}} & =\frac{1}{\tau}\sum_{i\in\left\{ {\rm prep},X,Y\right\} }\alpha_{\Delta,i}^{\rm{nr}}D_1\E^{\rm{nr}}(x_{i})\\ \omega^{\N\E}_{\Omega_{\rm r}^{\N\E}} & =\frac{1}{\tau}\sum_{i\in\left\{ {\rm prep},X,Y\right\} }\beta_{d\Omega_{\rm r},i}^{\rm{nr}}(\Omega_{\rm r}^{\N\E}/\Omega_{\rm r}^{\rm{nr}}). \end{align} Early in the experiment, we observed a nonzero systematic shift $\omega^{\N\E}_{\E^{\rm{nr}}}$ and took the steps outlined in section~\ref{sssec:suppression_of_the_AC_stark_shift_phases} to suppress it. To verify that the steps taken were effective, we examined $\omega^{\mathcal{NE}}$ as a function of an intentionally applied non-reversing electric field. The resulting data are shown in figure~\ref{fig:Enr_slope}. The original slope, $\partial\omega^{\N\E}/\partial\E^{\rm{nr}}=(6.7\pm0.4)(\mathrm{rad/s})/(\mbox{V/cm})$, corresponded to a systematic shift of $\omega^{\N\E}_{\E^{\rm{nr}}}\approx-34~\mathrm{mrad}/\mathrm{s}$ when combined with the measured $\E^{\rm{nr}}\approx-5\:\mathrm{mV/cm}$. 
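The quoted shift is simply the product of the measured slope and the residual field; as plain arithmetic (both values taken from the text):

```python
slope = 6.7     # d(omega^NE)/d(E^nr), (rad/s)/(V/cm), measured before suppression
E_nr = -5e-3    # residual non-reversing field, V/cm
shift_mrad = slope * E_nr * 1e3
print(round(shift_mrad, 1), "mrad/s")  # -> -33.5 mrad/s, i.e. ~ -34 mrad/s
```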
Following the modifications described above, the $\partial\omega^{\N\E}/\partial\E^{\rm{nr}}$ slope was greatly suppressed, reducing the systematic error to $\omega^{\N\E}_{\E^{\rm{nr}}}<1~\mathrm{mrad}/\mathrm{s}$, well below the statistical uncertainty in the measurement of $\omega^{\mathcal{NE}}$. \begin{figure}[!ht] \centering \includegraphics[width=12cm]{Enr_slope.pdf} \caption{Linear dependence of the $\omega^{\mathcal{NE}}$ channel on an applied non-reversing electric field observed before (red) and after (black) we suppressed the known AC Stark shift phase by optimizing the preparation laser beam shape, time-averaged power and polarisation.} \label{fig:Enr_slope} \end{figure} Because we observed that the parameters $\E^{\rm{nr}}$ and $\Omega_{\rm r}^{\N\E}$ caused systematic errors in $\omega^{\mathcal{NE}}$, we intermittently measured the size of the associated systematic errors throughout the datasets that were used for our reported result. We measured $\partial\omega^{\N\E}/\partial\E^{\rm{nr}}$ by applying a range of large non-reversing electric fields, up to around 70 times that present under normal running conditions. The value of $\partial\omega^{\N\E}/\partial\Omega_{\rm r}^{\N\E}$ was measured by applying a correlated laser power $P^{\N\E}$ in the state preparation and state readout beams with a magnitude corresponding to an applied $\Omega_{\rm r}^{\N\E}$ that was up to 20 times that measured under normal operating conditions. These parameters were measured for multiple values of the magnetic field magnitude, $\B_{z}$, for which different state readout laser beam polarisations were required. Because the AC Stark shift phases were known to depend on the laser polarisation through the stress-induced birefringence, we allowed for a possible polarisation dependence of all AC Stark shift phase systematic errors. 
We measured $\partial\omega^{\N\E}/\partial\E^{\rm{nr}}$ for both $\hat{k}\cdot\hat{z}=\pm1$, but the $\Omega_{\rm r}^{\N\E}$ systematic error was only discovered after the $\hat{k}\cdot\hat{z}=+1$ dataset and hence $\partial\omega^{\N\E}/\partial\Omega_{\rm r}^{\N\E}$ was only monitored during the $\hat{k}\cdot\hat{z}=-1$ dataset. The $\Omega_{\rm r}^{\N\E}$ systematic error during the $\hat{k}\cdot\hat{z}=+1$ dataset was determined from auxiliary measurements of the AC Stark shift phase. As described in section \ref{ssec:efields}, $\E^{\rm{nr}}(x)$ exhibits significant spatial variation along the beam-line axis, $x$. However, the $\E^{\rm{nr}}$ that was intentionally applied to determine $\partial\omega^{\N\E}/\partial\E^{\rm{nr}}$ was spatially uniform, and hence these measurements were insensitive to the difference $(\E^{\rm{nr}}(x_{{\rm prep}})-\E^{\rm{nr}}(x_{\rm{read}}))$ between the fields at the state preparation laser beam, $x_{{\rm prep}}$, and at the state readout beam, $x_{\rm{read}}$. For this reason, we deduced the systematic error proportional to this difference from auxiliary measurements of the AC Stark shift phase parameters, $\alpha_{\Delta,i}^{\rm{nr}}$. 
In summary, the systematic errors proportional to $\E^{\rm{nr}}$ and $\Omega_{\rm r}^{\N\E}$ that were evaluated and subtracted from $\omega^{\N\E}$ to report a measured value of $\wNEt$ can be expressed as \begin{align} \omega_{\E^{\rm{nr}}}^{\N\E}= & \left(\frac{\partial\omega^{\N\E}}{\partial\E^{\rm{nr}}}\right)\frac{1}{2}(\E^{\rm{nr}}(x_{{\rm prep}})+\E^{\rm{nr}}(x_{\rm{read}}))\nonumber\\ &+\frac{1}{\tau}(\alpha_{\Delta,{\rm prep}}^{\rm{nr}}-\alpha_{\Delta,X}^{\rm{nr}}-\alpha_{\Delta,Y}^{\rm{nr}})D_1\frac{1}{2}(\E^{\rm{nr}}(x_{{\rm prep}})-\E^{\rm{nr}}(x_{\rm{read}}))\label{eq:Enr_systematic_error_value}\\ \omega_{\Omega_{\rm r}^{\N\E}}^{\N\E}= & \begin{cases} \frac{1}{\tau}\sum_{i\in\left\{ {\rm prep},X,Y\right\} }\beta_{d\Omega_{\rm r},i}^{\rm{nr}}\left(\frac{\Omega_{\rm r}^{\N\E}}{\Omega_{\rm r}^{\rm{nr}}}\right) & (\hat{k}\cdot\hat{z})=+1\\ \left(\frac{\partial\omega^{\N\E}}{\partial\Omega_{\rm r}^{\N\E}}\right)\Omega_{\rm r}^{\N\E} & (\hat{k}\cdot\hat{z})=-1 \end{cases} \end{align} where $(\partial\omega^{\N\E}/\partial\E^{\rm{nr}})$ and $(\partial\omega^{\N\E}/\partial\Omega_{\rm r}^{\N\E})$ were monitored by \emph{Intentional Parameter Variations} (see section~\ref{sec:Measurement_scheme_more_detail}) throughout the dataset used for our reported result, and $\E^{\rm{nr}}(x_{{\rm prep}})$, $\E^{\rm{nr}}(x_{\rm{read}})$, $\Omega_{\rm r}^{\N\E}$, $\alpha_{\Delta,i}^{\rm{nr}}$, and $\beta_{d\Omega_{\rm r},i}^{\rm{nr}}$ were obtained from auxiliary measurements. These two systematic errors account for almost all of the systematic offset that was subtracted from $\omega^{\mathcal{NE}}$ to obtain $\wNEt$ as described in section~\ref{ssec:total_systematic_error_budget}. 
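The split in equation~\ref{eq:Enr_systematic_error_value} of the per-beam sum into a mean-field term (captured by the slope measured with a spatially uniform applied $\E^{\rm{nr}}$) and a differential term is an exact algebraic identity, which can be verified directly. In the sketch below all numerical inputs are placeholders, not measured values, and $D_1$ denotes the detuning-per-field coefficient appearing in the per-beam expression earlier in this section.

```python
# Placeholder inputs (assumed for illustration, not the experiment's values):
a = {"prep": 0.10, "X": 0.03, "Y": 0.02}  # alpha^nr_{Delta,i}, phase per unit detuning
D1 = 1.0                                  # detuning per unit E^nr (assumed units)
E = {"prep": -6.0, "read": -4.0}          # E^nr at the two beam positions (assumed)
tau = 1.0                                 # precession time, arbitrary units here

# Direct per-beam sum, (1/tau) * sum_i alpha_i * D1 * E^nr(x_i), with the two
# readout beams X and Y sharing the readout position:
direct = (a["prep"] * E["prep"] + (a["X"] + a["Y"]) * E["read"]) * D1 / tau

# Two-term form: slope a uniform applied E^nr would measure, plus a term
# proportional to the prep/readout field difference:
slope = (a["prep"] + a["X"] + a["Y"]) * D1 / tau
decomposed = (slope * 0.5 * (E["prep"] + E["read"])
              + (a["prep"] - a["X"] - a["Y"]) * D1 / tau
              * 0.5 * (E["prep"] - E["read"]))

print(abs(direct - decomposed) < 1e-12)  # -> True: the two forms agree identically
```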
\subsection{\texorpdfstring{$\A^{\N\E}$}{ANE} asymmetry effects} \label{ssec:asymmetry_effects} In addition to the dependence of the measured phase on laser detuning and Rabi frequency, we observed a dependence of the asymmetry $\A$ (as defined in section~\ref{sec:signal_asymmetry}) on the laser parameters $\Delta_{\rm{read}}$ and $\Omega_{{\rm r},\rm{read}}$, due to differences between the properties of the $X$ and $Y$ readout laser beams. The laser-induced fluorescence signal $F(\Delta,\Omega_{\rm r})$ varies quadratically with detuning (for small detuning) and linearly with Rabi frequency. Under normal conditions, the signal sizes from $X$ and $Y$ are comparable, $F_X \approx F_Y \approx F$. If the $X$ and $Y$ beams have different wavevectors, $\vec{k}_{X,Y} = \vec{k}^{\mathrm{nr}} \pm \vec{k}^{XY}$, and $\vec{k}^{XY}$ has some component along $\hat{x}$, then the two beams will acquire different Doppler shifts. This leads to a linear dependence of the asymmetry on detuning, which in turn can couple to $\Delta^{\N\E}$ to result in a contribution to $\A^{\N\E}$, \begin{equation} \A^{\N\E} \approx\frac{1}{F} \frac{\partial^2 F}{\partial \Delta_{\rm{read}}^2} (\vec{k}^{XY}\cdot\langle\vec{v}\rangle)\Delta^{\N\E}. \end{equation} Similarly, if the two readout beams differ in Rabi frequency, $\Omega_{{\rm r}, X/Y} \approx \Omega_{\rm r}^{\mathrm{nr}} \pm \Omega_{\rm r}^{XY}$, the asymmetry becomes linearly dependent on Rabi frequency, which in turn can couple to $\Omega_{\rm r}^{\N\E}$ to result in a contribution to $\A^{\N\E}$, \begin{equation} \A^{\N\E} \approx -\left(\frac{1}{F} \frac{\partial F}{\partial \Omega_{\rm r}}\right)^2 \Omega_{\rm r}^{XY} \Omega_{\rm r}^{\N\E}. \end{equation} However, these asymmetry effects are readily distinguishable from spin precession phases and polarisation misalignments. 
Since the $\Psw$ and $\Rsw$ switches effectively swap the role of the $X$ and $Y$ readout beams, the $\A^{\N\E}$ effects described above do not contribute to $\omega^{\mathcal{NE}}$ when summed over these switches. Additionally, asymmetry effects, once converted to an equivalent frequency or phase, depend on the sign of the contrast, $\C$, unlike true phases. In the $\B_z\approx 20$~mG configuration, ${\rm sgn}(\C)={\rm sgn}(\B_z)$, but ${\rm sgn}(\C)$ has no dependence on ${\rm sgn}(\B_z)$ for $\B_z\approx 1,~40$~mG. Hence asymmetry correlations $\A^{\N\E}$ are mapped onto frequency correlations $\omega^{\N\E\P\R}$ or $\omega^{\N\E\B\P\R}$ depending on the magnetic field magnitude. If the pointing or Rabi frequency differences between the $X$ and $Y$ beams drift on timescales comparable to or shorter than the $\Psw$ or $\Rsw$ switches, these effects can occasionally `leak' into the `adjacent' channels $\omega^{\N\E\P}$, $\omega^{\N\E\R}$, $\omega^{\N\E\B\P}$, $\omega^{\N\E\B\R}$; however, we saw no evidence of these effects contributing to the $\omega^{\N\E}$ channel itself, and hence did not include systematic error contributions due to these effects in our systematic error budget. \subsection{\texorpdfstring{$\Esw$}{E}-Correlated Phase} \label{ssec:E_correlated_phase} Previous eEDM measurements have often been limited by a variety of systematic errors, such as $\Esw$-correlated leakage currents, geometric phases, and motional magnetic fields, that would have produced an $\Esw$-correlated precession frequency, $\omega^{\E}$, in our experiment \cite{Khriplovich1997,Murthy1989,Regan2002,Eckel2013}. Our ability to spectroscopically reverse the molecular orientation through a choice of $\Nsw$ distinguished these effects from an eEDM-generated phase. 
In addition, the aforementioned effects scale with the magnitude of the applied electric field, which was orders of magnitude smaller in our experiment than in previous similar eEDM experiments due to the high polarisability of ThO \cite{Regan2002}. Also, because the molecular polarisation was saturated, the eEDM phase should have been independent of the magnitude of the applied field. We also note that any shifts from leakage currents and motional magnetic fields would have coupled through the magnetic dipole moment, which is nearly zero in the $H$ state of ThO. Thus we expected $\omega^{\E}$ to be substantially suppressed, and that it should not enter $\omega^{\N\E}$ at any significant level. The reversal of $\Nsw$ did not, however, entirely eliminate an eEDM-like phase due to $\omega^{\E}$. As discussed in section~\ref{sec:compute_phase}, there was a small, $\E$-field-dependent difference between the $g$-factors of the two $\Nsw$ levels \cite{Bickman2009,Petrov2014}, which meant that a systematic error in the $\omega^\E$ channel would show up in $\omega^{\N\E}$ at a level given by $\omega^{\N\E}_{\omega^\E}=(\eta\E/g_1)\omega^{\E}$. We verified this relation by intentionally correlating a 1.4~mG component of our applied magnetic field with $\Esw$. This deliberate $\B^{\E}$ resulted in a large shift in the value of $\omega^\E$ and a $\sim$1000-times smaller offset of $\omega^{\mathcal{NE}}$, as illustrated in figure~\ref{fig:phi_E}. \begin{figure}[htbp] \centering \includegraphics[width=12cm]{B_E_systematic.pdf} \caption{Illustration of the $\sim$1000-fold suppression of systematic errors associated with $\omega^{\E}$ provided by the $\Nsw$ switch. Large values of $\omega^{\E}$ occur when there is a component of $\B_z$ correlated with $\Esw$, $\B^\E$. In previous eEDM experiments, this would have corresponded to a systematic error. 
In our experiment, a much smaller shift in $\omega^{\mathcal{NE}}$ results from the small difference in magnetic moments between the two $\Nsw$ levels. Error bars for the $\omega^{\E}$ data are significantly smaller than the data points. Data were taken with $\E=142~\mathrm{V}/\mathrm{cm}$ and the measured ratio of the slopes, $(\partial\omega^{\mathcal{NE}}/\partial\B^\E)/(\partial\omega^{\E}/\partial\B^\E)=(2.8\pm0.8)\times 10^{-3}$, is consistent with the expected value $\eta \E/g_1=(2.5\pm0.1)\times 10^{-3}$.} \label{fig:phi_E} \end{figure} The intentionally applied $\B^\E$ was the only experimental parameter that was observed to produce a measurable shift in $\omega^{\E}$. Even large ($\sim$20~mG) magnetic field components along $\hat{x}$ and $\hat{y}$, which exaggerate the effect of motional magnetic fields, did not shift $\omega^{\E}$ (this is expected, since the large tensor Stark shift in $\ket{H,J=1}$ dramatically suppresses the effect of motional magnetic fields \cite{Player1970}). For our eEDM data set, $\omega^{\E}$ was consistent with zero. We included a contribution from $\omega^{\E}$ in our error budget for $\omega^{\mathcal{NE}}$ by multiplying the mean and uncertainty of the extracted $\omega^{\E}$ by our measured $|\E|$-dependent suppression factors $\eta\E/g_1$. \subsection{\texorpdfstring{$\Nsw$}{N}-Correlated Laser Pointing} \label{ssec:N_correlated_pointing} We discovered a nonzero, time-dependent signal in $\omega^{\N}$ which was associated with an $\Nsw$-correlated laser pointing, $\hat{k}^{\N}\approx5~\upmu$rad. An investigation into the mechanism behind this effect was inconclusive. We found that the pointing correlation appeared downstream of the AOMs that created the rapid polarisation switching, and that improved alignment reduced the effect. 
We also found that the observed pointing was in some way correlated with the seed power and input angle of incidence into the high-power fiber amplifier immediately upstream of the polarisation switching, despite the fact that the pointing out of the amplifier did not fluctuate. Since we used four different sets of AOMs to perform the $\Nsw$ and $\Psw$ switches before the amplifier, we observed laser pointing correlated with both of these switches. By matching the characteristics of these four beam paths we were able to suppress $\hat{k}^{\N}$ to $<1~\upmu$rad. The effect of $\hat{k}^{\N}$ on $\omega^{\N}$ was studied by exaggerating the former with piezoelectrically actuated mirrors. The extracted value of $\partial\omega^{\N}/\partial\hat{k}^{\N}$ fluctuated significantly, and we were unable to identify the mechanism by which $\hat{k}^\N$ affected $\omega^{\N}$. We had no evidence that the effect causing the observed variation in $\omega^{\N}$ also caused a systematic error in $\omega^{\mathcal{NE}}$, but to be cautious we included an associated systematic uncertainty in our systematic error budget (section \ref{ssec:total_systematic_error_budget}). Assuming a linear relationship between $\omega^{\N\E}$ and $\omega^{\N}$, we extracted $\partial\omega^{\N\E}/\partial\omega^{\N}$ from a combination of data taken under normal conditions and with an exaggerated $\omega^{\N}$ induced by an exaggerated $\hat{k}^{\N}$. We then placed an upper limit on a possible systematic error $\omega^{\mathcal{NE}}_{\omega^{\N}}$ based on the value of $\omega^{\N}$ obtained under normal running conditions. The resulting systematic uncertainty was four times smaller than our statistical uncertainty. 
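The bookkeeping behind such an upper limit can be sketched as follows. The slope, its uncertainty, and the normal-conditions $\omega^{\N}$ below are placeholders rather than the measured values, and taking $|{\rm shift}|$ plus its $1\sigma$ uncertainty as the bound is an assumed convention for this sketch.

```python
import math

# Placeholder inputs (assumed, not the experiment's measured values):
slope = 0.02      # d(omega^NE)/d(omega^N), from exaggerated-pointing data
slope_err = 0.05  # 1-sigma uncertainty on the slope
w_N = 1.0         # omega^N under normal running conditions, mrad/s
w_N_err = 0.5     # its 1-sigma uncertainty, mrad/s

# Propagate uncertainty for the product omega^NE_{omega^N} = slope * omega^N:
shift = slope * w_N
shift_err = math.hypot(slope_err * w_N, slope * w_N_err)
limit = abs(shift) + shift_err  # conservative bound entered in an error budget
print(round(limit, 3), "mrad/s")
```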
\subsection{Laser Imperfections} \label{ssec:laser_imperfections} Of the lasers used in our experiment, only the state preparation and readout lasers were known to produce possible systematic errors; imperfections in the rotational cooling, optical pumping or target ablation lasers simply resulted in a reduction in usable molecule flux. As part of our search for systematic errors, we intentionally exaggerated all known state preparation and readout laser imperfections possible without dismantling the apparatus (cf.\ table~\ref{tbl:syst_check}). In this section we describe this procedure and the resulting contributions to our systematic error budget. \subsubsection{Laser Detuning} \label{sssec:laser_detuning} \hspace*{\fill} \\ The correlated components of the state preparation and readout laser beam detunings are described in detail in section~\ref{sec:state_prep_read}. Each detuning component was separately exaggerated and in some cases multiple components were simultaneously exaggerated. Most of the detuning terms in equation~\ref{eq:detuningcorrelations} were exaggerated to $\pm2\pi\times1$--2~MHz. No detuning or detuning correlation produced a significant shift in $\omega^{\mathcal{NE}}$ other than $\Delta^{\N\E}$ caused by $\E^{\rm{nr}}$, discussed in section~\ref{sssec:correlated_laser_parameters}. In some cases, shifts in other phase channels were induced, but all shifts were consistent with the well-understood AC Stark shift and asymmetry models described in sections \ref{sssec:AC_stark_shift_phases} and \ref{ssec:asymmetry_effects}. For example, the combination of nonzero $\Delta^{\N}$ and $\Delta^{\rm{nr}}$ coupled to the $\B$-dependent component of the AC Stark shift phase (equation~\ref{eq:bdOmega}) induced a significant shift in $\omega^{\N\B}$ (cf.\ equation~\ref{eq:Empirical_AC_Stark_Shift_Phase_Result}). 
Asymmetry correlations also resulted from these detuning correlations, but these were only manifested in channels odd with respect to $\Psw$ and $\Rsw$, and hence had no plausible effect on $\omega^{\mathcal{NE}}$. Because the YbF eEDM experiment \cite{Kara2012} observed unexplained dependence of the measured eEDM value on state preparation microwave detuning, we included a systematic error contribution from all detuning imperfections in our systematic error budget. \subsubsection{Laser Pointing and Intensity} \label{sssec:laser_pointing_and_intensity} \hspace*{\fill} \\ Similar to detuning imperfections, the state preparation and readout lasers could have imperfect pointing and correlated intensities. Ideally the laser propagation direction, $\hat{k}$, would have been parallel to the laboratory electric field. This would have diminished the amount of $\hat{z}$ polarised light experienced by the molecules, which could drive unwanted off-resonant transitions, and prevented stray retroreflection from the ITO field plate surfaces. Using this ITO retroreflection as a guide, we aligned $\hat{k}$ perpendicular to the field plate surface, and therefore parallel to $\hat{\E}$, to within $\sim{3}$~mrad. To test for errors related to imperfect pointing, both the state preparation and readout pointing misalignments were exaggerated in the $x$-direction to $\pm$10~mrad, as was the relative pointing of the $X$ and $Y$ state readout beams. The vacuum windows and $\sim$3.8~cm wide holes in the magnetic shields prevented us from further misaligning the beams. To decouple pointing imperfections from detuning imperfections, the state preparation and readout laser frequencies were tuned to resonance after each pointing adjustment. No shift in $\omega^{\mathcal{NE}}$ was observed and no systematic error contribution from pointing imperfections was included. 
Pointing imperfections were only observed to affect the signal asymmetry, as previously discussed in section \ref{ssec:asymmetry_effects}. Unlike laser pointing and detuning, there was no `ideal' value for laser intensity. The state preparation and readout laser intensities were chosen such that we were driving optical pumping to completion on the $H\rightarrow C$ transition without producing unnecessary thermal stress on the field plates. We decreased each laser intensity by a factor of four to check that there was no variation in $\omega^{\mathcal{NE}}$. We observed a nonzero $\Omega_{\rm r}^{\N}$ caused by the $\Nsw$-correlated seed power into the high-power fiber amplifiers and by Stark mixing between rotational levels in $H$ and $C$ as discussed in section \ref{sssec:correlated_laser_parameters}. We exaggerated this imperfection by a factor of 20. Only $\omega^{\N\B}$ was shifted, consistent with our understanding of the $\B$-correlated AC Stark shift phase. These intensity systematic error checks were not included in the systematic error budget. \subsection{Magnetic Field Imperfections} \label{ssec:magnetic_field_imperfections} The $H$ state is very insensitive to a magnetic field $\B_z$ due to its small $g$-factor, as discussed in section~\ref{sec:tho_molecule}. Sensitivity to the transverse fields is even further suppressed by the large size of the tensor Stark shift relative to the Zeeman interaction. Nevertheless, there are known mechanisms by which magnetic field imperfections can contribute to systematic errors: $\B_z^{\rm{nr}}$ can contribute to the $\omega^{\mathcal{NE}}_{\Omega_{\rm r}^{\N\E}}$ systematic error discussed in section~\ref{sssec:correlated_laser_parameters}, and transverse fields $\B_x^{\rm{nr}}$ and $\B_y^{\rm{nr}}$ can lead to the geometric phase systematic errors \cite{Vutha2010} discussed in section~\ref{ssec:E_correlated_phase}. 
We designed the experiment to allow a wide variety of magnetic field tilts and gradients to be applied as described in section~\ref{sec:bfields} and we directly looked for systematic errors resulting from these magnetic field imperfections. Both $\B$-correlated and uncorrelated imperfections were applied. We did not precisely measure the residual values of each of these parameters along the molecule beam line until we had studied all systematic errors and collected our published data set. Based on the projected ${\sim}10^5$ magnetic shielding factor, we expected all stray magnetic fields and gradients to be on the order of 10~$\upmu$G and 1~$\upmu$G/cm, respectively. For this reason we only exaggerated these imperfections to $\sim$2~mG and $\sim$0.5~mG/cm. When we mapped out the magnetic field with a magnetometer inserted between the electric field plates as described in section \ref{sec:bfields}, we discovered that several imperfections were much larger than we expected (e.g.\ $\B_y \approx 0.5$~mG). This was caused by poor magnetic shielding due to insufficient shield degaussing. For this reason we gathered additional eEDM data with some magnetic field parameters exaggerated by an additional factor of five. Apart from $\omega^{\rm{nr}}$ and $\omega^{\B}$, no frequency channel, including $\omega^{\mathcal{NE}}$, was observed to be affected by any of these magnetic field parameters. Because uncorrelated stray magnetic fields and magnetic field gradients caused unexpected eEDM offsets in the PbO eEDM experiment \cite{Eckel2013}, we included contributions from all uncorrelated magnetic field imperfections in our systematic error budget described in section \ref{ssec:total_systematic_error_budget}. \subsection{Electric Field Imperfections} \label{ssec:electric_field_imperfections} Unlike the magnetic field, we do not have the ability to control electric field gradients and stray electric fields, aside from the average value of $\E^{\rm{nr}}$. 
The field plates were located at the centre of the experiment, inside the vacuum chamber and magnetic shields and coils, with no direct access available. To search for systematic errors related to the electric field, equal amounts of eEDM data were gathered with two different electric field magnitudes. The $\omega^{\mathcal{NE}}$ values from both field magnitudes were consistent with each other. The YbF eEDM experiment observed unexplained eEDM dependence on the voltage offset common to both field plates. For this reason we exaggerated this offset by a factor of 1000 (relative to its residual value of ${\approx}5$~mV) and, even though it did not shift our eEDM measurement, included it in our systematic error budget. \subsubsection{Molecule Beam} \label{ssec:molecular_beam} \hspace*{\fill} \\ The molecule beam should ideally have travelled parallel to the electric field plates and well-centred between the plates. This minimises Doppler shifts, protects the plates from being coated with ThO, and ensures that the molecules experience the most uniform electric field. The entire beam source vacuum chamber sat on a two-axis ($yz$) translation stage. The exit aperture of the buffer gas cell was aligned to within 1~mm of the centre of the fixed collimators and electric field plates, using a theodolite. Geometric constraints only allowed us to exaggerate the cell misalignment by roughly a factor of three (up to 3~mm) before the molecules would have hit the sides of the field plates. We also varied the transverse spatial and velocity distributions by using adjustable collimators between the beam source and spin-precession region to block half of the beam from the $\pm\hat{x},\pm\hat{z}$ directions. The value of $\omega^{\mathcal{NE}}$ was not observed to shift with any molecule beam parameter adjustment. 
\subsection{Searching for Correlations in the eEDM Data Set} \label{ssec:correlations_in_the_eEDM_data_set} In addition to performing systematic error checks for possible variations of $\omega^{\N\E}$ with various experimental parameters, we searched for statistically nonzero values within the set of 1536 possible correlations with the block and superblock switches. This analysis was performed for our primary measured quantities $\omega$, $\C$, and $\F$ and for a wide range of auxiliary measurements such as laser powers, magnetic field, room temperature, etc. We also examined the switch-parity channels of $\omega$, $\C$, and $\F$ as a function of time within the molecule beam pulse, and as a function of time within the polarisation switching cycle. We used the Pearson correlation coefficient to look for correlations between the aforementioned switch-parity channels and used the autocorrelation function to look for signs of time variation of the mean within those channels. Figure~\ref{fig:pixel_plot} illustrates data from such a search with a subset of the previously described quantities. In this search, we looked at 4390 quantities and we set the significance threshold at $4\sigma$, which corresponds to a probability of $p\approx0.25$ that there will be one or more false positives above that threshold. We represented the significance of each of these quantities with a grayscale pixel. Each pixel that was significant at the $4\sigma$ level is marked with a symbol corresponding to a known explanatory physical model, or a red dot if the signal is not yet explained. The fact that we understand most of the significant signals present in our experiment, combined with the fact that the statistical distribution of the remaining signals below the significance threshold is consistent with a normal distribution, gives us added confidence in our models of the experiment and our reported eEDM result. 
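The quoted false-positive probability follows from the Gaussian tail beyond $4\sigma$. The following numerical sketch is our own illustration, assuming the 4390 quantities are independent and normally distributed (neither assumption is exact in practice):

```python
import math

# Two-sided probability that a single Gaussian quantity exceeds 4 sigma
p_single = math.erfc(4 / math.sqrt(2))  # ~6.3e-5

# Probability of at least one false positive among the 4390 examined
# quantities, treating them as independent
n = 4390
p_any = 1 - (1 - p_single) ** n  # ~0.24, consistent with the quoted p ~ 0.25
```

The expected number of false positives, $n\,p_{\rm single}\approx0.28$, gives the same conclusion via $1-e^{-0.28}\approx0.25$.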
\begin{figure}[htbp] \centering \includegraphics[width=\textwidth]{pixel_plot_new_jpeg.pdf} \caption{Over 4,000 switch-parity channels (left) and correlations between switch-parity channels (upper right) computed from the eEDM data set. The deviation of each quantity from zero in units of the statistical uncertainty is indicated by the grayscale shading. We set a significance threshold of $4\sigma$ above which there is a probability of $p=25\%$ of finding at least one false positive. We mark each significant channel/correlation with a symbol corresponding to a model known to produce a signal in that channel. The quantities below this threshold exhibit a normal distribution, shown in the lower right. } \label{fig:pixel_plot} \end{figure} Channels/correlations marked with symbols are significantly nonzero due to known mechanisms as follows: \begin{itemize} \item Green stars: Correlations due to the nonzero and drifting signal in the $\omega^{\N}$ channel described in section~\ref{ssec:N_correlated_pointing}. \item Light blue squares: Signals in $\omega^{\N\E\B}$ channels due to the $\Bsw$-odd AC Stark shift phase coupling to $\Omega_{\rm r}^{\N\E}$ as described in section~\ref{sssec:correlated_laser_parameters}. \item Orange triangles: Correlations due to contrast or asymmetry coupling to $\Omega_{\rm r}^{\N\E}$. Contrast correlations arise simply because there is a linear dependence of total contrast on Rabi frequency, and the asymmetry correlation is described in section~\ref{ssec:asymmetry_effects}. \item Brown diamond: Correlations in $\C^{\N}$ and related contrast channels due to nonzero Rabi frequency correlations $\Omega_{\rm r}^{\N}$ and $\Omega_{\rm r}^{\N\P}$. 
These arise due to laser power correlations with the $\Nsw$ and $\Psw$ switches and due to Stark mixing between rotational levels in $H$ and $C$, which create $\Nsw$- and $\Psw$-correlated transition amplitudes on the $H\rightarrow C$ transition as described in section~\ref{sssec:laser_pointing_and_intensity}. \item Red dot: Signals above our significance threshold for which we have been unable to find a plausible explanation. Even if these quantities arise from real physical effects, they would need to couple to other correlated quantities to contribute to $\omega^{\N\E}$ and there is no evidence for this in the eEDM dataset. \end{itemize} \subsection{Systematic Error Budget} \label{ssec:total_systematic_error_budget} The method used for construction of a systematic uncertainty varies from experiment to experiment (see for example \cite{sinervo2003,barlow2002}), and it is ultimately a subjective quantity. Even if individual contributions are derived from objective measurements, their inclusion or exclusion in the systematic uncertainty is subjective. Furthermore, the systematic uncertainty cannot possibly be a measure of the uncertainty in all systematic errors in the experiment, but rather only those which were identified and searched for. Although we worked hard to identify all significant systematic errors in the measurement, we cannot rule out the possibility that some were missed. Our criteria for including a given quantity in the systematic uncertainty consist of three classes of systematic errors in order of decreasing importance of inclusion: \begin{enumerate}[(A)] \item If we measured a nonzero correlation between $\omega^{\mathcal{NE}}$ and some parameter which had an ideal value in the experiment, we performed auxiliary measurements to evaluate the corresponding systematic error and subtracted that error from $\omega^{\mathcal{NE}}$ to obtain $\wNEt$. 
The statistical uncertainty in the shift made to $\omega^{\mathcal{NE}}$ contributed to the systematic uncertainty. \item If we observed a signal in a channel that we deemed important to understand, and it was not understood, but was not observed to be correlated with $\omega^{\mathcal{NE}}$, we set an upper limit on the shift in $\omega^{\mathcal{NE}}$ due to a possible correlation between the two channels. Since such a signal represented a gap in our understanding of the experiment, we added this upper limit as a contribution to the systematic uncertainty. \item If a similar experiment saw a nonzero, not understood correlation between their measurement channel and some parameter with an ideal experimental value, but we did not observe an analogous correlation, we set an upper limit on the shift in $\omega^{\mathcal{NE}}$ due to this imperfection. Since this signal may have signified a gap in our understanding of our experiment, we added this upper limit as a contribution to the systematic uncertainty. \end{enumerate} \begin{table}[tbp] \centering \caption{Systematic error shifts and uncertainties for $\omega^{\mathcal{NE}}$, in units of mrad/s grouped by inclusion class (defined in the text). Total uncertainties are calculated by summing the individual contributions in quadrature. 
Note that $\omega^{\mathcal{NE}}\approx1.3$~mrad/s corresponds roughly to $1\times10^{-29}~\ecm$ for our experiment.} \begin{tabular}{llcc} \br Class & Parameter & Shift (mrad/s) & Uncertainty (mrad/s)\\ \mr A & $\E^{\rm{nr}}$ correction & $-0.81$ & $0.66$\\ A & $\Omega_{\rm r}^{\N\E}$ correction & $-0.03$ & $1.58$\\ A & $\omega^{\E}$ correlated effects & $-0.01$ & $0.01$\\ B & $\omega^{\N}$ correlation & & $1.25$\\ C & Non-reversing $\B$-field $\left(\B_{z}^{\rm{nr}}\right)$ & & $0.86$\\ C & Transverse $\B$-fields $\left(\B_{x}^{\rm{nr}},\B_{y}^{\rm{nr}}\right)$ & & $0.85$\\ C & $\B$-field gradients & & $1.24$\\ C & Prep./readout laser detunings & & $1.31$\\ C & $\Nsw$ correlated detuning & & $0.90$\\ C & $\E$-field ground offset & & $0.16$\\ \mr & Total Systematic & $-0.85$ & $3.24$\\ \mr & Statistical Uncertainty & & $4.80$\\ \mr & Total Uncertainty & & $5.79$\\ \br \end{tabular} \label{tbl:syst_error} \end{table} Table \ref{tbl:syst_error} contains a list of the contributions to our systematic error, grouped by inclusion class, with the corresponding shifts and/or uncertainties. Accounting for class A systematic errors was obligatory, and the removal of these errors from $\omega^{\mathcal{NE}}$ can be viewed as a redefinition of the measurement channel to $\wNEt$ which does not contain those unwanted effects. These systematic errors consisted of those that depended on the parameters $\E^{\rm{nr}}$, $\Omega_{\rm r}^{\N\E}$, and $\omega^{\E}$ as described in sections \ref{sssec:correlated_laser_parameters} and \ref{ssec:E_correlated_phase}, and as such our reported measurement of the $T$-odd spin precession frequency is defined as $\wNEt=\omega^{\mathcal{NE}}-\omega^{\mathcal{NE}}_{\E^{\rm{nr}}}-\omega^{\mathcal{NE}}_{\Omega_{\rm r}^{\N\E}}-\omega^{\mathcal{NE}}_{\omega^\E}$. 
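The quadrature sums in table \ref{tbl:syst_error} can be reproduced from the listed entries. The following sketch is our own illustration; since the table entries are rounded to two decimal places, the totals computed from them differ slightly from the quoted values, which were obtained from unrounded contributions:

```python
import math

def quad_sum(values):
    """Combine independent uncertainty contributions in quadrature."""
    return math.sqrt(sum(v ** 2 for v in values))

# Uncertainty contributions (mrad/s) from the rounded table entries
contributions = [0.66, 1.58, 0.01,                      # class A
                 1.25,                                  # class B
                 0.86, 0.85, 1.24, 1.31, 0.90, 0.16]   # class C

syst = quad_sum(contributions)   # ~3.2 (table: 3.24 from unrounded values)
total = quad_sum([syst, 4.80])   # ~5.8 (table: 5.79), adding the statistical
                                 # uncertainty in quadrature
```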
The class B and class C systematic errors were included in the systematic uncertainty to lend credence to our result despite unexplained signals and unexplained systematic errors in experiments similar to ours. All uncertainties in the contributions to the systematic error were added in quadrature to obtain the systematic uncertainty. With reference to the class B criterion, we deemed the following channels as important to understand: $\omega^{\N},$ $\omega^{\E},$ $\omega^{\E\B}$, and $\omega^{\N\E\B}$. Signals were initially not expected in any of these channels and could be measured with the same precision as $\omega^{\mathcal{NE}}$. The $\omega^{\rm{nr}}$, $\omega^{\B}$ and $\omega^{\N\B}$ channels were not included in this set since the Zeeman spin precession signals present in these channels had non-stationary means and additional noise due to drift in the molecule beam velocity. Only one of these channels, $\omega^{\N}$, described in section \ref{ssec:N_correlated_pointing}, met the class B inclusion criterion. With reference to the class C criterion, we defined the set of experiments similar to ours to include other eEDM experiments performed with molecules: the YbF experiment \cite{Hudson2011} and the PbO experiment \cite{Eckel2013}. The PbO experiment observed unexplained systematic errors coupling to stray magnetic fields and magnetic field gradients (cf.\ section~\ref{ssec:magnetic_field_imperfections}), and the YbF experiment observed unexplained systematic errors proportional to detunings (cf.\ section~\ref{sssec:laser_detuning}) and a field plate ground voltage offset (cf.\ section~\ref{ssec:electric_field_imperfections}). Thus we included the systematic uncertainty associated with the aforementioned effects in our budget. 
After having accounted for the systematic errors and systematic uncertainty, we reported $\wNEt$, the contribution to the channel $\omega^{\mathcal{NE}}$ induced by $T$-odd interactions present in the $H$ state of ThO, as \begin{align} \label{eq:wNEt_num} \wNEt=&2.6 \pm 4.8_{\rm{stat}}\pm 3.2_{\rm{syst}}~\rm{mrad}/\rm{s}\\ =&2.6 \pm 5.8~\rm{mrad}/\rm{s}, \label{eq:wNEt_num_err_comb} \end{align} where the combined uncertainty is defined as the quadrature sum of the statistical and systematic uncertainties, $\sigma^2=\sigma_{\rm{stat}}^2+\sigma_{\rm{syst}}^2$. This result is consistent with zero within $1\sigma$. Since $\sigma_{\rm{syst}}$ is to some extent a subjective quantity, its inclusion should be borne in mind when interpreting confidence intervals based on $\sigma$. Nevertheless, this inclusion decision does not have a large impact on the meaning of the resulting confidence intervals since $\sigma$ is only about 20\% larger than $\sigma_{\rm{stat}}$. \subsubsection{Overview} \hspace*{\fill} \\ In this section we provide an overview of our experimental procedure and the important components of our apparatus. The reader should consult subsequent subsections for further details. A schematic of the experimental apparatus is shown in figure~\ref{fig:apparatus_overview}. \begin{figure}[!ht] \centering \includegraphics[scale=0.46]{apparatus_overview.pdf} \caption{A schematic of the overall ACME experimental apparatus. A beam of ThO molecules was produced by a cryogenic buffer-gas-cooled source. After exiting the source, the molecules were rotationally cooled via optical pumping and microwave mixing and then collimated before entering a magnetically shielded spin-precession region where nominally uniform magnetic and electric fields were applied. Using optical pumping, the molecules were transferred into the eEDM-sensitive $H$ state and then a spin superposition state was prepared. 
The spin precessed for a distance of ${\approx}22$~cm and was then read out via laser-induced fluorescence. The fluorescence photons were collected by lenses and passed out of the chamber for detection by photomultiplier tubes. See main text for further details.\label{fig:apparatus_overview}} \end{figure} ThO molecules were produced via pulsed laser ablation of a ThO$_2$ ceramic target. This took place in a cryogenic neon buffer gas cell, held at a temperature of ${\approx}16$~K, at a repetition rate of 50~Hz. The resulting molecular beam was collimated and had a forward velocity $v_{\parallel}\approx200$~m/s. In the state readout region the molecular pulses had a temporal (spatial) length of around 2~ms (40~cm). The buffer gas beam source is described in detail in section~\ref{sec:beamsource}. After leaving the buffer gas source, the molecules had a velocity distribution and rotational level populations consistent with a Maxwell-Boltzmann distribution at a temperature of ${\approx}4$~K. This was lower than the cell temperature due to expansion cooling, which enhanced the number of usable ThO molecules in the relevant rotational state. Further rotational cooling was provided via optical pumping and microwave mixing (see section~\ref{sec:rotcool}). The molecules then passed through adjustable horizontal and vertical collimators consisting of a double layer of razor blades affixed to linear translation vacuum feedthroughs. Under normal running conditions, these collimators were withdrawn so that they did not affect the profile of the molecule beam in the spin-precession region; however, they were used to modify the spatial profile of the molecule beam during systematic checks to investigate the effect of molecule beam position and pointing. 
Just before the field plates, 126~cm from the beam source, the molecules passed through a 1~cm square collimating aperture, which determined the beam profile in the spin-precession region and prevented particles in the beam from being deposited on the field plates. As described in section~\ref{sec:Measurement_scheme}, a spin precession measurement was performed where the precession angle provided a measure of the interaction energy of an eEDM with the effective electric field, $\mathcal{E}_{\rm eff}$, in the molecule. A pair of transparent, ITO-coated glass plates provided an electric field that polarised and aligned the molecules. Laser beams passed through these plates to perform state preparation and readout. Around the vacuum chamber were coils that provided a uniform magnetic field in the $+\hat{z}$ direction, and five layers of magnetic shielding that attenuated environmental magnetic fields. The electric and magnetic fields are discussed in detail in sections~\ref{ssec:efields} and \ref{sec:bfields}. The fluorescence induced by the state readout laser beam was collected by a set of eight lenses and transferred out of the spin-precession region using fiber bundles and light pipes (see section \ref{sec:fluorescence_collection}), where it was detected by photomultiplier tubes\footnote{Hamamatsu R8900U-20.}. \subsubsection{Buffer Gas Beam Source} \hspace*{\fill} \\ \label{sec:beamsource} The basic operation of our beam source \cite{Maxwell2005,Petricka2007,Sushkov2008,Patterson2009,Campbell2009Review,Tu2009,Patterson2010,Hutzler2011,Barry2011,Lu2011,Skoff2011PRA,Skoff2011Thesis,Hutzler2012,HutzlerThesis,Hummon2013,Bulleid2013} is depicted in figure~\ref{fig:beam_source_schematic}. \begin{figure}[!ht] \centering \includegraphics[width=13.5cm]{beam_source_schematic.pdf} \caption{A schematic of the buffer gas beam source. 
Neon buffer gas flowed into a cell at a temperature of 16~K where it served to thermalise the hot ThO molecules produced by laser ablation. The ThO was entrained in the buffer gas flow. The mixture exited the cell and its expansion cooled the ThO to $\approx4$~K. The resulting beam passed through collimating apertures in the 4~K and 60~K radiation shields and exited the beam source into the high vacuum region of the experiment. Solid circles represent buffer gas atoms. Open circles represent ThO molecules being cooled (red to blue transition).} \label{fig:beam_source_schematic} \end{figure} Neon buffer gas was flowed at a rate of $\approx30$~SCCM (standard cubic centimetres per minute) through a copper cell held at $T\approx16$~K. The inside of the cell was cylindrical with a diameter of 13~mm and a length of 75~mm. Within the cell, ThO was introduced at high temperature via laser ablation: overlapped beams of light with wavelengths 532~nm and 1064~nm emitted by a pulsed Nd:YAG laser\footnote{Litron Nano TRL 80-200.} were focussed onto a 1.9~cm diameter ${\rm ThO}_2$ target fabricated from pressed and sintered powder \cite{Balakrishna1988,KiggansPrivate}. The laser pulses had a duration of a few ns, a pulse energy up to approximately 100~mJ and a repetition rate of $50$~Hz. The resulting hot plume of ejected particles, which contained ThO along with various other ablation byproducts, was cooled by collisions with the neon buffer gas, became entrained, and then exited the cell. The cell temperature was maintained by a combination of a pulse tube refrigerator\footnote{Cryomech PT415.} and a resistive heater. The cell was surrounded by a 4~K copper shield that protected the cell from black-body radiation and cryopumped most of the neon emerging from the cell. This shield was also partially covered with activated charcoal that acted as a cryopump for residual helium in the neon buffer gas. 
We observed a background pressure of $10^{-7}$~Torr when the source was cold with no buffer gas flowing, without any mechanical pumping of the beam source. The 4~K shield had a stainless steel conical collimator with a circular aperture of diameter 6~mm, located 25~mm from the cell aperture, by which distance the expanding beam was sufficiently diffuse that intra-beam collisions were negligible and most trajectories were ballistic. This collimator thus functioned as a differential pumping aperture without affecting the beam's cooling, acceleration or expansion \cite{Hutzler2011}. The collimator had a thermal standoff relative to the 4~K shield to which it was mounted so that it could be kept at a temperature above the freezing point of neon by a resistive heater, preventing ice buildup on the collimator from adversely affecting the beam dynamics. Another layer of shielding surrounded the 4~K copper shield, constructed from aluminium and held at a temperature of 60~K. Both the 4~K and 60~K radiation shields were thermally connected to the pulse tube by heat links made of flexible copper rope. The aluminium vacuum chamber that housed the buffer gas beam source\footnote{Precision Cryogenic Systems Inc.} had windows on each side, providing optical access for both the ablation laser and spectroscopy lasers, the latter allowing characterisation and monitoring of beam properties. The ThO beam's forward velocity distribution was roughly Gaussian with mean $v_{\parallel}\approx200$~m/s and standard deviation $\sigma_{v_{\parallel}}\approx13$~m/s, corresponding to a temperature of ${\approx}5$~K. The rotational temperature was $T_{\rm rot}\approx4$~K (rotational constant $B_X\approx0.33$~cm$^{-1}$), meaning that ${\approx}90$\% of the population was contained in the levels $J=0$--$3$. Upon exiting the cell, the beam had a FWHM angular spread of $\approx45^{\circ}$. Several stages of collimation were applied before reaching the spin-precession region. 
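Two of the quoted beam temperatures can be reproduced with a short calculation. This is our own illustrative sketch: the physical constants and the ThO mass of 248~u ($^{232}$Th + $^{16}$O) are our inputs. The forward-velocity spread implies $T=m\sigma_{v_\parallel}^2/k_{\rm B}\approx5$~K, and a Boltzmann distribution at $T_{\rm rot}=4$~K with $B_X\approx0.33$~cm$^{-1}$ puts most of the rotational population in $J=0$--$3$:

```python
import math

kB = 1.380649e-23     # Boltzmann constant, J/K
amu = 1.66053907e-27  # atomic mass unit, kg
m = 248 * amu         # ThO mass (232 + 16 u)

# Translational temperature from the forward velocity spread
sigma_v = 13.0                 # m/s
T_fwd = m * sigma_v ** 2 / kB  # ~5 K, as quoted

# Rotational populations ~ (2J+1) exp(-B J(J+1) hc / kB T)
B_K = 0.33 * 1.4388  # rotational constant in kelvin (hc/kB = 1.4388 cm K)
T_rot = 4.0
pop = [(2 * J + 1) * math.exp(-B_K * J * (J + 1) / T_rot) for J in range(12)]
frac_J0to3 = sum(pop[:4]) / sum(pop)  # ~0.86, close to the quoted ~90%
```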
The final collimator subtended a solid angle of $\approx6\times10^{-5}~{\rm sr}$, meaning 1 in ${\sim}20,000$ molecules exiting the cell reached the spin-precession region, where the precession measurement was performed (see figure~\ref{fig:apparatus_overview}). ThO yields from a given ablation spot decreased significantly after ${\sim}10^4$--$10^5$ YAG pulses (${\sim}10$~mins), at which time the laser spot was moved to an undepleted region via a motorised mirror to re-optimise the beam flux. Each target was found to provide acceptable levels of molecule flux for around 300~hours of continuous running (${\approx}5\times10^7$ shots) before requiring replacement. \subsubsection{Rotational Cooling} \hspace*{\fill} \\ \label{sec:rotcool} We observed that ${\approx}2$~cm downstream of (further from) the buffer gas beam source cell aperture, $J$-changing collisions were `frozen out' \cite{Hutzler2011}, and the distribution of rotational state populations was fairly well described by a Boltzmann distribution with temperature $T_{\rm rot}\approx4$~K. At this temperature the resulting fractions of molecules in the $J=0$--3 levels were estimated to be 0.1, 0.3, 0.3 and 0.2 respectively. As described in section~\ref{sec:state_prep_read}, we sought to transfer as much of the initial ground state population as possible into $\ket{H,J=1}$ via optical pumping. To enhance the population which was transferred, we accumulated population in a single rotational level of the ground state before state preparation. The scheme used to achieve this, which we refer to as rotational cooling, is illustrated schematically in figure~\ref{fig:rotcool} and discussed in detail in \cite{SpaunThesis}. \begin{figure}[!ht] \centering \includegraphics[scale=0.55]{rotcool.pdf} \caption{Schematic of the rotational cooling process. Numbers label $J$; the $M_y$ sublevels (projection of total angular momentum along $y$) are unlabelled but are $-1$, 0, $+1$ from left to right. 
Population was first optically pumped out of the $J=2$ and $J=3$ levels ($C$-state $\Omega$-doublet structure and $M_y$ sublevels omitted for clarity) in a nominally field-free region. Next, population was equilibrated between $\ket{J=0}$ and $\ket{J=1,M_y=0}$ via microwave pumping. An electric field of ${\approx}40$~V/cm along $\hat{y}$ was empirically observed to lead to an increased population in $\ket{X,J=1}$. Grey dots represent population before these pumping processes. The schematic on the right represents the populations inside the spin-precession region (after pumping).\label{fig:rotcool}} \end{figure} The first stage of the process was the optical pumping of molecules out of $\ket{X,J=2}$ ($\ket{X,J=3}$), via $\ket{C,J^{\prime}=1}$ ($\ket{C,J^{\prime}=2}$) into $\ket{X,J=0}$ ($\ket{X,J=1}$) using laser light at 690~nm. The natural linewidth of the $X\rightarrow C$ transition is ${\approx}2\pi\times0.3$~MHz; however, the usable molecules had a ${\approx}\pm0.7$~m/s transverse velocity spread, corresponding to a $1\sigma$ Doppler width of ${\approx}2\pi\times1.5$~MHz at 690~nm. Because the lasers used had linewidths of ${\lesssim}1$~MHz, to completely optically pump these molecules we relied on a combination of power broadening and extended interaction time. Optical pumping occurred in a magnetically unshielded region where a background field $\B\approx500$~mG was present; however, the magnetic moment of $X$ ($C$) is ${\sim}\mu_{\rm N}$, the nuclear magneton (${\approx}\mu_{\rm B}/J(J+1)$), which led to a Zeeman shift of ${\sim}2\pi\times400$~Hz (${\lesssim}2\pi\times400$~kHz) such that the $M$ sublevels were not resolved by our lasers. The $\ket{C,J=1}$ state has an $\Omega$-doublet splitting of $\Delta_{\Omega,C,J=1}\approx2\pi\times51$~MHz \cite{Edvinsson1965}. This splitting scales as $\Delta_{\Omega,C,J}\propto J(J+1)$, meaning we could spectroscopically resolve the $\Omega$-doublets for all $\ket{C,J}$. 
In addition, having no $\E$-field present meant that the $M$ sublevels of $C$ and $X$ remained unresolved and the energy eigenstates remained parity eigenstates. The $X$ state is also insensitive to $\E$-fields due to the lack of $\Omega$-doublet substructure; opposite parity states are separated by ${\sim}10$~GHz and were hence unmixed. Laser beams with linear polarisation alternating between $\hat{x}$ and $\hat{y}$ were used to ensure that all population in $\ket{X,J=2,3}$ was addressed. This was achieved by directing around 10 passes of the beam, offset in $x$, through the vacuum chamber, passing through a quarter-wave plate twice in each pass, over a distance of around 2~cm. The laser light for rotational cooling was derived from home-built extended cavity diode lasers (ECDLs). The lasers were frequency-stabilised using a scanning transfer cavity with a computer-controlled servo \cite{YuliaThesis}. Light at 1064~nm from an Nd:YAG laser, frequency-stabilised by locking its frequency-doubled output to a molecular iodine line via modulation transfer spectroscopy \cite{FarkasThesis}, provided the reference for the transfer cavity. After this first stage of rotational cooling, there was significantly greater population in the $\ket{X,J=0}$ state than in any of the $\ket{X,J=1,M}$ sublevels. We obtained a ${\approx}25\%$ increase in the $J=1$ population by applying a continuous microwave field, resonant with the $J=0\rightarrow J=1$ transition; a sufficiently high microwave power combined with the inherent velocity dispersion of the molecule beam led to an equilibration of population between the coupled levels \cite{SpaunThesis}. In this second stage of rotational cooling it was empirically observed that applying an electric field to lift the $M_y$ sublevel degeneracy was necessary to obtain the increased population in $\ket{X,J=1}$. A pair of copper electric field plates (spacing $\approx4$~cm) provided a field of ${\approx}40$~V/cm in the $\hat{y}$ (vertical) direction. 
We applied microwaves resonant with the Stark-shifted $\ket{J=0}\rightarrow\ket{J=1,M_y=0}$ transition at a frequency of $2\pi\times19.904521$~GHz from an \emph{ex vacuo} horn. Between the rotational cooling and spin-precession regions of the experiment (see figure~\ref{fig:apparatus_overview}) there was not a well-defined quantisation axis, and we observed that the populations of the $\ket{J=1,M}$ magnetic sublevels were equalised by the time the molecules reached the state preparation region. Overall, we found that rotational cooling provided a factor of between 1.5 and 2.0 increase in the molecule fluorescence signal $F$ in the state readout region. This gain factor was observed to vary slowly over time, possibly due to variations in the rotational temperature of the molecule beam, with significant changes sometimes observed when the ablation target was changed. \subsubsection{State Preparation and Readout} \hspace*{\fill} \\ \label{sec:state_prep_read} Following rotational cooling, the molecular beam passed into the spin-precession region, where the molecules experienced a nominally uniform electric field, $\vec{\E}$, which was nominally collinear with a magnetic field, $\vec{\B}$. Note that since neither of the states $X^1\Sigma^+$ nor $A^3\Pi_{0+}$ has $\Omega$-doublet structure, parity remained a good quantum number for these levels for the small (${\sim}100$~V/cm) electric fields we applied. We transferred the molecules into the $H$ electronic state via optical pumping, as illustrated in figure~\ref{fig:op_sublevels}. \begin{figure}[!ht] \centering \includegraphics[width=10cm]{op_sublevels.pdf} \caption{Schematic of the optical pumping scheme used to populate the $H$ state. Spontaneous decay to the $H$ state (green arrows) led to an incoherent mixture of all indicated levels. 
See main text for detailed explanation.} \label{fig:op_sublevels} \end{figure} A 943~nm laser beam nominally propagating along $\hat{z}$ excited molecules from $\ket{X,J=1}$ to $\ket{A,J=0}$. The laser beam passed through a quarter-wave plate, was retroreflected and offset in $x$, then passed again through the quarter-wave plate, such that the molecules were pumped by two spatially separated laser beams of orthogonal polarisations, allowing all population in both the $\ket{X,J=1,M=\pm1}$ levels to be excited. After excitation to $A$, the molecules could spontaneously decay into the $\ket{H,J=1}$ manifold of states. We observed a transfer efficiency from $X$ to $H$ of ${\approx}0.3$ \cite{SpaunThesis}. In this decay, five out of the six sublevels were populated; 1/6 of the population decayed to each of $\ket{H,M=\pm1,\Nsw=\pm 1}$ and 1/3 to $\ket{H,\Psw=-1,M=0}$ (see sections~\ref{sec:tho_molecule} and \ref{sec:Measurement_scheme} for definitions of $\Nsw$ and $\Psw$); decay to $\ket{H,\Psw=+1,M=0}$ is forbidden. Of these five populated states, only one corresponded to the desired initial state described by equation~\ref{eq:dark_state}, and only 1/6 of the population in the $H$ state was in this desired state. We estimated a total transfer efficiency from $\ket{X,J=1,M=\pm 1}$ to the state in equation~\ref{eq:dark_state} of $30\%\times1/6=5\%$. The 943~nm laser light was derived from a commercial ECDL and then amplified by a commercial tapered amplifier\footnote{Toptica DL Pro and BoosTA.}, generating $\approx400$~mW. As with the rotational cooling lasers, we verified that the power was sufficient to drive optical pumping to completion across the entire transverse velocity distribution of the molecular beam. This laser was also stabilised via the previously described (section~\ref{sec:rotcool}) transfer cavity. 
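The optical pumping budget quoted above is simple bookkeeping; the following sketch (our own illustration, using only the branching fractions stated in the text) confirms that the five populated sublevels account for all of the decayed population and that the net transfer efficiency into the desired state is 5\%:

```python
from fractions import Fraction

# Spontaneous decay from |A, J=0> into the |H, J=1> manifold:
# 1/6 each into the four |M=±1, N=±1> sublevels, 1/3 into |P=-1, M=0>,
# and none into the forbidden |P=+1, M=0> sublevel.
branching = 4 * [Fraction(1, 6)] + [Fraction(1, 3)]
assert sum(branching) == 1  # all decays accounted for

# Only one populated sublevel is the desired initial state (fraction 1/6),
# and the measured X -> H transfer efficiency is ~0.3
efficiency = 0.30 * (1 / 6)  # = 0.05, the quoted 5%
```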
The frequency of the laser light was monitored every 30--60 minutes by scanning across the molecular resonances, allowing for independent fine-tuning and compensation of long-term frequency changes (${\lesssim}2\pi\times100$~kHz per half hour) due to e.g.\ temperature drifts in the cavity. Around 1~cm downstream of the optical pumping laser beam that transferred population to $H$, we prepared the initial state of $H$ (equation~\ref{eq:dark_state}) by driving the transition between $\ket{H,M=\pm1,\Nsw}$ and $\ket{C,\Psw=+1}$ (see section~\ref{sec:Measurement_scheme} for more details) using laser light at 1090~nm. A distance $L=22$~cm downstream of the preparation laser, a second 1090~nm laser beam was used to read out the molecule state via the same transition (but with the option to excite to either $\Psw$ state). This laser light was also derived from a commercial ECDL. It was then amplified using a fiber amplifier\footnote{Keopsys KPS-BT2-YFA-1083-SLM-PM-05-FA.}, increasing the power to ${\approx}250$~mW. AOMs were then used to split and frequency shift the light to address both $\Nsw$ states in the $H$ state, allowing spectroscopic selection of molecular alignment, and of both $\Psw$ levels in the $C$ state. Switching between these frequencies was achieved with either RF switches\footnote{Mini-Circuits ZYSWA-2-50DR.} or a DDS synthesizer\footnote{Novatech 409B.}. Given the linear Stark shifts $D_1\E\approx2\pi\times146$~MHz ($2\pi\times37$~MHz) in $H$ with an applied electric field strength $|\E|=141$~V/cm (36~V/cm), and the excited state $\Omega$-doublet splitting $\Delta_{\Omega,C,J=1}\approx50$~MHz in $C$, these transitions were spectroscopically well-resolved. We fixed the nominal frequency of the state preparation laser to only address $\Psw=+1$, but periodically switched the state readout laser frequency to address $\Psw=\pm1$ (${\sim}1$~min period).
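The two quoted Stark shifts imply a consistent dipole moment $D_1$, which is what makes these transitions well-resolved; a quick numerical check (illustrative only, using the values above):

```python
# Check that the two quoted Stark shifts imply the same dipole moment D1.
# The common 2*pi factor is omitted throughout.
shift_141 = 146e6  # Hz, Stark shift at |E| = 141 V/cm
shift_36 = 37e6    # Hz, Stark shift at |E| = 36 V/cm

d1_a = shift_141 / 141  # ~1.04 MHz/(V/cm)
d1_b = shift_36 / 36    # ~1.03 MHz/(V/cm)
assert abs(d1_a - d1_b) / d1_a < 0.01  # consistent at the ~1% level

# Both Stark splittings are large compared with the 50 MHz C-state
# doublet splitting and typical linewidths, so all lines are resolved.
print(f"D1 ~ {d1_a / 1e6:.2f} MHz/(V/cm)")
```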
The frequencies of the state preparation and state readout laser beams were switched synchronously so that both beams always addressed the same $\Nsw$ level, with a switch between $\Nsw$ levels every 0.5~s. The state preparation and readout laser beams were then independently amplified with a pair of fiber amplifiers\footnote{Nufern PSFA-1084-01-10W-1-3.}, providing ${\sim}3$--4~W of power. Immediately before interrogating the molecules, the polarisation of the state readout laser beam was rapidly (100~kHz) switched between two orthogonal linear polarisations. The scheme for producing the $\Nsw$ and $\Psw$ switches, and this fast polarisation switch, together with the corresponding laser transitions, is shown in figure~\ref{fig:HC_transitions_setup}. We now describe in detail how the appropriate frequency laser light was produced. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{HC_transitions_setup.pdf} \caption{Top: transitions addressed during state preparation and readout (not to scale). The grey arrow represents the ECDL output frequency, $\omega_0$, not resonant with any transition and referenced from halfway between the two $H$ state $\Omega$-doublets. Bottom: simplified schematic of how we produced light at the appropriate frequencies. AOM-induced frequency shifts are denoted in the corresponding boxes. Bifurcation of grey lines represents light being split equally. Multiple lines represent different frequencies; only one frequency is used at once. Dashed grey lines represent a continuation of the optical path. AOMs to perform switching between $\Nsw$ states; switching between $\Psw$ states and adding relative detuning $\Delta$; tuning Rabi frequency $\Omega_{\rm r}$; and performing polarisation switching are shown. The setup shown is used with $\E=142$~V/cm and changes slightly if a different value of $\E$ is used.
For a full description, consult the main text.} \label{fig:HC_transitions_setup} \end{figure} \clearpage Light from the ECDL was amplified and split equally, passing to two AOMs which produced shifts $\pm\omega_{\rm L}^{\mathcal{N}}$ where $\omega_{\rm L}^{\mathcal{N}}$ is half the splitting between the two $\Nsw$ states; these AOMs were switched on and off to perform the $\Nsw$ switch. The two frequency-shifted beams were combined and overlapped. For state preparation (lower branch of diagram), another AOM shifted the light by $+\wLs{1}$, into resonance with the lower $\Omega$-doublet in $C$ ($\Psw=+1$). This light was then amplified again and passed through an AOM to vary the power (used as a systematic check). For the state readout (upper branch of diagram), a single AOM switched frequency to produce shifts $+\wLs{2,3}$ for the two $\Psw$ states. A relative detuning between state preparation and readout laser beams (not shown) was also implemented with this AOM. (Shifts common to both beams were made by changing $\omega_0$.) The light was then amplified again and passed through an AOM to vary the power. Finally, polarisation switching was achieved with two AOMs switched on and off at 100~kHz, $\pi$ out of phase with each other; light that was not diffracted by the first AOM (which shifted the light it diffracted by $-\wLs{\rm PS}$) was instead diffracted, with the same $-\wLs{\rm PS}$ frequency shift, by the second AOM. The diffracted light from each path was combined on a polarising beam splitter such that the linear polarisation of the final output beam alternated. Based on the notation above we can now write the components of the frequencies of the state preparation and readout laser beams which do not reverse with any experimental switch as $\omega_{\rm L,prep}^{\rm{nr}}=\omega_{\rm L,0}+\omega_{\rm L,1}$ and $\omega_{\rm L,read}^{\rm{nr}}=\omega_{\rm L,0}+(\omega_{\rm L,2}+\omega_{\rm L,3})/2-\omega_{\rm L,PS}$, respectively.
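The AOM bookkeeping above can be sketched in a few lines (the numeric shift values below are placeholders, not the experiment's actual AOM frequencies; only the combinations come from the text):

```python
# Compose the preparation and readout laser frequencies from the AOM shifts
# described above, then form the switch-averaged (non-reversing) and
# P-correlated components. All numeric values are placeholders.
w0 = 0.0     # ECDL reference frequency, taken as the origin
wN = 73e6    # +/- shift selecting the N = +/-1 state (placeholder)
w1 = 25e6    # preparation shift onto the P = +1 doublet (placeholder)
w2 = 25e6    # readout shift for one P state (placeholder)
w3 = 75e6    # readout shift for the other P state (placeholder)
wPS = 80e6   # polarisation-switching AOM shift (placeholder)

def w_prep(N):
    """Preparation beam frequency for a given N switch state."""
    return w0 + N * wN + w1

def w_read(N, P):
    """Readout beam frequency for given N, P switch states."""
    return w0 + N * wN + (w2 if P == +1 else w3) - wPS

# Non-reversing components as defined in the text:
w_prep_nr = w0 + w1
w_read_nr = w0 + (w2 + w3) / 2 - wPS
w_read_P = (w2 - w3) / 2  # P-correlated component

# Averaging over the switch states recovers the same decomposition:
assert w_prep_nr == (w_prep(+1) + w_prep(-1)) / 2
assert w_read_nr == sum(w_read(N, P) for N in (+1, -1) for P in (+1, -1)) / 4
assert w_read_P == (w_read(+1, +1) - w_read(+1, -1)) / 2
```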
We can also write the $\Psw$-correlated frequency component of the state readout laser as $\omega_{\rm L,read}^{\P}=(\omega_{\rm L,2}-\omega_{\rm L,3})/2$. We then write the detuning components as $\Delta_i=\omega_{{\rm L},i}-\omega_{HC}$ where $i\in\left\{\mathrm{ prep},X,Y\right\}$ indexes the laser and $\omega_{HC}$ is the transition frequency between the line centres of the $\ket{H,J=1}$ and $\ket{C,J=1}$ manifolds\footnote{Note that this can in principle vary between different laser beams (denoted with the subscript $i$) if there is a relative pointing between them, which produces a relative Doppler shift, but we ignore this effect in our current treatment.}. We can rewrite this overall detuning in terms of various switch parity components: \begin{align} \Delta_{i}=&\omega_{{\rm L},i}-\omega_{HC,i}\\ =&\left(\omega_{{\rm L},i}^{\rm{nr}}+\Nsw\wL^{\N}+\Psw\wL^{\P}\delta_{i,\left\{X,Y\right\} }\right)-\left(\omega_{HC}^{\rm{nr}}+\Nsw D_1\left|\E(x_{i})\tilde{\E}+\E^{\rm{nr}}(x_{i})\right|-\frac{1}{2}\Delta_{\Omega,C,J=1}\Psw\delta_{i,\left\{ X,Y\right\} }\right)\\ =&\Delta_{i}^{\rm{nr}}+\Nsw\Delta_{i}^{\N}+\Nsw\Esw\Delta_{i}^{\N\E}+\Psw\Delta_{i}^{\P}\delta_{i,\left\{ X,Y\right\}}. \label{eq:detuningcorrelations} \end{align} In the above equations we have defined detuning components of given switch parities --- we shall now explain each component in turn. $\Delta_{i}^{\N}=(\wL^{\N}-D_1\E(x_{i}))$ is the mismatch between the Stark shift $D_1\E(x_{i})$ and the AOM frequency $\wL^{\N}$ used to switch between resonantly addressing the two $\Nsw$ states, where $x_i$ is the $x$ position of laser beam $i$. $\Delta_{i}^{\N\E}=D_1\E^{\rm{nr}}(x_{i})$ is a detuning component correlated like an eEDM signal which is due to a non-reversing component of the applied electric field. To understand this relation, consider figure~\ref{fig:Enr_wNE}. Recall that $\Delta_{\Omega,C,J=1}$ is the $\Omega$-doublet splitting of the $C$ state. 
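Equation~\ref{eq:detuningcorrelations} also shows how each detuning component can be isolated in practice: average the measured detunings weighted by the product of the relevant switch signs. A minimal illustration with synthetic numbers (not measured values):

```python
from itertools import product

# Synthetic 'true' detuning components in Hz (arbitrary illustrative values):
true = {"nr": 5e3, "N": -2e3, "NE": 1e3, "P": 4e3}

def detuning(N, E, P):
    """Detuning for one switch state, following eq. (detuningcorrelations)."""
    return true["nr"] + N * true["N"] + N * E * true["NE"] + P * true["P"]

states = list(product((+1, -1), repeat=3))  # all (N, E, P) switch states

def component(weight):
    """Parity sum: weight each measurement by a product of switch signs."""
    return sum(weight(N, E, P) * detuning(N, E, P) for N, E, P in states) / len(states)

assert component(lambda N, E, P: 1) == true["nr"]       # non-reversing part
assert component(lambda N, E, P: N) == true["N"]        # N-odd part
assert component(lambda N, E, P: N * E) == true["NE"]   # eEDM-like correlation
assert component(lambda N, E, P: P) == true["P"]        # P-odd part
```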
\begin{figure}[!ht] \centering \includegraphics[width=8cm]{Enr_wNE.pdf} \caption{Illustration of $\Delta^{\N\E}$ arising from a non-reversing electric field $\E^{\rm{nr}}$. Dashed lines show energy levels in the presence of $\E^{\rm{nr}}$. Colours indicate whether the laser shown in dark red is blue- or red-detuned from the transition.} \label{fig:Enr_wNE} \end{figure} For $\E^{\rm{nr}}\ne0$, $|\E|$, and hence the splitting between the $\Nsw$ levels in $H$, depends on $\Esw$. If the laser frequency for each $\Nsw$ is set assuming $\E^{\rm{nr}}=0$, a nonzero $\E^{\rm{nr}}$ leads to blue or red detuning from resonance, correlated with $\Esw$. Because the sign of the Stark shift is correlated with $\Nsw$, the resulting detuning is also correlated with $\Nsw$. $\Delta_{i}^{\P}=\wL^{\P}+\Delta_{\Omega,C,J=1}/2$ is the mismatch between the excited state parity splitting and the AOM frequency, $\wL^{\P}=(\omega_{\rm L,2}-\omega_{\rm L,3})/2$, used to switch between the two states ($\delta_{i,\left\{ X,Y\right\}}$ is the Kronecker delta: 1 if $i=X$ or $i=Y$, and zero otherwise). We observed that $\Delta^{\N}$ ($\Delta^{\P}$) was typically less than $2\pi\times20$~kHz ($2\pi\times50$~kHz). Although we could measure $\Delta^{\N}$ with ${\sim}2\pi\times1$~kHz precision, fluctuations in the Stark splitting, likely caused by thermally-induced fluctuations of the field plate spacing, limited our ability to zero out this correlated detuning. We define $\Delta^{\rm{nr}}=(\Delta^{\rm{nr}}_{{\rm prep}}+(1/2)(\Delta^{\rm{nr}}_{X}+\Delta^{\rm{nr}}_{Y}))/2$ as the average non-reversing detuning of the state preparation and readout laser beams; its value typically fluctuated by ${\sim}2\pi\times0.1$~MHz over several hours. Every 30--60 minutes the value of $\Delta^{\rm{nr}}$ was scanned across the molecular resonance in the readout region using the $\Delta$-tuning AOM (see figure~\ref{fig:HC_transitions_setup}), as an auxiliary optimisation.
$\Delta^{\rm{nr}}$ was set to the value where the fluorescence signal was maximum. This ensured that the average detuning of the state readout laser beams, $(\Delta_X^{\rm nr}+\Delta_Y^{\rm nr})/2$, was zero. However, if the state preparation and readout laser beams were not exactly parallel, differences between the $\Delta^{\rm{nr}}_i$ could arise from the resulting differences in Doppler shifts. The effect of a detuning difference between the two state readout polarisations $\Delta^{XY}=(\Delta^{\rm{nr}}_{X}-\Delta^{\rm{nr}}_{Y})/2$ is discussed in section~\ref{ssec:asymmetry_effects}. Additionally, each day we scanned the frequency of the preparation laser across the molecule resonance while monitoring the contrast of our fluorescence signal to ensure $\Delta^{\rm{nr}}_{{\rm prep}}$ was kept below $2\pi\times0.2$~MHz (an example scan is shown in figure~\ref{fig:contrast}). The ways in which detuning components can contribute to systematic errors are discussed in detail in sections~\ref{sssec:AC_stark_shift_phases} and \ref{sssec:correlated_laser_parameters}. Other polarisation switches of the state preparation and readout laser beams ($\Rsw$ and $\Gsw$) were controlled independently via half-wave plates mounted in high resolution rotation stages\footnote{Newport URS50BCC.}. These switches and their use in the experiment are described in detail in section~\ref{sec:data_analysis}. Both beams were shaped using cylindrical lenses to be extended in $y$ so that all molecules in the beam were addressed. The Gaussian standard deviations of the beam intensities were 1.1~mm and 7.5~mm in the $x$ and $y$ directions, respectively \cite{SpaunThesis}.
The preparation laser beam was temporally modulated at $50$~Hz with a chopper wheel, synchronous with the molecule beam pulses, to minimise the incident power on the field plates so as to reduce an important systematic error, described in sections~\ref{sssec:AC_stark_shift_phases} and \ref{sssec:polarization_gradients_from_thermal_stress_induced_birefringence}. \subsubsection{Electric Field} \hspace*{\fill} \\ \label{ssec:efields} The applied $\E$-field was generated with a pair of 43~cm~$\times$~23~cm parallel conducting plates composed of ${\approx}1.25$~cm thick Borofloat glass, coated with a ${\sim}200$~nm layer of indium tin oxide on the inner faces\footnote{The plates were fabricated by Custom Scientific, Inc.}. The plates were transparent to the $X\rightarrow A$ optical pumping laser (943~nm), the $H\rightarrow C$ state preparation and readout lasers (1090~nm), and the $C\rightarrow X$ molecule fluorescence (690~nm). The outside faces of the electric field plates were prepared with a broadband anti-reflection coating with a specified \textless1\% reflectivity at normal incidence from 600--1000~nm. The plates were made much larger than the precession region in order to minimise inhomogeneity of the field through which the molecules passed, and to enable large solid angle collection of fluorescence through the plates. One of the field plates was mounted in an aluminium frame fixed to the base of the vacuum chamber. The other field plate was secured a distance of 2.5~cm away in a kinematic aluminium frame. On the inward-facing surfaces, a frame of gold-plated copper clamped each field plate to the aluminium mounts and also functioned as a `guard ring' electrode, suppressing the effect of fringing fields near the edges of the plate. The field plates were protected from impinging molecular beam particles by a $1~\mathrm{cm}\times 1~\mathrm{cm}$ square collimator fixed to the entrance of the assembly. 
The applied electric field was controlled by a 20-bit DAC whose output was amplified to produce up to $\pm200$~V\footnote{PA98A Power OpAmp.}. The field plate assembly was referenced to the vacuum chamber ground. Equal and opposite voltages, $\pm V$, were applied to each side of the assembly. The direction of the field (the $\Esw$ switch) was reversed every 1--2~s by reprogramming the output of the DAC channels to reverse their polarity. The configuration of the electrical connections between the amplified voltage and the field plates, denoted by $\Lsw$, was reversed via a pair of mercury-wetted relays every 2.6~minutes\footnote{Note that $\Lsw$ constitutes a reversal of the supply voltages as well as a reversal of the leads connecting the power supply to the field plates, such that $\Esw$ is unchanged.}. Data were also taken with two different field magnitudes, $\mathcal{E}=36$ and 141~V/cm, varied on a ${\sim}1$~day time scale. We measured the homogeneity of the electric field in a number of ways, which we now describe in turn. Firstly, an indirect measure was obtained by determining the spatial variation of the field plate separation $d$ using a `white light' Michelson interferometer \cite{Patten1971}. A schematic of the setup is shown in figure~\ref{fig:Interferometer_setup}. \begin{figure}[!ht] \centering \includegraphics[scale=0.4]{interferometer_setup_inset.pdf} \caption{\label{fig:Interferometer_setup}Schematic of the apparatus used to perform an interferometric measurement of the electric field plate separation. A spectrally broad light beam is reflected perpendicularly off the field plates and passes into a conventional Michelson interferometer setup with one fixed arm (length $L_2$) and one movable arm (length $L_1$). An example of a pair of beam paths of interest is shown as solid and dashed red lines.
If the two paths are slightly tilted relative to each other, a spatial interference pattern (inset) is observed on the CCD detector when the path length difference between the two beams is less than the coherence length, e.g.\ $L_1+d-L_2<L_{\rm c}$.} \end{figure} We directed a light beam at normal incidence through the electric field plates. This resulted in multiple reflected beams, but we restrict discussion to the reflections from the conducting surfaces, as these were of primary interest and were efficiently isolated from all the others experimentally. The reflected beams passed into a Michelson interferometer with one arm of fixed length ($L_2$) and one with length adjustable via a micrometer ($L_1$). Constructive (destructive) interference occurred whenever the lengths of two reflected beam paths differed by an integer (odd half-integer) multiple of the wavelength of the light. This condition was restricted further by the use of a broadband superluminescent diode\footnote{QPhotonics QSDM-680-2.} with a short coherence length $L_{\rm c}$ (nominally $L_{\rm c}\approx15~\upmu$m). Thus the interference was only substantial when the two beams differed in length by ${\lesssim}L_{\rm c}$. This occurred when $L_1=L_2$ (for reflections off the same surface) or when $L_1=L_2\pm d$ (for reflections off surfaces spaced by $d$). The case where both beams reflected off the same surface was used as a reference to determine the position $L_1=L_2$. A measure of this interference was achieved by producing a spatial interference pattern (inset of figure~\ref{fig:Interferometer_setup}) through a slight tilting of the arms of the interferometer. Analysis of the spatial Fourier components of the resulting interference pattern provided a quantitative measure of the interference fringe contrast; a plot of contrast vs.\ arm position $L_1$ yielded a peak with width $\delta L_1\approx L_{\rm c}$. By performing this analysis while varying the path length $L_1$, the plate separation was deduced.
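Under this scheme the plate separation follows directly from the fitted positions of the contrast peaks; a minimal sketch with synthetic peak positions (not the measured data):

```python
# Contrast peaks occur at L1 = L2 (same-surface reference) and at
# L1 = L2 +/- d (reflections from the two conducting surfaces), so the
# separation d is the distance from the reference peak to either side peak.
peaks_um = [10_000.0, 35_000.0, 60_000.0]  # fitted centres: L2 - d, L2, L2 + d
reference = peaks_um[1]                    # the L1 = L2 reference peak
d_estimates = [abs(p - reference) for p in (peaks_um[0], peaks_um[2])]
d_um = sum(d_estimates) / 2                # average the two side peaks
print(f"plate separation d = {d_um / 1000:.2f} mm")  # 25.00 mm here
```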
This entire procedure was then performed over a range of transverse ($x,y$) positions on the field plates. The resulting data are shown in figure~\ref{fig:Interferometer_data}. \begin{figure}[!ht] \centering \includegraphics[width=14cm]{Interferometer_data.pdf} \caption{Variation in the electric field plate separation as measured by the interferometric method. The left-hand plot shows the variation with $x$, the molecule beam direction, at two different values of $y$. The right-hand plot shows the variation with the $y$ (vertical) position at three different values of $x$. The coordinate origin is at the nominal centre of the plates. The shaded regions indicate the approximate extent of the molecular beam in the spin precession region. The change in separation is quoted relative to a common offset with an estimated error of $\pm0.5~\upmu$m. The mean separation over all $x$ is 25.00~mm.\label{fig:Interferometer_data}} \end{figure} This measurement clearly showed a bowing of the electric field plates; the plate separation varied approximately quadratically with the position in $x$. This is shown in the left-hand plot of figure~\ref{fig:Interferometer_data}. In the $\hat{x}$ direction we observed a maximum variation in the plate separation of around 20~$\upmu$m. We saw a roughly 80~$\upmu$m variation in the $\hat{y}$ (vertical) direction but note that the collimated molecular beam extended only over $\pm5$~mm in $y$ so the biggest plate spacing variation at a given $x$ was ${\approx}10~\upmu$m. From these measurements and a typical applied voltage of $V=\pm177$~V, we expected $\E$ to vary by around 100~mV/cm in the $\hat{x}$ direction and ${\lesssim}15$~mV/cm in the $y$ direction in the region sampled by the molecules. The indirect measurements of the spatial variation of the applied electric field provided by interferometric mapping of the field plate separation were later corroborated by direct measurements of $\vec{\E}(x)$. 
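These expectations follow from $\E\approx2V/d$, so a fractional change in plate separation produces an equal fractional change in field; a quick numerical check using the values above:

```python
# Field between plates held at +V and -V with separation d: E = 2V/d.
V = 177.0      # volts applied to each plate
d = 2.5        # cm, nominal plate separation
E = 2 * V / d  # ~141.6 V/cm, consistent with the quoted 141 V/cm

# A 20 um bowing over the 25 mm gap changes E by the fraction delta_d / d:
delta_d_over_d = 20e-4 / d    # 20 um expressed in cm, divided by d
delta_E = E * delta_d_over_d  # V/cm
print(f"E = {E:.1f} V/cm, delta_E ~ {delta_E * 1e3:.0f} mV/cm")  # ~113 mV/cm
```

This reproduces the "around 100~mV/cm" variation quoted for the $\hat{x}$ direction.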
Spatial variation of $\vec{\E}$ could lead to the accumulation of geometric phases during the spin precession measurement \cite{Vutha2009}. There are known mechanisms by which such phases can contribute to eEDM-like systematic errors, as described in section~\ref{ssec:E_correlated_phase}, though simple estimates show that these effects are several orders of magnitude below the sensitivity of this measurement. However, additional $\E$-field imperfections such as non-reversing fields, due to e.g.\ variations in the ITO coating, which could produce patch potentials, are known to contribute to eEDM-like systematic errors and are only revealed by more direct measurements of the electric field, which we now describe. We can write the electric field present in the precession region in the following manner: \begin{equation} \vec{\E}\cdot\hat{z}=\E\Esw+\E^{\rm{nr}}+\E^{\Ld}{\Lsw}+\E^{\E\Ld}\tilde{\E}{\Lsw}, \end{equation} where, as usual, $\Esw={\rm sgn}(\hat{z}\cdot\vec{\E})$ is the direction of the field in the spin-precession region and $\Lsw$ represents the binary state of the physical leads connecting the voltage supply to the field plates. The terms on the right-hand side are: $\E\Esw$, the intentionally applied electric field; $\E^{\rm{nr}}$, a non-reversing electric field; $\E^{\Ld}{\Lsw}$, a component from the power supply that does not reverse with $\Esw$ but does reverse with the lead configuration $\Lsw$; and $\E^{\E\Ld}\tilde{\E}{\Lsw}$, a component of the applied field that reverses with either $\Esw$ or $\Lsw$. We directly measured the components of $\E$ using the molecules themselves, in three different ways. The first method used Raman spectroscopy, driving a two-photon $\Lambda$-type transition between $\Nsw$ levels in $\ket{H,J=1}$ as shown in figure~\ref{fig:Raman_transition}. The Raman transfer was performed at positions between, but close to, the state preparation and readout laser beams, where there was sufficient optical access.
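Each coefficient in this decomposition can be recovered by combining field measurements over the four $(\Esw,\Lsw)$ states with the corresponding signs, the same parity-sum trick used for the laser detunings; an illustrative sketch with synthetic component values:

```python
from itertools import product

# Synthetic field components in V/cm (illustrative, not measured values):
E_rev, E_nr, E_L, E_EL = 141.0, -0.006, 0.002, 0.001

def field(Et, Lt):
    """z-component of the field for switch states Et = E-tilde, Lt = L-tilde."""
    return E_rev * Et + E_nr + E_L * Lt + E_EL * Et * Lt

states = list(product((+1, -1), repeat=2))
avg = lambda w: sum(w(Et, Lt) * field(Et, Lt) for Et, Lt in states) / 4

assert abs(avg(lambda Et, Lt: Et) - E_rev) < 1e-9       # reversing field
assert abs(avg(lambda Et, Lt: 1) - E_nr) < 1e-9         # non-reversing field
assert abs(avg(lambda Et, Lt: Lt) - E_L) < 1e-9         # lead-correlated
assert abs(avg(lambda Et, Lt: Et * Lt) - E_EL) < 1e-9   # doubly correlated
```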
The procedure was as follows: first, an $\hat{x}$-polarised state preparation laser beam depleted a superposition $\ket{B(\hat{x},\Nsw=+1,\Psw=+1)}$ (recall $\ket{B}$ is the bright state as defined in section~\ref{sec:Measurement_scheme}) by exciting it to the $C$ state. Next, at a point downstream, two co-propagating, $\hat{x}$-polarised Raman beams were used to repopulate this depleted superposition by driving population from the other $\Nsw$ state, via the transition $\ket{B(\hat{x},\Nsw=-1,\Psw=+1)}\rightarrow \ket{C,\Psw=+1}\rightarrow\ket{B(\hat{x},\Nsw=+1,\Psw=+1)}$. The frequencies of the two Raman beams were tuned with a pair of AOMs. The state readout laser then addressed the same transition as the preparation laser and excited the repopulated superposition to the $C$-state from which it spontaneously decayed back to $X$ and fluoresced at 690~nm. \begin{figure}[!ht] \centering \includegraphics[width=8cm]{Raman_transition.pdf} \includegraphics[width=8cm]{raman_fits.pdf} \caption{Left: Schematic of the Raman-type transition used to perform a measurement of the $\E$-field in the spin-precession region. The pairs of red arrows represent the one-photon transitions driven by linearly polarised light, addressing superpositions of $M=\pm1$. The single-photon detuning is given by $\Delta+\delta/2$ and the two-photon detuning is given by $\delta/2$. $D_1|\mathcal{E}|$ is the magnitude of the Stark shift due to the applied electric field. Right: Example scans for opposite $\Esw$ states obtained by varying the two-photon detuning $\delta/2$ and observing fluorescence, with Gaussian fits to the data.\label{fig:Raman_transition}} \end{figure} Efficient transfer of population between the two $\Nsw$ states occurred for zero two-photon detuning ($\delta/2$ in figure~\ref{fig:Raman_transition}). This condition was indicated by a peak in fluorescence, giving a measure of the Stark shifted energy, and hence the absolute size of the applied field, $|\E|$. 
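The conversion from an $\Esw$-correlated two-photon detuning to a non-reversing field is a one-line calculation, $\E^{\rm nr}=\delta^{\E}/(2D_1)$; an illustrative check assuming $D_1\approx2\pi\times1$~MHz/(V/cm), the value quoted below for the microwave method:

```python
# An E-correlated two-photon detuning maps directly onto a non-reversing
# field component: E_nr = delta_E / (2 * D1). The 2*pi factors cancel.
D1 = 1.0e6           # Hz per (V/cm), approximate J=1 dipole sensitivity
delta_E_corr = 13e3  # Hz, of order the E-correlated detuning quoted below

E_nr = delta_E_corr / (2 * D1)  # V/cm
print(f"|E_nr| ~ {E_nr * 1e3:.1f} mV/cm")  # ~6.5 mV/cm
```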
This procedure was repeated for different positions of the Raman laser beams along the $\hat{x}$ direction. The non-reversing component of the electric field was found by repeating the measurement after reversing the applied voltages. An example of such a pair of scans is shown on the right of figure~\ref{fig:Raman_transition}. Using this method we measured the electric field at $x$ positions where there was sufficient optical access, i.e.\ near the state preparation and readout laser beams. The $\Esw$-correlated two-photon detuning $\delta^{\E}=2\pi\times13$~kHz ($2\pi\times11$~kHz) allowed us to extract a value of the non-reversing electric field component, $\E^{\rm{nr}}=\delta^{\E}/2D_1=-6.5\pm0.3$~mV/cm ($-5.5\pm0.3$~mV/cm), in the state preparation (readout) region. We did not observe any significant variation within the individual regions. We also observed that this non-reversing component did not vary with the size of the reversing electric field. The second method used to measure the electric field had the greatest utility because it allowed for spatially resolved measurements along $x$ in the spin precession region with comparable precision to the Raman method without perturbing the experimental apparatus. This was achieved via microwave spectroscopy. A schematic of the experimental setup is shown in figure~\ref{fig:microwave_setup}. \begin{figure}[!ht] \centering \includegraphics[scale=0.65]{microwave_transitions.pdf} \caption{The transition driven by microwaves during a measurement of the electric field. We used $\hat{y}$-polarised microwaves of frequency $2\pi\times39$~GHz to drive a rotational transition between $\ket{H,J=1}$ and $\ket{H,J=2}$. The $M=0$ levels are labelled with their parity. We applied a moderate $\E$-field such that $\Delta_{\Omega}\ll D|\E|\ll B_H$ where $B_H=0.33~{\rm cm}^{-1}$ is the rotational constant. 
The electric dipole moment of the $J=1$ state $D_1\approx2\pi\times1~{\rm MHz/(V/cm)}\approx 3D_2$.\label{fig:microwave_transitions}} \end{figure} \begin{figure}[!ht] \centering \includegraphics[scale=0.5]{microwave_setup_new.pdf} \caption{Experimental setup for spatial measurement of $\E$ via microwave spectroscopy. A molecular pulse (grey cloud) passed between the electric field plates (light blue). The optical pumping laser beam transferred population from $\ket{X,J=1}$ to an incoherent mixture of states in $\ket{H,J=1}$ as described in section~\ref{sec:state_prep_read}. When the pulse was centred in the spin-precession region, a microwave $\pi$-pulse was applied, driving population in $H$ from $J=1$ to $J=2$ when resonant (dark blue region). The depletion efficiency out of $J=1$ was subsequently read out by laser induced fluorescence as per the normal measurement scheme described in section~\ref{sec:Measurement_scheme}. The time of arrival of the molecules in the state readout region encoded the position where they absorbed the microwaves. \label{fig:microwave_setup}} \end{figure} The measurement procedure began with optical pumping of molecules into the $H$-state. The molecules travelled through the spin-precession region until it was entirely occupied by the molecule pulse. At this time, a $\pi$-pulse of microwaves at $2\pi\times39$~GHz with nominal $\hat{y}$ polarisation was applied counter-propagating to the molecule beam. When on resonance, this transferred population from $\ket{B(\hat{y},\Nsw,\Psw)}$ to $\ket{H,J=2,M=0,\Psw}$ (excitation to (from) either $\Psw$ ($\Nsw$) state was permitted) as shown in figure~\ref{fig:microwave_transitions}. State readout was performed as usual (see section~\ref{sec:Measurement_scheme}) by optically pumping with alternating polarisations $\hat{x}$ and $\hat{y}$. The measured asymmetry (as defined in equation~\ref{eq:asymmetry}) served as a measure of the microwave transfer efficiency. 
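For a $\pi$-pulse of duration $T$, the Rabi frequency must satisfy $\Omega_{\rm r}T=\pi$ and the resonance linewidth is Fourier-limited to roughly $1/T$; a quick check for a $40~\upmu$s pulse (the duration quoted below):

```python
import math

T = 40e-6                 # s, microwave pi-pulse duration
omega_rabi = math.pi / T  # rad/s, Rabi frequency required for a pi-pulse
linewidth = 1 / T         # Hz, Fourier-limited linewidth (order of magnitude)

print(f"Rabi frequency ~ 2*pi x {omega_rabi / (2 * math.pi) / 1e3:.1f} kHz, "
      f"linewidth ~ {linewidth / 1e3:.0f} kHz")  # ~12.5 kHz, ~25 kHz
```

The 25~kHz figure matches the observed resonance linewidth of ${\approx}2\pi\times25$~kHz.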
The $x$ position of the molecules at the time of the microwave pulse was mapped onto their arrival time in the detection region, and could be extracted given the longitudinal molecular beam velocity, $v_{\parallel}$. Thus, the spatial dependence of the resonant frequency, $\omega_{\rm MW}(x)$, was provided by the time-dependence of the asymmetry, $\mathcal{A}(t)$. Due to the DC Stark shift, $\omega_{\rm MW}$ varied linearly with the electric field magnitude, so $|\E(x)|$ could be extracted directly. We observed a resonance linewidth of ${\approx}2\pi\times25~{\rm kHz}\approx2\pi/T$ which was limited by the microwave $\pi$-pulse duration of $T=40~\upmu\rm{s}$. With our signal-to-noise ratio, we were able to fit the resonance centre to a precision of ${\sim}2\pi\times1$~kHz, typically using ${\sim}50$ detuning values and averaging over ${\sim}50$ molecule pulses per detuning value. Example data obtained via this method are shown in figure~\ref{fig:asym_map}. \begin{figure}[!ht] \centering \includegraphics[width=15cm]{asym_map_comp.pdf} \caption{Colourmap: Plot of the asymmetry $\A$ induced by a microwave pulse as the frequency of the microwaves was scanned. Red data points: Plot of the corresponding reversing component of the electric field obtained by extracting the centre of the resonance signal. The position is relative to the centre of the spin-precession region.\label{fig:asym_map}} \end{figure} In these data, it is evident that the resonant frequency of the microwaves varied across the molecule pulse by around $2\pi\times60$~kHz. The position $x$ of the molecules at the time of the microwave pulse was assumed to be linearly related to the molecule arrival time in the state readout region. The observed spatial variation of $\mathcal{E}$ was roughly consistent with expectations based on the measured variation of the plate spacing described above.
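The time-to-position mapping is simple constant-velocity kinematics; a minimal sketch (the detector position and beam velocity below are representative placeholder values, not calibrated numbers from this apparatus):

```python
def x_at_pulse(t_arrival, t_pulse, x_detect, v_par):
    """Position (m) of a molecule at the microwave pulse time, assuming
    constant longitudinal velocity v_par (m/s) and detection at position
    x_detect (m) at time t_arrival (s)."""
    return x_detect - v_par * (t_arrival - t_pulse)

# Representative placeholder numbers: detection 0.25 m downstream of the
# region centre, and a beam velocity of ~184 m/s.
x = x_at_pulse(t_arrival=2.5e-3, t_pulse=1.5e-3, x_detect=0.25, v_par=184.0)
print(f"molecule was at x = {x * 100:.1f} cm at the pulse time")  # 6.6 cm
```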
By switching $\Nsw$ and $\Esw$ between measurements of the $\E$-field we were able to extract $\E^{\rm{nr}}$ from the $\Nsw\Esw$-correlated component of $\omega_{\rm MW}$. These measurements, shown in figure~\ref{fig:enr}, were used to evaluate the corresponding systematic error in equation~\ref{eq:Enr_systematic_error_value}. \begin{figure}[!ht] \centering \includegraphics[scale=0.35]{enr_combined.pdf} \caption{A plot of the spatial variation of $\E^{\rm{nr}}$. The black points are data obtained via microwave spectroscopy. The blue points are data obtained via Raman spectroscopy. The red data point was obtained by examining the variation of contrast with $\Delta_{\rm prep}$. The approximate position of the state preparation (state readout) laser beam is shown as a red dotted line on the left (right) of the figure. For the microwave spectroscopy data the uncertainty/averaging range of the position is around 21~mm at the left-hand side of the plot and decreases to around 13~mm at the right-hand side --- see main text for details.\label{fig:enr}} \end{figure} We clearly saw a non-uniform $\E^{\rm{nr}}$ across the spin precession region. The spatial variation shown in figure~\ref{fig:enr} was reproducible for the period of several weeks over which these measurements of the electric field were taken. We are unsure as to the origin of the $\E^{\rm{nr}}$ but believe it may have been caused by patch potentials \cite{Robertson2006} present on the electric field plates. We observed unexplained disagreement between the two measurement methods (Raman spectroscopy vs.\ microwave spectroscopy), but note that both report non-reversing fields of a few mV/cm with the same sign. The mapping between arrival time in the detection region and $x$ position during the microwave pulse was approximate, suffering from spatial averaging due to a variety of effects. 
For example, velocity dispersion led to averaging of $dx\times\sigma_{v_{\parallel}}/v_{\parallel}$, where $\sigma_{v_{\parallel}}$ is the longitudinal velocity spread of the molecular beam and $dx$ is the distance between microwave interrogation and state readout. This averaging distance was largest, ${\approx}1.6~{\rm cm}$, at the state preparation region. Spatial averaging also occurred across the ${\approx}0.7$~cm distance traversed during the $T=40~\upmu$s microwave pulse. Finally, there was averaging of the spatial position of the molecules due to the finite size of the state readout laser beam and the polarisation switching; molecules were optically pumped (with varying probability) throughout the ${\approx}0.5$~cm wide laser beam. In addition to spatial averaging, uncertainty in the mean longitudinal velocity also contributed an uncertainty in position. Changes of ${\approx}10$~m/s between molecule pulses were quite typical over the course of the $\mathcal{E}$-field measurement, giving an estimated position uncertainty of ${\lesssim}1$~cm. By adding the above contributions in quadrature we concluded that the range of positions from which the microwave-induced signals could have originated increased from ${\approx}1.3$~cm at the state readout beam to ${\approx}2.1$~cm at the optical pumping beam. These ranges are shown as horizontal error bars at the extrema of position in figure~\ref{fig:enr}. We used a third method to measure $\mathcal{E}$ and $\mathcal{E}^{\mathrm{nr}}$ \emph{in situ} throughout the eEDM dataset by performing `intentional parameter variation' tests with large $\Delta_{{\rm prep}}$ (denoted by `c' in figure~\ref{fig:timing}). Detuning the state preparation laser resulted in a reduction in the measured contrast $|\mathcal{C}|$, as shown in figure~\ref{fig:contrast}~(B).
Setting $\Delta_{{\rm prep}}\approx\pm2^{\:}\mathrm{MHz}$ gave $\left|\mathcal{C}\right|\approx0.5$, and the contrast was then approximately linear in $\Delta_{\rm prep}$, with a sensitivity of about $1/\gamma_{C}\approx1/(2\pi\times2~{\rm MHz})$. Any variation in the electric field would change the Stark shift, and thus also $\Delta_{\rm prep}$, resulting in a change in contrast. Thus, using the previously described spin precession scheme, we indirectly measured parity components of the electric field from the appropriate parity components of the contrast: \begin{align} D_1\mathcal{E}^{\mathrm{nr}}(x_{\mathrm{prep}})\approx&\frac{\partial\Delta_{\mathrm{prep}}}{\partial\mathcal{C}}\mathcal{C}^{\mathcal{NE}}\\ D_1\mathcal{E}(x_{\mathrm{prep}})\approx&\omega_{\rm L}^{\mathcal{N}}+\frac{\partial\Delta_{\mathrm{prep}}}{\partial\mathcal{C}}\mathcal{C}^{\mathcal{N}}. \end{align} We looked for variation of $\mathcal{E}$ or $\mathcal{E}^{\mathrm{nr}}$ every 3--4 hours. Measurements of $\mathcal{E}^{\mathrm{nr}}$ were consistent with the microwave measurements, with a constant value $\mathcal{E}^{\mathrm{nr}}(x_{\mathrm{prep}})=-4.8\pm0.9^{\:}\mathrm{mV/cm}$. However, the mismatch $\Delta^{\mathcal{N}}=D_1\mathcal{E}-\omega_{\rm L}^{\mathcal{N}}$ between the Stark shift $D_1\mathcal{E}$ and the $\Nsw$-correlated laser frequency shift, $\omega_{L}^{\mathcal{N}}$, was found to drift significantly, at a rate of around $2\pi\times20^{\:}\mathrm{kHz}/\mathrm{day}$. This drift of $\Delta^{\N}$ was servoed away by tuning $\omega^{\N}_{\rm L}$ after each measurement, ensuring $\left|\Delta^{\mathcal{N}}\right|<2\pi\times30$~kHz at all times \cite{SpaunThesis}; see sections \ref{sssec:correlated_laser_parameters} and \ref{ssec:laser_imperfections} for more details. \subsubsection{Magnetic Fields} \hspace*{\fill} \\ \label{sec:bfields} Our experimental scheme did not require the application of a magnetic field. 
This was not the case with some previous eEDM experiments, where the magnetic field was used to define a quantization axis \cite{Commins1994,Regan2002}, or to precess the spin into the direction of maximum measurement sensitivity \cite{Hudson2011,Kara2012}. Instead we used the electric field to define a quantization axis, and we used the relative polarisations of the state preparation and readout lasers to define the basis in which we read out the electron's spin precession with maximal sensitivity. However, we regularly applied a magnetic field $\B$ in order to perform searches for systematic errors. The phase accumulated due to an eEDM of size $\delta d_{e}\approx5\times10^{-29}$~$\ecm$ would equal the Zeeman phase produced by a magnetic field of $\mathcal{B}\approx0.2~\upmu{\rm G}$, which is small compared to some of the magnetic field imperfections in the experiment. However, phases associated with magnetic-field-induced precession were distinguished from eEDM-induced phases by the use of the switches at our disposal (e.g.~electric field reversal). Nevertheless, it was important to investigate, quantify and minimize the effects of such magnetic fields, as they could have coupled with other experimental imperfections to give eEDM-like phases. Under normal operating conditions we ran the experiment at three different magnetic field magnitudes, corresponding to a relative precession phase of $\phi^{\mathcal{B}}\approx q\frac{\pi}{4}$ for $q=0,1,2$. The required $z$-component of the field was then $\mathcal{B}_z=q\mathcal{B}_0\tilde{\mathcal{B}}$, where $\B_0=\frac{\pi}{4}\frac{1}{g_1\mu_{\rm B}\tau}\approx 20~{\rm mG}$. We also had the ability to apply transverse magnetic field components along $\hat{x}$ and $\hat{y}$, and all five linearly independent first-order gradients. The various coils that we used are illustrated in figure~\ref{fig:coil_schematic}. 
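As a numerical sanity check of the quoted $\B_0\approx20~{\rm mG}$, the defining relation (with $\hbar$ restored, $g_1\mu_{\rm B}\B_0\tau/\hbar=\pi/4$) can be evaluated directly. The sketch below uses illustrative assumed values $g_1\approx0.0044$ and $\tau\approx1.1$~ms, neither of which is quoted in this section:

```python
from math import pi

hbar = 1.054571817e-34   # reduced Planck constant (J s)
mu_B = 9.2740100783e-24  # Bohr magneton (J/T)

# Illustrative assumed values -- neither is quoted in this section:
g_1 = 0.0044             # assumed magnitude of the H-state g-factor
tau = 1.1e-3             # assumed spin precession time (s)

# B_0 = (pi/4) * hbar / (g_1 * mu_B * tau):
# the field producing a pi/4 Zeeman precession phase.
B_0 = (pi / 4) * hbar / (g_1 * mu_B * tau)  # tesla
B_0_mG = B_0 * 1e7                          # 1 T = 1e7 mG

print(f"B_0 ~ {B_0_mG:.0f} mG")  # of order the quoted ~20 mG
```

With these assumptions the result lands near 20~mG, consistent with the value quoted in the text.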
\begin{figure}[!ht] \centering \includegraphics[scale=0.25]{coil_schematic.pdf} \caption{A schematic of the magnetic field coils used. The main coils consisted of rectangular cosine coils (orange) wound on the surface of a cylindrical plastic frame together with additional end coils (red) to correct for the low aspect ratio (length/diameter) in our system; a second set of these coils, mirrored in the $xy$ plane, is not shown. Also wrapped around this frame are a pair of circular auxiliary coils shown in yellow. The other auxiliary coils are shown in blue and green and consist of rectangular coils above and below the vacuum chamber. See the main text for descriptions of the functions of all of the coils.\label{fig:coil_schematic}} \end{figure} The primary magnetic field, $\mathcal{B}_z$, was produced by two sets of rectangular coils, shown in orange in figure~\ref{fig:coil_schematic}. These were wound on the surface of two hemicylindrical plastic shells, on the $\pm z$ sides of the spin-precession region. The coils were designed to maximize field uniformity and minimize distortion due to the boundary conditions imposed by the magnetic shielding. It was also possible to apply a $\partial\mathcal{B}_z/\partial z$ gradient with these coils. Two end coils (red in figure~\ref{fig:coil_schematic}), located on the $\pm x$ ends of the spin-precession region, enhanced the uniformity of the $\B$-field along $x$ and enabled application of a $\partial\B_z/\partial x$ gradient. The main coils were powered by two separate commercial power supplies\footnote{Krohn-Hite 521/522}, and the end coils were powered by custom power supplies. The current flowing through these coils was continuously monitored throughout the course of the experiment by using a digital multimeter to measure the voltage dropped across precision resistors. We used three sets of auxiliary magnetic field coils in systematic error searches. 
A pair of circular Helmholtz coils (yellow in figure~\ref{fig:coil_schematic}) were wrapped around the same frame used for the main coils and were formed from ribbon cable. They provided a magnetic field in the $\pm\hat{x}$ directions and could also provide a $\partial\B_x/\partial x$ gradient. Above and below the spin-precession region chamber ($\pm y$) there were four sets of rectangular coils (blue and green in figure~\ref{fig:coil_schematic}). These allowed us to produce a field in the $\pm\hat{y}$ directions as well as all three associated first-order gradients. Note that the three first-order magnetic field gradients that we could not apply could be inferred from Maxwell's equations. A summary of the fields that we could apply is given in table~\ref{tab:coils_table}. \begin{centering} \begin{table} \caption{A summary of the magnetic fields and magnetic field gradients that we could produce. The coil colours refer to figure~\ref{fig:coil_schematic}.\label{tab:coils_table}} \begin{tabular}{ccc} \br Coil colour & Fields produced & Field gradients produced\\ \mr Orange & $\B_z$ & $\partial \B_z/\partial z$\\ Red & $\B_z$ & $\partial \B_z/\partial x$, $\partial \B_z/\partial z$\\ Yellow & $\B_x$ & $\partial \B_x/\partial x$\\ Blue & $\B_y$ & $\partial \B_y/\partial y$, $\partial \B_y/\partial z$\\ Green & $\B_y$ & $\partial \B_y/\partial x$, $\partial \B_y/\partial y$\\ \br \end{tabular} \end{table} \end{centering} Several measures were taken to minimize stray magnetic fields affecting the molecules. The simplest was to ensure no magnetized objects were placed within the spin-precession region. To this end, all components were fabricated from non-magnetic materials (e.g.\ no stainless steel). The magnetization of all objects was also checked before installation by passing them across an AC-coupled magnetometer sensitive to 0.1~mG field variations. 
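The statement above that the three gradients we could not apply follow from Maxwell's equations can be made explicit: in the current-free interior, $\nabla\cdot\vec{\B}=0$ and $\nabla\times\vec{\B}=0$, so the gradient matrix $G_{ij}=\partial\B_i/\partial x_j$ is symmetric and traceless, leaving exactly five independent components. A minimal sketch with arbitrary illustrative numbers (the particular choice of which five gradients to treat as independent is for illustration only):

```python
import numpy as np

# Five independent gradients (arbitrary illustrative values, e.g. in mG/cm)
dBz_dx, dBz_dz = 0.3, -0.1
dBx_dx = 0.2
dBy_dx, dBy_dz = 0.05, 0.15

# Curl-free condition (no currents): the gradient matrix is symmetric,
# which fixes the three gradients that were not applied directly:
dBx_dz = dBz_dx  # dBx/dz = dBz/dx
dBz_dy = dBy_dz  # dBz/dy = dBy/dz
dBx_dy = dBy_dx  # dBx/dy = dBy/dx

# Divergence-free condition fixes the remaining diagonal entry:
dBy_dy = -(dBx_dx + dBz_dz)

G = np.array([[dBx_dx, dBx_dy, dBx_dz],
              [dBy_dx, dBy_dy, dBy_dz],
              [dBz_dx, dBz_dy, dBz_dz]])

assert np.allclose(G, G.T)         # curl B = 0
assert np.isclose(np.trace(G), 0)  # div  B = 0
```

Counting constraints this way (9 components, 3 from the curl, 1 from the divergence) reproduces the "five linearly independent first-order gradients" quoted in the text.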
The ambient $\B$-field in the laboratory was dominated by that from the Earth's core (${\sim}500~$mG approximately along $\hat{x}+\hat{y}$). To suppress this and other DC/low-frequency fields, the spin-precession region was surrounded by a set of five concentric cylindrical magnetic shields constructed from ${\approx}1.6$~mm thick mu-metal\footnote{Amuneal Inc.}. Each layer of shielding should have provided around a factor of 10 reduction in the DC magnetic field \cite{Vutha2010}; however, residual magnetisation of the mu-metal was found to limit the field components to $\gtrsim20~\upmu$G for $\mathcal{B}_x$ and $\mathcal{B}_z$, and $\gtrsim500~\upmu$G for $\mathcal{B}_y$.\footnote{We later found that the residual $\mathcal{B}_y$ could be reduced to a level comparable to $\mathcal{B}_x$ and $\mathcal{B}_z$ by performing degaussing with a higher current.} Each shielding layer was divided into two half-cylinders and two end caps. The outermost (innermost) shield was 132~cm (86~cm) long and had a diameter of 107~cm (76~cm). These shields had holes to allow lasers to pass through in the $z$ direction, and to accommodate the molecule beam. There were also holes for the light pipes to extract molecule fluorescence, and some electrical connections, in the $x$ direction. Measurements and simulations showed that these holes had a negligible impact on the shielding efficiency. The shielding factor remained approximately constant up to an AC frequency $\sim2\pi\times3$~GHz, at which the wavelength becomes comparable to the size of any apertures in the shields, ${\sim}10$~cm, and the magnetic field noise starts to penetrate the shields. However, our measurement was only sensitive to magnetic field noise at frequencies up to roughly the inverse of the spin precession time $1/\tau\approx2\pi\times1$~kHz \cite{VuthaThesis}. 
The aluminium vacuum chamber also shielded AC magnetic noise above a frequency ${\sim}1/(\pi\sigma t^2\mu)\approx2\pi\times100$~Hz, where $\sigma\approx3.5\times10^7$~S/m is the electrical conductivity, $t\approx1$~cm is the thickness and $\mu$ is the permeability ${\approx}\mu_0$, the vacuum permeability \cite{Mager1969,Sumner1987}. The relatively large ($\B\sim10$~mG) fields applied by the $\B_z$ coils caused the inner magnetic shields to become slightly magnetized, inducing a non-reversing magnetic field, $\B^{\rm{nr}}\approx30~\upmu{\rm G}$. In order to suppress this remanent field we performed a degaussing procedure on the magnetic shields by passing a $200$~Hz sinusoidal current through sets of loosely wound ribbon cable coils which wrapped axially (in the $xy$ plane at $z=0$) between the shield layers. The maximum current amplitude was 1~A, sufficient to drive the mu-metal to saturation, and the amplitude was decreased with an exponential envelope over a period of 1~s. Fully degaussing all layers of the magnetic shielding took around 4~s. There was also a 1~s period of `dead time' during which the main magnetic field was turned back on and allowed to settle. This degaussing procedure was repeated every time the applied magnetic field was changed, which occurred approximately every 40~s. Variations in the magnetic fields present were continuously measured throughout the experimental procedure. This was achieved using a set of four three-axis fluxgate magnetometers\footnote{Bartington Mag-03.}, which were mounted in a tetrahedral configuration outside the spin-precession region vacuum chamber (but inside the magnetic shielding). We also used an additional fluxgate magnetometer which was positioned at a distance of around 1~m from the apparatus and outside of the magnetic shielding. By continuously recording the measurements provided by these magnetometers we were able to search for correlations of our data with the magnetic field present. 
In particular, we checked for the presence of a magnetic field correlated with the electric field, $\mathcal{B}^{\E}$, which would have been characteristic of a leakage current flowing between the electric field plates --- an effect known to contribute a significant systematic error in previous eEDM experiments \cite{Regan2001,Kara2012}. \begin{figure}[!ht] \centering \includegraphics[width=0.8\textwidth]{probulator_data.pdf} \caption{Magnetic field data taken with a flux-gate magnetometer passed along the molecular beam line. The left-hand plot shows the reversing components of field whilst a nominal $\mathcal{B}_z$ was applied. The right-hand plot shows the corresponding non-reversing components. The data are fitted with polynomial curves.\label{fig:probulator_data}} \end{figure} Additional measurement of the magnetic fields was carried out by opening the vacuum system and passing a rotatable flux-gate magnetometer into the chamber. This allowed for measurement of the fields directly along the beam line. The freedom to rotate the magnetometer was crucial to distinguish between electronic offsets and $\vec{\B}^{\rm nr}$ for fields ${\lesssim}1$~mG. From these measurements we were able to directly characterise most of the magnetic fields and first-order field gradients, including non-reversing components. Example data obtained from these measurements are shown in figure~\ref{fig:probulator_data}. We saw that the applied fields were all flat to within 1~mG, and the non-reversing components, with the exception of $\B^{\rm nr}_y$, were less than 50~$\upmu$G. Systematic uncertainty due to these fields is discussed in section~\ref{ssec:magnetic_field_imperfections}. 
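The switch-parity bookkeeping used to isolate quantities such as $\mathcal{B}^{\E}$ (and, more generally, channels such as $\omega^{\mathcal{NE}}$) amounts to averaging the measured values weighted by products of switch signs. A minimal sketch of this idea with synthetic numbers (not real data; the function name is ours):

```python
from itertools import product

def correlated_component(measurements, switch_indices):
    """Average of (product of selected switch signs) x (measured value):
    the component of the data correlated with the chosen switches."""
    total = 0.0
    for state, value in measurements.items():
        parity = 1
        for i in switch_indices:
            parity *= state[i]
        total += parity * value
    return total / len(measurements)

# Synthetic data over the four (N, E) switch states:
# value = 10 (non-reversing) + 3*N + 5*E + 7*N*E
data = {(N, E): 10 + 3 * N + 5 * E + 7 * N * E
        for N, E in product((+1, -1), repeat=2)}

nonreversing = correlated_component(data, [])      # -> 10.0
N_component  = correlated_component(data, [0])     # -> 3.0
E_component  = correlated_component(data, [1])     # -> 5.0
NE_component = correlated_component(data, [0, 1])  # -> 7.0
```

Because the switch signs are orthogonal over the complete set of states, each weighted average recovers exactly one coefficient of the synthetic model and all other terms cancel.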
\subsubsection{Fluorescence Collection and Detection} \hspace*{\fill} \\ \label{sec:fluorescence_collection} As previously described, our experimental data consisted of laser-induced molecule fluorescence, emitted in all directions (with a well-defined angular distribution \cite{Kirilov2013}) when the molecules were interrogated by the state readout laser beam. The apparatus for collecting this light is illustrated in figure~\ref{fig:collection_optics}. \begin{figure}[!ht] \centering \includegraphics[width=16cm]{collection_optics.pdf} \caption{Fluorescence collection apparatus. Left: The mounted electric field plates are shown together with one of the two sets of four lens doublets. The mounting for the top-left doublet has been removed to show the lenses. The fiber bundles are shown schematically, fastened into the lens tubes behind the doublets. The lens assembly was mounted on rails and the entire assembly sat on a breadboard which was fastened to the vacuum chamber. The view on the right also shows the approximate position of the state readout laser beam as it passes through the apparatus.} \label{fig:collection_optics} \end{figure} The fluorescence light passed through the transparent electric field plates, whose inner (outer) faces were ITO (anti-reflection [AR]) coated. Behind each field plate was a set of four AR-coated lens doublets, which collimated and then focussed the light. The optical axes of the doublets intersected a ray path from the centre of the fluorescing molecule region, accounting for refraction through the electric field plates. The first (second) lens of each doublet was a 75~mm\footnote{CVI Melles Griot LAG-75.0-50.0-C-SLMF-400-700.} (50~mm\footnote{CVI Melles Griot LAG-50.0-35.0-C-SLMF-400-700.}) diameter spherical lens of focal length 50~mm (35~mm). 
On each side ($\pm z$), each of the four lens doublets focussed light onto one of four sections of a `quadfurcated' fiber bundle\footnote{Fiberoptic Systems.} whose input ends were 9~mm in diameter and fastened in lens tubes. The output of the fiber bundle was connected to a 19~mm diameter fused quartz light pipe with optical couplant gel\footnote{Corning Q2-3067.} in between. The light pipe passed out of the spin-precession region vacuum chamber and magnetic shields and directed the light onto a PMT\footnote{Hamamatsu R8900U-20.}. Bandpass filters\footnote{Semrock FF01-689/23-25-D.} were used to suppress backgrounds from e.g.\ scattered light. Detailed tests of the light collection were carried out \cite{SpaunThesis}, which estimated that ${\approx}14$~\% of the fluorescence photons were collected. The major contributions to this efficiency were the finite solid angle subtended by the collection lenses (${\approx}50~$\%), finite coupling efficiency into fiber bundles (${\approx}60~$\%) and finite coupling efficiency between the fiber bundles and the light pipes (${\approx}50~$\%). In addition, the quantum efficiency of the PMTs was specified to be ${\approx}10$~\%, which further reduced the signal obtained. \subsubsection{Data Acquisition} \hspace*{\fill} \\ \label{sec:data_acquisition} The data acquisition system performed the following three functions: \begin{enumerate} \item Digital modulation of the experimental parameters necessary for acquiring the complete set of phase and contrast measurements required to extract the eEDM, as described in section~\ref{sec:Measurement_scheme_more_detail}. \item Rapid (5~MSa/s) acquisition and storage of high-bandwidth fluorescence waveforms for the spin precession measurement. \item Monitoring and logging of experimental parameters useful for checking the experimental state and for searching for systematic errors (e.g. magnetic fields, beam source temperatures). 
\end{enumerate} All functions were coordinated with a LabVIEW-based software system. Data acquisition timing was controlled by a digital delay generator.\footnote{SRS DG645.} Every 20~ms, a TTL signal was produced which triggered the ablation laser Q-switch, in turn creating a pulse of molecules. Molecule fluorescence signals, measured as a PMT photocurrent, were captured on a 20-bit digital oscilloscope\footnote{National Instruments PXI-5922.}. The oscilloscope was triggered 6--7~ms after the ablation pulse, depending on the current molecule beam forward velocity, and recorded a 9~ms window of signal containing the entire molecule signal (1--2~ms) and several ms of background. The 100~kHz square wave that drove the fast polarisation switching of the state readout laser was synchronised with the 50~Hz Q-switch trigger so that the relative phase was fixed. The 5~MSa/s data rate of the oscilloscope enabled resolution of the time-dependent structure within each 5~$\upmu$s polarisation bin; this structure could vary on timescales as short as the C-state lifetime $1/\gamma_C\approx500$~ns \cite{Hess2014}. Signal waveforms, $S(t)$, were captured from two PMTs --- note that we were not counting individual photoelectrons, but instead amplified and read out a voltage proportional to the count rate. These waveforms were then transferred to the control PC where they were digitally averaged over 25 pulses to form one `trace'. The traces were then written to a hard drive. A file containing auxiliary measurements was recorded synchronously with each fluorescence trace. This file included the states of the experimental switches and other auxiliary measurements such as $\E$-field voltages, $\B$-field currents, laser power and polarisation, magnetic field measurements, molecular beam buffer gas flow rate, buffer gas cell temperature, and the temperature, pressure and humidity in our lab. 
This data proved useful in searching for systematic errors as described in section~\ref{sec:systematics}. \subsection{Experiment Switches} During the course of the experiment, we performed many parameter switches. Most of these switch parameters are denoted by a tilde, $\tilde{\mathcal{X}}$, which indicates that the parameter takes on two values, $\tilde{\mathcal{X}}=\pm1$. \begin{description} \item [{$\Nsw$}] Used as a quantum number, $\Nsw\approx\Esw\rm{sgn}\left(M\Omega\right)$ for states with $\left|M\right|>0$, $\left|\Omega\right|>0$, which refers to states with opposite molecular alignment with respect to the applied electric field. It is also used to refer to the experiment switch between spectroscopically addressing states in $\left|H,J=1\right\rangle $ with opposite values of $\Nsw$. \item [{$\Esw$}] Denotes the alignment of the applied electric field with respect to the laboratory $\hat{z}$ axis, $\Esw=\rm{sgn}\left(\vec{\mathcal{E}}\cdot\hat{z}\right)$ where $\vec{\mathcal{E}}$ is the applied electric field. \item [{$\Bsw$}] Denotes the alignment of the applied magnetic field with respect to the laboratory $\hat{z}$ axis, $\Bsw=\rm{sgn}\left(\vec{\mathcal{B}}\cdot\hat{z}\right)$ where $\vec{\mathcal{B}}$ is the applied magnetic field. \item [{$\tilde{\theta}$}] Denotes the state of the polarisation dither that is used to extract the contrast in the spin precession measurement. It refers to the direction of the offset angle in the $xy$ plane of the state readout polarisation basis $\hat{X},\hat{Y}$, relative to the average polarisation of these lasers. \item [{$\Psw$}] Used as a quantum number to denote the parity (eigenvalue of the parity operator $P$) of a given molecular state of well-defined parity. It is also used to refer to the experiment switch between spectroscopically addressing states in $\left|C,J=1\right\rangle $ with opposite values of $\Psw$ with the state readout lasers. 
\item [{$\Lsw$}] Denotes the state of the mapping between the two output channels of the electric field voltage supply and the two electric field plates, which can be either connected normally $\left(+1\right)$ or inverted relative to normal $\left(-1\right)$. \item [{$\Rsw$}] Denotes the state of an experimental switch of the state readout polarisation basis offset angle with respect to the $x$-axis by either 0 $\left(+1\right)$ or $\pi/2$ $\left(-1\right)$. \item [{$\Gsw$}] Denotes the state of an experimental switch of the global polarisation; the state preparation and state readout lasers are rotated synchronously by a common angle. This can be thought of as a redefinition of the $\hat{x}$ and $\hat{y}$ axes in the $xy$ plane. \item [{$\B_{z}$}] Denotes the magnitude of the magnetic field along the $\hat{z}$ direction in the laboratory, $\B_{z}=|\vec{\B}\cdot\hat{z}|$. This parameter is switched between three values differing by about $20^{\:}\mathrm{mG}$. In figure~\ref{fig:pixel_plot}, channels $X$ that are `odd' with respect to this parameter refer to the linear variation $\partial X/\partial\mathcal{B}_{z}$. \item [{$\E$}] Denotes the magnitude of the electric field, $\E=|\vec{\E}|$. This parameter is switched between two values. \item [{$\hat{k}\cdot\hat{z}$}] Denotes the orientation of both the state preparation and the state readout laser pointing directions with respect to the laboratory $\hat{z}$ axis. This is a binary switch, $\hat{k}\cdot\hat{z}=\pm1$, but we do not denote this switch with a tilde as we do with the other binary switch parameters. \end{description} \subsection{Laser Parameters} A variety of laser parameters are used to describe the lasers. A subscript `prep' denotes the state preparation laser; a subscript `read' denotes a property that applies to both state readout lasers; subscripts $X$ and $Y$ are used when the parameter can vary between the two readout lasers. 
\begin{description} \item [{$\hat{k}$}] Laser pointing direction. In this paper, the pointing direction is always nearly aligned or antialigned with respect to the laboratory $\hat{z}$ axis such that $\hat{k}\cdot\hat{z}\approx\pm1$. \item [{$\vartheta_{k}$}] Defined in equation~\ref{eq:pointing_imperfection}. Polar angle of deviation of the pointing $\hat{k}$ from aligned or anti-aligned with the $\hat{z}$ axis. \item [{$\varphi_{k}$}] Defined in equation~\ref{eq:pointing_imperfection}. Azimuthal angle denoting the direction in the $xy$ plane, relative to the $x$-axis, of the deviation of the pointing $\hat{k}$ from the $\hat{z}$ axis. \item [{$\hat{\epsilon}$}] Complex laser polarisation. The readout laser polarisations are also referred to as $\hat{X}$ and $\hat{Y}$ as an alternative to $\hat{\epsilon}_{X}$ and $\hat{\epsilon}_{Y}$ at some points. \item [{$\hat{\varepsilon}$}] Effective polarisation. Used to parameterize the effect of experiment imperfections on the molecule state as the polarisation vector that would be required to obtain the same molecule state in the absence of those experiment imperfections. \item [{$\theta$}] Defined in section~\ref{sec:Measurement_scheme} and equation~\ref{eq:polarization_parametrization} as the linear polarisation angle of the complex polarisation vector. \item [{$\Theta$}] Defined in section~\ref{sec:Measurement_scheme} and equation~\ref{eq:polarization_parametrization} as encoding the ellipticity of the complex polarisation vector. \item [{$S$}] Defined in section~\ref{sec:Measurement_scheme_more_detail} as the relative circular Stokes parameter, $S\equiv S_{3}/I=\cos2\Theta$. \item [{$\omega_{\rm L}$}] Laser frequency. \item [{$P$}] Laser power. \item [{$\Omega_{\rm r}$}] Rabi frequency for a particular laser beam and transition. Defined as the transition dipole matrix element multiplied by the amplitude of the electric field associated with the laser beam. 
\item [{$\Gamma$}] Optical retardance for some birefringent element along the laser beam path. \item [{$\phi_{\Gamma}$}] Angle in the $xy$ plane of the fast axis associated with an optical retardance $\Gamma$. \end{description} \subsection{Molecular States and Parameters} These symbols are all used to describe the molecular energy level structure and the manner in which our laser light interacts with the molecules, in particular for the state preparation and readout processes. \begin{description} \item [{$J$}] Total angular momentum. \item [{$M$}] Projection of $J$ onto the laboratory $\hat{z}$-axis. \item [{$\Omega$}] Projection of $J$ onto the internuclear axis, $\hat{n}$. \item [{$B_H$}] Rotational constant of the $H$ state. \item [{$\Eeff$}] `Effective electric field' to which we consider the eEDM to be subjected. \item [{$\Delta_{\Omega,1}$}] The $\Omega$-doublet splitting of the $\ket{H,J=1}$ state. \item [{$D_1$}] Expectation value of the molecular electric dipole moment of the $\ket{H,J=1}$ state. \item [{$g_1$}] The $g$-factor of the $\ket{H,J=1}$ state. \item [{$\eta$}] Defined in equation~\ref{eq:eta_2}, it is proportional to the $g$-factor difference between the two $\Nsw$ states. \item [{$\left|\pm,\Nsw\right\rangle $}] Sublevels within the $\ket{H,J=1}$ (eEDM sensitive) manifold, labelled by their values of $M$ and $\Nsw$. \item [{$\left|C,\Psw\right\rangle $}] Sublevel to which molecules are excited during state preparation and readout. One of two sublevels in the $\ket{C,J=1}$ manifold, with $M=0$ and parity $\Psw=\pm1$. \item [{$\left|B(\hat{\epsilon}),\Nsw,\Psw\right\rangle $}] Superposition of $M$ sublevels within the $\ket{H,J=1,\Nsw}$ manifold that is depleted during state preparation with a laser beam of polarisation $\hat{\epsilon}$, as defined in equation~\ref{eq:bright_state}. 
\item [{$\left|D(\hat{\epsilon}),\Nsw,\Psw\right\rangle $}] Superposition of $M$ sublevels within the $\ket{H,J=1,\Nsw}$ manifold that remains after state preparation with a laser beam of polarisation $\hat{\epsilon}$, as defined in equation~\ref{eq:dark_state}. \item [{$\left|B_{\pm}(\hat{\epsilon}),\Nsw,\Psw\right\rangle$}] Instantaneous eigenvectors of the three-level system formed by $\left|B(\hat{\epsilon}),\Nsw,\Psw\right\rangle$, $\left|D(\hat{\epsilon}),\Nsw,\Psw\right\rangle$ and $\left|C,\Psw\right\rangle$, as defined in equation~\ref{eq:inst_eigv}. \item [{$\Delta$}] One-photon detuning from resonance, discussed in section~\ref{sec:state_prep_read} and defined in equation~\ref{eq:detuningcorrelations}. \item [{$\gamma$}] Decay rate of a given electronic state. The electronic state label is given in the subscript. In most of the paper, only $\gamma_C$, the decay rate of the $C$ state, is relevant. \item [{$\Omega_{\rm r}$}] Transition Rabi frequency, which is proportional to the square root of the laser intensity. \item [{$E_{B\pm},^{\:}E_{D}$}] Instantaneous eigenenergies of the dressed three-level system, defined in equation~\ref{eq:inst_eig}. \item [{$\dot{\chi}$}] Complex polarisation rotation rate defined in section~\ref{sssec:AC_stark_shift_phases}. \item [{$\Pi$}] Defined and discussed in section~\ref{sssec:AC_stark_shift_phases} and equation~\ref{eq:Pi_def}. This is a factor in the AC Stark shift phase that is independent of laser polarisation but depends on the laser detuning and Rabi frequency. \item [{$v_{\parallel}$}] The mean longitudinal velocity of the molecular beam. \end{description} \subsection{Measurement Quantities} These symbols represent quantities related to the measurement of the accumulated phase and the way in which it is extracted during data analysis, as well as some related quantities pertaining to systematic studies. 
\begin{description} \item [{$N$}] Total number of measurements performed, equivalent to the number of detected photoelectrons. \item [{$N_0$}] Number of molecules in the state readout region in the particular $\Nsw$ level being addressed. \item [{$f$}] Fraction of fluorescence photons emitted in the state readout region that are detected. \item [{$S$}] Recorded photoelectron count rate measured on the photodetectors. \item [{$F$}] Photoelectron count rate due to the molecule fluorescence. $F_{X,Y}$ is used to denote the molecular fluorescence induced by the $X$ and $Y$ state readout lasers, respectively. $F_{\mathrm{cut}}$ is used to denote the fluorescence threshold above which data was included in the analysis. \item [{$B$}] Background count rate primarily due to scattered light from the state readout lasers. This background signal is subtracted from the raw photoelectron signal $S$ to obtain the fluorescence photoelectron count rate, $F=S-B$. \item [{$\mathcal{A}$}] Signal asymmetry as defined in equation~\ref{eq:Asymmetry}. \item [{$\mathcal{C}$}] Spin precession fringe contrast, as defined in equation~\ref{eq:Contrast_Definition}, is the sensitivity of the asymmetry to molecular spin precession. \item [{$\phi$}] Actual spin precession phase of the molecules as defined in equation~\ref{eq:total_phase}. \item [{$\Phi$}] Measured spin precession phase as described in section~\ref{sec:Measurement_scheme_more_detail}, $\Phi=\mathcal{A}/(2\mathcal{C})$. \item [{$\tau$}] Measured spin precession time as described in sections~\ref{sec:Measurement_scheme} and \ref{sec:compute_phase}. \item [{$\omega$}] Measured spin precession frequency, as defined in equation~\ref{eq:omega_def}, $\omega=\Phi/\tau$. 
\item [{$\chi^{2}$}] Reduced chi-squared statistic, $\chi^2=\frac{1}{N_{\rm dof}}\sum_i\left(\frac{x_i - f_i(\{x\})}{dx_i}\right)^2$, where $N_{\rm dof}$ is the number of degrees of freedom, $x_i$ are the data points, $dx_i$ are the uncertainties, and $f_i(\{x\})$ is a fit function that can depend on $i$ and the ensemble of all of the data, $\{x\}$. For normally distributed data that fits well to the applied fit function, $\chi^2$ should be consistent with 1. \item [{$\omega^{\mathcal{NE}}$}] The measurement channel of interest, the spin precession frequency channel that is correlated with $\Nsw$ and $\Esw$. The expected eEDM signal should contribute to this channel. \item [{$\omega^{\mathcal{NE}}_T$}] The contribution to spin precession frequency $\omega^{\mathcal{NE}}$ induced by $T$-odd spin precession effects in the $H$ state in ThO. \item [{$\omega^{\mathcal{NE}}_P$}] A systematic error in the $\omega^{\mathcal{NE}}$ channel that is proportional to some parameter $P$. \end{description} \section{Introduction} \input{Introduction_FINAL.tex} \section{Atom and Molecule eEDM Experiments} \subsection{Theory} \input{theory_FINAL.tex} \subsection{ThO Molecule} \input{ThO_Molecule_FINAL.tex} \section{ACME Experiment} \subsection{Overview of Measurement Scheme} \input{measurement_scheme_FINAL.tex} \subsection{Apparatus} \input{apparatus_FINAL.tex} \section{Data Analysis} \input{Data_Analysis_FINAL.tex} \section{Systematic Errors} \input{Systematics_FINAL.tex} \section{Interpretation} \input{Interpretation_FINAL.tex} \section{Summary and Outlook} \input{Conclusion_FINAL.tex} \subsubsection{Basic Measurement Scheme} \hspace*{\fill} \\ We performed a spin precession measurement, resembling previous beam-based eEDM experiments \cite{Hudson2011,Regan2002,Commins1994}, on $^{232}\rm{Th}^{16}\rm{O}$ molecules in a pulsed molecular beam generated by a cryogenic buffer gas beam source. Figure~\ref{fig:meas_scheme_simple} shows a simplified schematic of the measurement. 
The molecules fly at velocity $v\approx200$~m/s into a magnetically shielded region with nominally uniform and parallel electric $\vec{\E}$ and magnetic $\vec{\B}$ fields. Molecule population is transferred from $|X^1\Sigma^+,J=1,M=\pm1\rangle$ in the electronic ground state to the metastable $|H,J=1,M=\pm1,\Omega=\tilde{\N}\tilde{\E}M\rangle\equiv|\pm,\tilde{\N}\rangle$ state manifold (in the $\ket{\pm,\Nsw}$ nomenclature we use $\pm$ to refer to $M=\pm1$) by optical pumping through the short-lived $|A^3\Pi_{0^+},J=0,M=0\rangle$ state with a 943~nm laser. This results in an even distribution of population in an incoherent mixture of the four $|\pm,\Nsw\rangle$ states in $H$.\footnote{A glossary of symbols used throughout this paper is provided in section~\ref{sec:glossary}.} Figure~\ref{fig:ThOlevels} shows the electronic states of ThO relevant to the eEDM measurement. \begin{figure}[!ht] \centering \includegraphics[width=0.5\textwidth]{ThO_Levels.pdf} \caption{Levels and transitions in ThO used in our measurement of the eEDM, based on \cite{Vutha2010,Edvinsson1985,Paulovic2003}. Solid arrows indicate transitions we address with lasers; wavy arrows indicate spontaneous decays of interest. For more details on how these transitions were used, see the main text.} \label{fig:ThOlevels} \end{figure} In the absence of any experimental imperfections, we describe our system in terms of coordinate axes $+\hat{z}$ along $+\vec{\E}$ (for a specified sign of applied field that we denote as positive, pointing approximately east to west in the lab) and $+\hat{x}$ along the direction of the molecular beam (which travels approximately south to north) such that $+\hat{y}$ is approximately aligned with gravity (cf.\ figure~\ref{fig:meas_scheme_simple}). 
Note that when we reverse the direction of the electric field, by construction the laboratory coordinate system does not change and the orientation of the electric field can be described by $\Esw\equiv{\rm sgn}(\hat{z}\cdot\vec{\E})=\pm1$. Analogously, we reverse the direction of the magnetic field between two $\Bsw\equiv{\rm sgn}(\hat{z}\cdot\vec{\B})=\pm1$ states. Since the directions of the fields are encoded by $\Esw$ and $\Bsw$, we define the magnitudes of the fields simply as $\B_z\equiv|\B_z|$ and $\E\equiv|\vec{\E}|$. A superposition of the $M=\pm1$ sublevels is prepared by optically pumping on the transition at 1090~nm between states $|\pm,\Nsw\rangle$ and $|C^1\Pi_1,J=1,M=0\rangle(|\Omega=+1\rangle-\Psw|\Omega=-1\rangle)/\sqrt{2}\equiv|C,\Psw\rangle$, where $\Psw=\pm1$ is the excited state parity\footnote{In this paper we follow the convention given in \cite{Brown2003}.}, with laser light linearly polarised in the $xy$ plane. The resulting state corresponds to having the total angular momentum of the molecule aligned in the $xy$ plane. Because the $\sigma$ electron's spin is aligned with $\vec{J}$, by the Wigner-Eckart theorem this is equivalent to aligning the spin \cite{Budker2008}, and we use this shorthand from here on. The state preparation laser frequency is tuned to spectroscopically select the molecule alignment $\Nsw$, while the nearly degenerate $M=\pm1$ states remain unresolved. The excited state $C$, which decays at a rate $\gamma_C\approx2\pi\times0.3$ MHz, decays primarily (${\approx}75~\%$ \cite{Hess2014}) to the ground state so that one superposition of the two $|\pm,\Nsw\rangle$ states is optically pumped out of $H$ and the remaining orthogonal superposition, which is `dark' to the preparation laser beam, is the prepared state. 
The linear polarisation of the state preparation laser beam, $\hat{\epsilon}_{{\rm prep}}$, sets the relative coupling of each of the two $|\pm,\Nsw\rangle$ states to $\ket{C,\Psw}$ and determines the spin alignment angle of the remaining state in the laboratory frame. The bright superposition $\ket{B(\hat{\epsilon}_{\rm prep})}$ is pumped away, and the orthogonal dark superposition $\ket{D(\hat{\epsilon}_{\rm prep})}$ remains. For the moment, we consider the specific case $\Psw=+1$ and $\hat{\epsilon}_{{\rm prep}}=\hat{x}$ (the general case will be discussed in section \ref{sec:Measurement_scheme_more_detail}). In this case, the prepared state \begin{equation} \ket{\psi(t=0),\Nsw}=\frac{1}{\sqrt{2}}\left(\ket{+,\Nsw}-\ket{-,\Nsw}\right) \label{eq:initial_state} \end{equation} has the electron spin aligned along the $\hat{y}$ axis. As the molecules traverse the spin precession region of length $L=22$ cm (which takes a time $\tau\approx1$~ms), the electric and magnetic fields exert torques on the electric and magnetic dipole moments, causing the spin to precess in the $xy$ plane by angle $2\phi$; this corresponds to the state \begin{equation} |\psi(t=\tau),\Nsw\rangle=\frac{1}{\sqrt{2}}\left(e^{-i\phi}|+,\Nsw\rangle-e^{+i\phi}|-,\Nsw\rangle\right), \end{equation} where $\phi$ is given approximately by the sum of the Zeeman and eEDM contributions to the spin precession angles, \begin{equation} \phi=-(\Bsw g_1\mu_{\rm B}\B_z+\Nsw\Esw d_e\Eeff)\tau. \label{eq:simple_phase} \end{equation} The sign of the eEDM term, $\Nsw\Esw$, arises from the relative orientation between $\vec{\E}_{\rm eff}$ and the electron spin as illustrated in figure~\ref{fig:H-state}. At the end of the spin precession region, we measure $\phi$ by optically pumping on the same $H\rightarrow C$ transition with the linearly polarised state readout laser beam. 
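The sign structure of equation~\ref{eq:simple_phase} can be illustrated with a short numerical sketch (Python; the rates below are arbitrary placeholder values, not the experimental parameters). Averaging $\phi$ over all eight $(\Nsw,\Esw,\Bsw)$ sign combinations with the appropriate parity signs isolates the Zeeman and eEDM contributions separately:

```python
from itertools import product

# Placeholder angular frequencies (rad/s), illustrative only (hbar = 1).
zeeman_rate = 1.0e3   # stands in for g_1 * mu_B * B_z
edm_rate = 1.0e-3     # stands in for d_e * E_eff
tau = 1.0e-3          # spin precession time (s)

def phi(N, E, B):
    """Precession phase of eq. (simple_phase) for switch state (N, E, B)."""
    return -(B * zeeman_rate + N * E * edm_rate) * tau

states = list(product([+1, -1], repeat=3))  # all 2^3 (N, E, B) sign states

# Component of phi odd under B (Zeeman) and odd under N*E (eEDM):
phi_B  = sum(B * phi(N, E, B) for N, E, B in states) / len(states)
phi_NE = sum(N * E * phi(N, E, B) for N, E, B in states) / len(states)

print(phi_B)   # -> -zeeman_rate * tau
print(phi_NE)  # -> -edm_rate * tau
```

The eEDM term survives only in the $\Nsw\Esw$-odd average, even though it is roughly six orders of magnitude smaller than the Zeeman term in this sketch.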
The polarisation alternates rapidly between two orthogonal linear polarisations $\hat{X}$ and $\hat{Y}$, such that each molecule is subject to excitation by both polarisations as it flies through the detection region, and we record the modulated fluorescence signals $F_X$ and $F_Y$ from the decay of $C$ to the ground state at 690 nm. This procedure amounts to a projective measurement of the spin onto $\hat{X}$ and $\hat{Y}$, which are defined such that $\hat{X}$ is at an angle $\theta$ with respect to $\hat{x}$ in the $xy$ plane. To determine $\phi$ we compute the asymmetry, \begin{equation} \mathcal{A}\equiv\frac{F_X-F_Y}{F_X+F_Y}\propto\cos{[2(\phi-\theta)]}. \label{eq:asymmetry} \end{equation} We set $\B_z$ and $\theta$ such that $\phi-\theta\approx(\pi/4)(2n+1)$ for integer $n$, so that the asymmetry is linearly proportional to small changes in $\phi$ and maximally sensitive to the eEDM. A simplified schematic of the experimental procedure just described is shown in figure~\ref{fig:meas_scheme_simple}. \hspace*{\fill} \\ \begin{figure}[!ht] \centering \includegraphics[width=16cm]{meas_scheme_simple.pdf} \caption{Simplified schematic of the measurement scheme; numbers next to energy levels label $J$. \textbf{1.} Molecules in the $\ket{X,J=1}$ state are optically pumped via the $A$ state into $\ket{H,J=1}$ by a retroreflected (and offset in $x$) laser beam (blue arrows into/out of page), polarised along $\hat{x}$ and $\hat{y}$ (blue arrows). \textbf{2.} Molecules from one of the $\Nsw$ states are then prepared in a superposition of $M$ sublevels ($M=-1,0,+1$ from left to right) by a linearly polarised laser beam (red) addressing the $H\rightarrow C$ transition. This aligns the molecule's angular momentum, $\vec{J}$, which in turn aligns the spin of the eEDM-sensitive $\sigma$ electron, which is on average aligned with $\vec{J}$. 
\textbf{3.} The angular momentum (and hence electron spin) then precesses due to the electric and magnetic fields present (into the page) by an angle $\phi$. This precession is dominated by the magnetic interaction but also includes a term linear in $d_e$ (see equation~\ref{eq:simple_phase}). \textbf{4.} The spin state is projected onto orthogonal superpositions of the $M$ sublevels by laser beams polarised along $\hat{X},\hat{Y}$ (red arrows). The resulting fluorescence is determined by the population in each superposition state and hence the precession angle $\phi$.} \label{fig:meas_scheme_simple} \end{figure} By repeating the measurement of $\phi$ after having reversed any one of the signs $\Nsw$, $\Esw$ or $\Bsw$, we may isolate the eEDM phase from the Zeeman phase. In practice, we repeat the phase measurement under all $2^3$ $(\Nsw,\Esw,\Bsw)$ experiment states to reduce the sensitivity of the eEDM measurement to other spurious phases, and we extract the phase $\phi^{\N\E}=-d_e\Eeff\tau=\phi_{\rm EDM}$. Here, we have introduced the notation $\phi^u$, discussed in detail in the next section, which we use throughout this document to refer to the component of $\phi$ that is odd under the set of switches listed in the superscript $u$, and implicitly even under those which are not listed (see section~\ref{sec:Measurement_scheme_more_detail} and equation~\ref{eq:general_parity} for a rigorous definition). A component which is even under all switches is considered to be `non-reversing' and is given an `nr' superscript. \subsubsection{Measurement Scheme in Detail} \label{sec:Measurement_scheme_more_detail} \hspace*{\fill} \\ To fully describe the method by which we extracted $d_e$ from the data in section \ref{sec:data_analysis}, and to describe the systematic error models in section \ref{sec:systematics}, we must introduce additional formalism that generalizes the simple spin precession measurement described in the previous section. 
We work in the regime in which the Stark shift in $H$ is approximately linear, $E_{\rm Stark}\approx-\Nsw D_1\E$, which holds when the Stark interaction energy is large compared to the $\Omega$-doublet energy splitting $\Delta_{\Omega,1}$ but small compared to the rotational energy scale, described by the $H$-state rotational constant $B_H\approx2\pi\times$~9.8 GHz, i.e. $\Delta_{\Omega,1}\ll D_1\E\ll B_H$. In this regime, the molecular alignment is approximately related to $\Omega$ by $\Nsw=\Esw M\Omega$; this relation is assumed throughout this document. This is a good approximation, but it is notable that due to the Stark interaction at first order in perturbation theory, each $|M,\Nsw\rangle$ state is a superposition of all four $|H,J,M,\Omega\rangle$ states with $J=1,2$ and $\Omega=\pm1$. This effect is discussed further in sections~\ref{sssec:correlated_laser_parameters} and \ref{sssec:laser_pointing_and_intensity}. Let us consider the preparation of a spin-aligned state again. Starting from an incoherent mixture of the four $|\pm,\Nsw\rangle$ states, we perform optical pumping on the electric dipole transition between $\ket{\pm,\Nsw}$ and $\ket{C,\Psw}$, for a specific $\Nsw$, with laser light of polarisation $\hat{\epsilon}_{{\rm prep}}$ that is nominally linear in the $xy$ plane. This step depletes the bright superposition state (see e.g. \cite{Bickman2009}) \begin{equation} \ket{B(\hat{\epsilon}_{{\rm prep}},\Nsw,\Psw)}=(\hat{\epsilon}_{+1}^{*}\cdot\hat{\epsilon}_{{\rm prep}}^{*})\ket{+,\Nsw}-\Psw(\hat{\epsilon}_{-1}^{*}\cdot\hat{\epsilon}_{{\rm prep}}^{*})\ket{-,\Nsw}, \label{eq:bright_state} \end{equation} where $\hat{\epsilon}_{\pm1}=\mp\left(\hat{x}\pm i\hat{y}\right)/\sqrt{2}$ are unit vectors for circular polarisation. 
The corresponding dark state (with which the laser does not interact) is the orthogonal superposition \begin{equation} \ket{D(\hat{\epsilon}_{{\rm prep}},\Nsw,\Psw)}=(\hat{\epsilon}_{+1}^{*}\cdot\hat{\epsilon}_{{\rm prep}})\ket{+,\Nsw}+\Psw(\hat{\epsilon}_{-1}^{*}\cdot\hat{\epsilon}_{{\rm prep}})\ket{-,\Nsw}. \label{eq:dark_state} \end{equation} This dark state serves as the initial state, $|\psi(0),\Nsw\rangle = |D(\hat{\epsilon}_{{\rm prep}},\Nsw,\Psw=+1)\rangle$, for the spin-precession experiment, where we fixed the state preparation laser frequency to address the excited state with parity $\Psw=+1$. The state preparation laser polarisation can be parameterised as \begin{equation} \hat{\epsilon}_{{\rm prep}}=-e^{-i\theta_{{\rm prep}}}\cos\Theta_{{\rm prep}}\hat{\epsilon}_{+1}+e^{+i\theta_{{\rm prep}}}\sin\Theta_{{\rm prep}}\hat{\epsilon}_{-1}, \label{eq:polarization_parametrization} \end{equation} where $\Theta_{{\rm prep}}\approx\pi/4$ defines the ellipticity Stokes parameter $(S_3/I)_{{\rm prep}}=\cos2\Theta_{{\rm prep}}\approx0$, and $\theta_{{\rm prep}}$ defines the linear polarisation angle with respect to $\hat{x}$ in the $xy$ plane. From here on, we refer to the ellipticity Stokes parameter as $S\equiv S_3/I$. There is a one-to-one correspondence between the dark state superposition and the projection of the laser polarisation $\hat{\epsilon}_{{\rm prep}}$ onto the $xy$ plane. If the laser polarisation does not lie entirely in the $xy$ plane, equations \ref{eq:bright_state} and \ref{eq:dark_state} are still appropriate, but require normalization. Note that if the laser is linearly polarised, switching the excited state parity $\tilde{\mathcal{P}}$ has the same effect on the dark state as rotating the laser polarisation angle by $\pi/2$. Following the initial state preparation, the molecules traverse the spin-precession region with their forward velocity nominally along $\hat{x}$. 
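The equivalence noted above between switching the excited-state parity and rotating a linear polarisation by $\pi/2$ can be checked directly from equations~\ref{eq:dark_state} and \ref{eq:polarization_parametrization}. The following sketch (Python/NumPy, purely illustrative) builds the dark-state amplitudes for linear polarisation ($\Theta_{\rm prep}=\pi/4$) and verifies that the two operations yield the same state up to a global phase:

```python
import numpy as np

def dark_state(theta, P, Theta=np.pi / 4):
    """Amplitudes (c_plus, c_minus) of |D> on the {|+>, |->} basis, from
    eqs. (dark_state) and (polarization_parametrization).
    Theta = pi/4 corresponds to purely linear polarisation at angle theta."""
    c_plus = -np.exp(-1j * theta) * np.cos(Theta)      # eps_{+1}^* . eps_prep
    c_minus = P * np.exp(+1j * theta) * np.sin(Theta)  # P (eps_{-1}^* . eps_prep)
    v = np.array([c_plus, c_minus])
    return v / np.linalg.norm(v)

theta = 0.3  # arbitrary linear polarisation angle (rad)
d_rotated = dark_state(theta + np.pi / 2, P=+1)  # rotate polarisation by pi/2
d_parity = dark_state(theta, P=-1)               # switch excited-state parity

# Overlap magnitude 1 means the states are identical up to a global phase.
overlap = abs(np.vdot(d_rotated, d_parity))
print(overlap)  # -> 1.0
```

Here the inner products $\hat{\epsilon}_{\pm1}^{*}\cdot\hat{\epsilon}_{{\rm prep}}$ have been evaluated analytically using the orthonormality of the circular polarisation vectors, so only the resulting two-component amplitudes appear in the code.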
In this region there are nominally uniform and parallel electric ($\vec{\E}$) and magnetic ($\vec{\B}$) fields, which produce energy shifts given by \begin{equation} E(M,\Nsw) =-|M|D_1\E\Nsw-Mg_1\mu_{\rm{B}}\B_z\tilde{\B}-M\eta\mu_{\rm{B}}\E\B_z\Nsw\tilde{\B}-Md_e\Eeff\Nsw\tilde{\E}, \label{eq:Energy} \vspace{10pt} \end{equation} where $D_1$ is the electric dipole moment of $\ket{H,J=1}$. Here $\eta=0.79(1)$~nm/V accounts for the $\E$-dependent magnetic moment difference between the two sets of $\Nsw$ levels in $\ket{H,J=1}$ \cite{Petrov2014}, as described in section~\ref{sec:compute_phase}. The energy shift terms that depend on the sign of $M$ contribute to the spin precession angle $\phi$, which is given by: \begin{equation} \phi=\frac{1}{2}\int_0^L(E(M=+1,\Nsw)-E(M=-1,\Nsw))\frac{{\rm d}x}{v}. \label{eq:total_phase} \end{equation} This phase is dominated by the magnetic (Zeeman) interaction. The Stark shift, proportional to $|M|$, does not contribute. The state then evolves to: \begin{equation} |\psi(\tau),\Nsw\rangle = \left(e^{-i\phi}|+,\Nsw\rangle \langle +,\Nsw| + e^{+i\phi}|-,\Nsw\rangle \langle -,\Nsw|\right)|\psi(0),\Nsw\rangle, \end{equation} (recall $\ket{\psi(0),\Nsw}=\ket{D(\hat{\epsilon}_{{\rm prep}},\Nsw,\Psw=+1)}$ per equation~\ref{eq:dark_state}) and molecules enter a detection region where the state is read out by optically pumping again between the $\ket{H,J=1}$ and $\ket{C,J=1}$ manifolds. 
This optical pumping is performed alternately by two laser beams with nominally orthogonal linear polarisations $\hat{\epsilon}_X$ and $\hat{\epsilon}_Y$.\footnote{For convenience, the notation $\hat{\epsilon}_{X}$, $\hat{\epsilon}_{Y}$ is used interchangeably with the previously used notation $\hat{X}$, $\hat{Y}$.} These beams excite the projection of $\ket{\psi(\tau),\Nsw}$ onto the bright states \begin{equation} \ket{B(\hat{\epsilon}_X,\Nsw,\Psw)}\quad{\rm and}\quad\ket{B(\hat{\epsilon}_Y,\Nsw,\Psw)}, \end{equation} (with the same $\Nsw$ that was addressed in the state preparation optical pumping step, but with an independent choice of $\Psw$) with probability $P_{X,Y}$ respectively. In the ideal case in which all laser polarisations are exactly linear, this probability is given by \begin{equation} \label{eq:xprojection} P_{X,Y}(\phi,\theta_{{\rm prep}},\theta_{ X,Y},\Nsw,\Psw)=\left|\braket{B(\hat{\epsilon}_{X,Y},\Nsw,\Psw)|\psi(\tau),\Nsw}\right|^2=\left[1-\Psw\cos(2(\theta_{{\rm prep}}-\theta_{X,Y}+\phi))\right]/2, \end{equation} where $\theta_{X,Y}$ are the linear polarisation angles of the state readout beams, with respect to $\hat{x}$. The result is a signal that varies sinusoidally with the precession angle $\phi$. To measure these probabilities, we observe the associated modulated fluorescence signals, $F_{X,Y}=fN_0P_{X,Y}$, where $N_0$ is the number of molecules in the addressed $\Nsw$ level at the state readout region, and $f$ is the fraction of total fluorescence photons that are detected. To distinguish between molecule number fluctuations and phase variations, we normalize with respect to the former by rapidly switching the state readout laser between the two orthogonal polarisations, $\hat{\epsilon}_{X,Y}$, every 5~$\upmu$s. 
This is significantly quicker than fluctuations in the molecule number and is sufficiently quick that every molecule is interrogated by both polarisations (see section~\ref{sec:data_analysis} or \cite{Kirilov2013} for more details). We then form an asymmetry $\A$, which is immune to molecule number fluctuations, given by \begin{equation} \A=\frac{F_X-F_Y}{F_X+F_Y}=\Psw\cos[2(\phi-\theta)], \label{eq:Asymmetry} \end{equation} where we have assumed that the readout polarisations are exactly orthogonal, given by $\theta_X=\theta_{\rm{read}}$ and $\theta_Y=\theta_{\rm{read}}+\pi/2$, and where we have defined $\theta\equiv\theta_{\rm{read}}-\theta_{{\rm prep}}$.\footnote{Note that this reduces to equation~\ref{eq:asymmetry} for $\theta_{\rm prep}=0$ (i.e. $\hat{\epsilon}_{\rm prep}=\hat{x}$) and $\Psw=+1$.} In this equation and from now on unless otherwise noted, $\Psw$ refers to the excited state parity that is addressed by the state readout laser, not to be confused with the excited state parity addressed by the state preparation laser, which is kept fixed. The value of $\mathcal{B}_z$ and the state preparation and readout laser beam polarisations are chosen so that $|\phi-\theta|\approx\pi/4$. This corresponds to the linear part of the asymmetry fringe in equation~(\ref{eq:Asymmetry}), where $\A$ is most sensitive to, and linearly proportional to, small changes in $\phi$ (cf.\ figure~\ref{fig:fringe}). A variety of effects, including imperfect optical pumping, decay from $C$ back to $H$, elliptical laser polarisation and forward velocity dispersion, reduce the measurement sensitivity by a `contrast' factor \begin{equation} \C\equiv-\frac{1}{2}\frac{\partial\A}{\partial\theta}\approx \frac{1}{2}\frac{\partial\A}{\partial\phi}, \label{eq:Contrast_Definition} \end{equation} with $|\C|\le1$. 
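The way a small phase is recovered from the asymmetry fringe can be sketched as follows (Python; all values are illustrative). The contrast of equation~\ref{eq:Contrast_Definition} is estimated by a finite-difference dither of the readout polarisation angle, and $\mathcal{A}/(2\mathcal{C})$ then returns the small deviation of $\phi-\theta$ from $\pi/4$:

```python
import numpy as np

def asymmetry(phi, theta, P=+1):
    """Ideal asymmetry fringe, eq. (Asymmetry)."""
    return P * np.cos(2 * (phi - theta))

phi = 0.0                   # true precession phase (illustrative)
theta0 = -np.pi / 4 - 0.01  # chosen so that phi - theta0 = pi/4 + 0.01
dtheta = 0.05               # polarisation dither amplitude (rad)

# Contrast C = -(1/2) dA/dtheta, estimated by finite difference over the dither.
C = -(asymmetry(phi, theta0 + dtheta)
      - asymmetry(phi, theta0 - dtheta)) / (4 * dtheta)

# Normalised phase A/(2C) recovers the small offset of phi - theta from pi/4.
Phi = asymmetry(phi, theta0) / (2 * C)
print(Phi)  # approximately 0.01
```

In this sketch $\mathcal{C}\approx-1$, the ideal-fringe value on the linear part of the fringe; in the experiment its magnitude is reduced by the effects listed above.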
We measure this parameter by dithering $\theta=\theta^{\rm nr} + \Delta\theta\tilde{\theta}$ (where $\theta^{\rm nr}$ is the average or `non-reversing' polarisation angle) between states of $\tilde{\theta}=\pm 1$, with amplitude $\Delta\theta=0.05$~rad. We found that typically $|\C|\approx0.94$. We then extract the measured phase, $\Phi=\mathcal{A}/(2\mathcal{C})+q\pi/4$, by normalising the asymmetry measurements according to the measured contrast --- see section~\ref{sec:data_analysis} for more details on the data analysis methods used to evaluate this quantity. In the ideal case, the measured phase closely matches the precession phase, $\Phi\approx\phi$. However, a variety of effects that are investigated closely in section~\ref{sec:systematics} lead to slight deviations between these two quantities, which can contribute to systematic errors in the measurement. Unless explicitly specified, $\C$ is assumed to be an unsigned quantity from here on. In particular, when averaging over multiple states of the experiment, $|\C|$ is used. To isolate the eEDM term from other components of the energy shift in equation~(\ref{eq:Energy}), the experiment is repeated under different conditions that are characterised by parameters whose sign is switched regularly during the experiment. The spin precession measurement is repeated for all $2^4$ experiment states defined by the four primary binary switch parameters: $\Nsw$, the molecular orientation relative to the applied electric field (changed every 0.5~s); $\Esw$, the direction of the applied electric field in the laboratory (2~s); $\tilde{\theta}$, the sign of the readout polarisation dither (10~s); and $\tilde{\mathcal{B}}$, the direction of the applied magnetic field in the laboratory (40~s). 
For each ($\Nsw,\Esw,\tilde{\mathcal{B}}$) state, the asymmetry $\mathcal{A}(\Nsw,\Esw,\tilde{\mathcal{B}})$, contrast $\mathcal{C}(\Nsw,\Esw,\tilde{\mathcal{B}})$, and measured phase $\Phi(\Nsw,\Esw,\tilde{\mathcal{B}})$ are determined as described earlier. The data taken under all $2^4=16$ experimental states derived from these four binary switches constitutes a `block' of data. We can write the phase $\Phi(\Nsw,\Esw,\tilde{\mathcal{B}})$ in terms of components with particular parity with respect to the experimental switches: \begin{align} \Phi(\tilde{\mathcal{N}},\tilde{\mathcal{E}},\tilde{\mathcal{B}})=&\Phi^{\mathrm{nr}}+\Phi^{\mathcal{N}}\tilde{\mathcal{N}}+\Phi^{\mathcal{E}}\tilde{\mathcal{E}}+\Phi^{\mathcal{B}}\tilde{\mathcal{B}}\nonumber\\+&\Phi^{\mathcal{NE}}\tilde{\mathcal{N}}\tilde{\mathcal{E}}+\Phi^{\mathcal{NB}}\tilde{\mathcal{N}}\tilde{\mathcal{B}}+\Phi^{\mathcal{EB}}\tilde{\mathcal{E}}\tilde{\mathcal{B}}+\Phi^{\mathcal{NEB}}\tilde{\mathcal{N}}\tilde{\mathcal{E}}\tilde{\mathcal{B}}. \label{eq:phase_parity} \end{align} We refer to these components as `switch-parity channels'. A channel is said to be odd with respect to some subset of switches (labelled as superscripts) if it changes sign when any of those switches is performed. Thus it will also change sign if an odd number of those switches is performed. It is implicitly even under all other switches. We use this general notation throughout this document to refer to correlations of various measured quantities and experimental parameters with experiment switches. 
To generalize, if we have $k$ binary experiment switches $(\tilde{\mathcal{S}}_{1},\tilde{\mathcal{S}}_{2},\dots,\tilde{\mathcal{S}}_{k})$ such that $\tilde{\mathcal{S}}_{i}=\pm1$, and we perform a measurement of the parameter $X(\tilde{\mathcal{S}}_{1},\tilde{\mathcal{S}}_{2},\dots,\tilde{\mathcal{S}}_{k})$ for a complete set of the $2^{k}$ switch states, then the component of $X$ that is odd under the product of switches $\left[\tilde{\mathcal{S}}_{a}\tilde{\mathcal{S}}_{b}\dots\right]$ is given by \begin{equation} X^{\mathcal{S}_{a}\mathcal{S}_{b}\dots}\equiv \frac{1}{2^{k}}\sum_{\tilde{\mathcal{S}}_{1}\dots\tilde{\mathcal{S}}_{k}=\pm1} \left[\tilde{\mathcal{S}}_{a}\tilde{\mathcal{S}}_{b}\dots\right]X\left(\tilde{\mathcal{S}}_{1},\tilde{\mathcal{S}}_{2},\dots, \tilde{\mathcal{S}}_{k}\right). \label{eq:general_parity} \end{equation} The switch parity behavior of a given component is expressed in the superscript which lists the experimental switches with respect to which the component is odd. We order the switch labels in the superscripts such that the fastest switches are listed first and the slowest switches are listed last. Some components give particularly important physical quantities. Most notably, the eEDM precession phase is extracted from the $\tilde{\mathcal{N}}\tilde{\mathcal{E}}$-correlated component of the measured phase: that is, in the ideal case $\Phi^{\mathcal{NE}}=-d_{e}\mathcal{E}_{\mathrm{eff}}\tau$. Additionally, the Zeeman precession phase is nominally given by $\Phi^{\B}=-\mu_{\rm B}g_1\mathcal{B}_z\tau$. Recall we label `non-reversing' components with an `nr' superscript. In a few cases, we drop the superscript parity because it is redundant. For example, we drop the superscript on the dominant components of the applied electric and magnetic fields, $\mathcal{E}\equiv\mathcal{E}^{\mathcal{E}}$ and $\mathcal{B}_{z}\equiv\mathcal{B}_{z}^{\mathcal{B}}$. 
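Equation~\ref{eq:general_parity} is effectively a discrete parity transform over the $2^{k}$ switch states. A minimal sketch (Python; the synthetic phase components are arbitrary illustrative numbers) demonstrates the channel extraction:

```python
from itertools import product

def parity_component(X, odd_switches, k):
    """Component of X odd under the listed switch indices, eq. (general_parity).

    X maps a tuple of k switch values (each +1 or -1) to a measurement."""
    total = 0.0
    for s in product([+1, -1], repeat=k):
        sign = 1
        for i in odd_switches:
            sign *= s[i]
        total += sign * X(s)
    return total / 2 ** k

# Synthetic measured phase with known (arbitrary) components, switches
# ordered (N, E, B): Phi = Phi_nr + Phi_N * N + Phi_NE * N*E + Phi_B * B.
def Phi(s):
    N, E, B = s
    return 0.7 + 0.2 * N + 1e-6 * N * E - 0.5 * B

print(parity_component(Phi, [0, 1], k=3))  # N*E-odd channel -> ~1e-6
print(parity_component(Phi, [2], k=3))     # B-odd channel   -> ~-0.5
```

Each channel picks out exactly one component; all components with a different switch parity average to zero over the complete set of states.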
Many other experimental parameters are also varied between blocks of data to suppress and monitor systematic errors (figure~\ref{fig:timing}). These `superblock' switches include: excited-state parity addressed by the state readout laser beams, $\Psw$ (chosen randomly after every block, with equal numbers of $\Psw=\pm1$); simultaneous change of the power supply polarity and interchange of leads connecting the electric field plates to their voltage supply, $\Lsw$ (4~blocks); a rotation of the state readout polarisation basis by $\theta_{\rm{read}}\rightarrow\theta_{\rm{read}}+\pi/2$ to interchange the roles of the $X$ and $Y$ beams, $\Rsw$ (8~blocks); and a global polarisation rotation of both state preparation and readout lasers by $\theta_{\rm{read}}\rightarrow\theta_{\rm{read}}+\pi/2$ and $\theta_{{\rm prep}}\rightarrow\theta_{{\rm prep}}+\pi/2$, $\Gsw$ (16~blocks). Additionally, the magnitude of the magnetic field, $\B_z$, was switched on the timescale of 64--128 blocks (${\sim}1$~hour), and the magnitude of the applied electric field, $\E$, and the laser propagation direction, $\hat{k}\cdot\hat{z}$, were changed on timescales of ${\sim}1$~day and ${\sim}1$~week, respectively. On these longer timescales, we also alternated between taking eEDM data under \textit{Normal} conditions, for which all experiment parameters were set to their nominally ideal values, and taking data with \textit{Intentional Parameter Variations} (IPVs), during which some experimental parameter was set to deviate from ideal so that we could monitor the size of the known systematic errors described in section \ref{sssec:correlated_laser_parameters}. 
We took IPV data in which we varied (a) the non-reversing electric field $\E^{\rm{nr}}$ and (b) the $\Nsw\Esw$-correlated Rabi frequency, $\Omega_{\rm r}^{\N\E}$, to measure the sensitivity of the eEDM measurement to these parameters and we varied (c) the state preparation laser detuning $\Delta_{{\rm prep}}$ to monitor the size of the residual $\E^{\rm{nr}}$. These systematic errors are discussed in more detail in sections~\ref{ssec:efields} and \ref{sssec:correlated_laser_parameters}. \begin{figure}[htbp] \centering \includegraphics[width=10cm]{Switch_Timescales.pdf} \caption[Timescales of experimental parameter switches]{A schematic of the switches performed during our experiment and the associated timescales. See the main text for a description of each of the switch parameters and a description of the distinction between the \textit{Normal} and IPV (\emph{Intentional Parameter Variation}) data types. The 15-hour run time and $|\E|$ switching timescale are approximate.} \label{fig:timing} \end{figure} The details of the data analysis required to extract the eEDM-correlated phase $\Phi^{\mathcal{NE}}$ are described in section \ref{sec:data_analysis}. A lower bound on the statistical uncertainty $\delta\Phi^{\mathcal{NE}}$ of the eEDM-correlated phase is given by photoelectron shot noise to be $\delta\Phi^{\mathcal{NE}}=1/(2|\C|\sqrt{N})$ for $N$ detected photoelectrons \cite{Khriplovich1997,VuthaThesis}. In the case where shot noise is the sole contribution, we can express the statistical uncertainty $\delta d_e$ in our measurement of the eEDM as \begin{equation} \delta\de=\delta\Phi^{\N\E}\frac{1}{\E_{\rm eff}\tau}=\frac{1}{2|\mathcal{C}|\tau\mathcal{E}_{\rm eff}\sqrt{\dot{N}T}}, \end{equation} where $\dot{N}\approx f\dot{N_0}$ is the measurement rate (equivalent to the photoelectron detection rate) and $T$ is the integration time (recall $f$ is the fraction of fluorescence photons detected and $N_0$ is the number of molecules in the addressed $\Nsw$ level). 
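The scaling of this shot-noise limit can be made concrete with a short sketch (Python; all parameter values are placeholders rather than the experimental ones, and units are chosen such that $d_e\Eeff$ is an angular frequency, i.e.\ $\hbar=1$):

```python
import math

def edm_uncertainty(contrast, tau, E_eff, rate, T):
    """Shot-noise-limited eEDM uncertainty:
    delta d_e = 1 / (2 |C| tau E_eff sqrt(rate * T))."""
    N = rate * T  # total number of detected photoelectrons
    delta_phi = 1.0 / (2 * abs(contrast) * math.sqrt(N))
    return delta_phi / (E_eff * tau)

# Placeholder values: contrast 0.94, tau = 1 ms, unit effective-field scale,
# and an arbitrary photoelectron detection rate.
d1 = edm_uncertainty(0.94, 1e-3, 1.0, 1e3, 3600.0)
d4 = edm_uncertainty(0.94, 1e-3, 1.0, 1e3, 4 * 3600.0)

# Quadrupling the integration time halves the uncertainty (1/sqrt(T) scaling).
print(d4 / d1)  # -> 0.5
```

The $1/\sqrt{\dot{N}T}$ scaling is why both the count rate and the total integration time enter the sensitivity on an equal footing.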
Further discussion of the achieved statistical uncertainty is presented in section~\ref{sec:data_analysis}. \subsection{eEDM Interaction} To make contact with common language in the literature about the eEDM in molecules, we first write the effective, nonrelativistic eEDM interaction in terms of an internal electric field $\vec{\mathcal{E}}_{\rm int}$. (As we will see, this is closely related, but not identical, to the effective field $\vec{\mathcal{E}}_{\rm eff}$.) We choose a convention where $\vec{\mathcal{E}}_{\rm int} = -\mathcal{E}_{\rm int} \hat{n}$. This means that the internal field vector is defined to be directed \textit{opposite} to $\hat{n}$, i.e., along the average direction of the electric field \textit{inside} the molecule (here, from positive Th ion to negative O ion) when $\mathcal{E}_{\rm int}$ is positive. We also adopt the convention that, in the $H$ state of ThO, there is an effective eEDM $\vec{d}_e^{\rm eff} = d_e\vec{S}$ (where again $S=1$ to a fair approximation). This choice appears, at first glance, to contradict the discussion in section~\ref{sec:theory}, where for a single electron we wrote $\vec{d}_e=2d_e\vec{s}$ (where $s=1/2$). However, these two definitions are in fact consistent when taking into account that in the $H~^3\Delta_1$ state of ThO only one of the two valence electrons (the one in the $\sigma$ orbital) contributes significantly to the EDM energy shift, while both electrons contribute to the total spin $S=1$. Hence, in our formulation, the molecule-frame projection $\vec{d}_e^{\rm eff}\cdot\hat{n}$ can take extreme values $\pm d_e$, as expected for a single contributing electron. (This `single contributing electron' approximation is valid for all molecules used to date in searches for the eEDM.) 
We then write the effective eEDM Hamiltonian $H_{\rm EDM}^{\rm eff}$ in the standard form for interaction of an electric dipole moment with the internal electric field: \begin{equation} H_{\rm EDM}^{\rm eff}=-\vec{d}_e^{\rm eff}\cdot\vec{\mathcal{E}}_{\rm int}= +d_e \mathcal{E}_{\rm int} \vec{S}\cdot\hat{n}, \label{eq:HEDMapp1a} \end{equation} where the + sign in the final expression arises from the sign convention for $\vec{\mathcal{E}}_{\rm int}$. In eigenstates of $\Omega$, the expectation value of $H_{\rm EDM}^{\rm eff}$---that is, the energy shift $\Delta E_{\rm EDM}$ due to the eEDM---can be written as \begin{equation} \Delta E_{\rm EDM} = +d_e \mathcal{E}_{\rm int} \left\langle\Sigma\right\rangle = +d_e \left( \mathcal{E}_{\rm int} |\!\left\langle\Sigma\right\rangle\! | \right) \mathrm{sgn}\left(\langle\Sigma\rangle\right). \label{eq:HEDMapp1b} \end{equation} Now, we finally re-introduce the effective electric field $\Eeff$ used throughout the main text of this paper. This is related to the internal field introduced above via \begin{equation} \Eeff \equiv |\!\left\langle\Sigma\right\rangle\! | \mathcal{E}_{\rm int}. \end{equation} We can then use this notation to describe the effective nonrelativistic eEDM interaction, within a given electronic state and eigenstate of $\Omega$ (and otherwise independent of molecular structure), as follows: \begin{eqnarray} \vec{\mathcal{E}}_{\rm eff} \equiv -\Eeff \hat{n}; \\ \vec{d}_e \equiv d_e \vec{S}/ | \left\langle \Sigma \right\rangle |; \\ H_{\rm EDM}^{\rm eff} = -\vec{d}_e \cdot \vec{\mathcal{E}}_{\rm eff} = +d_e \Eeff \Sigma/ | \left\langle \Sigma \right\rangle |; \\ \Delta E_{\rm EDM} = \mathrm{sgn}(\left\langle \Sigma \right\rangle) d_e \Eeff, \label{eq:HEDMapp1d} \end{eqnarray} where the signs in the final expressions arise from the definitions of $\Sigma$ (component of $\vec{S}$ along $\hat{n}$) and $\vec{\mathcal{E}}_{\rm eff}$ (antiparallel to $\hat{n}$). 
All relevant quantities are summarised pictorially in figure~\ref{fig:ACME_signs}. \begin{figure}[!ht] \centering \includegraphics[width=\linewidth]{ACME_signs.pdf} \caption{Summary of sign conventions used in the ACME experiment. All vectors depict expectation values of operators defined in the text, in the states $| H, J=1, \tilde{\N},M\rangle$. Note the difference between scalar $\Omega$ and vector $\vec{\Omega}$. The figure is drawn with a negative $g$-factor, i.e. the magnetic moment $\vec{\mu}$ opposes $\vec{J}$, and with positive values of $d_e$ and $\Eeff$. Energy levels are shown in the centre of the figure --- solid lines show the Stark-shifted levels ($M=0$ levels are unaffected), dashed lines include Zeeman shifts and dotted lines include a non-zero eEDM interaction. Figure inspired by \cite{Lee2009}.} \label{fig:ACME_signs} \end{figure} In most of the theoretical literature on this subject, this energy shift is written in the unambiguous form $\Delta E_{\rm EDM} = +d_e W_d \Omega$. However, there has been no consistent definition in the literature for the relation between $W_d$ and $\Eeff$. In particular, both their relative signs and the dependence of their relative magnitude on the value of $|\Omega |$ (encompassing both the case of one- and two-electron systems) are often defined differently, or imprecisely. In our notation, the expressions above imply a general relationship between $\Eeff$ and $W_d$: \begin{equation} \Eeff = W_d \Omega \mathrm{sgn}(\left\langle \Sigma \right\rangle). \label{eq:genHEDMapp1} \end{equation} This relation is valid for systems with one or two valence electrons (in the `single contributing electron' approximation for the latter case), and regardless of the relative directions of $\vec{\Sigma}$ and $\vec{\Omega}$. Now we apply these general considerations to the specific case of the $H$ state of ThO. Here, since $\left\langle \Sigma \right\rangle \approx -\Omega$, we find that $\Eeff = -W_d$ with our conventions. 
Thus, the energy shifts can be written for ThO as \begin{equation} \Delta E_{\rm EDM} = -d_e\Eeff \Omega. \label{eq:HEDMapp1c} \end{equation} In our experiment, this gives rise to energy shifts, for a given direction of the laboratory electric field $\E$, given by \begin{equation} \langle H, J=1, \tilde{\N},M|H_{\rm eEDM}^{\rm eff}| H, J=1, \tilde{\N},M\rangle=-d_e\E_{\rm eff}M\tilde{\N}\tilde{\E}, \label{eq:HEDMapp2} \end{equation} since in our notation $\Omega=M\tilde{\N}\tilde{\E}$. Then, finally, the experimentally determined energy shift arising from the eEDM is \begin{eqnarray} \omega^{\N\E}_{\rm EDM} = \frac{1}{2}\frac{1}{\tilde{\N}\tilde{\E}} \left[ \langle H, J=1, \tilde{\N},M=+1|H_{\rm eEDM}^{\rm eff}| H, J=1, \tilde{\N},M=+1 \rangle \right. \nonumber \\ \hphantom{H,J=1,\N,} -\left. \langle H, J=1, \tilde{\N},M=-1|H_{\rm eEDM}^{\rm eff}| H, J=1, \tilde{\N},M=-1 \rangle \right] \nonumber \\ \hphantom{\omega^{\N\E}_{\rm EDM}} = -d_e\E_{\rm eff} . \label{eq:HEDMapp2a} \end{eqnarray} \subsection{Scalar-Pseudoscalar Nucleon-Electron Interaction} We next turn to notation describing the $T$-violating scalar-pseudoscalar (SP) interaction between a nucleon and an electron. The relativistic Hamiltonian for this interaction can be written as \begin{equation} H_{\rm SP}=i\frac{G_{\rm F}}{\sqrt{2}}(ZC_{\rm S,p}+NC_{\rm S,n})\gamma_0\gamma_5\rho_{\rm N}(\vec{r}), \end{equation} where $G_F$ is the Fermi coupling constant, $\gamma_i$ are Dirac matrices, $\rho_{\rm N}(\vec{r})$ is the normalised nuclear density, $Z(N)$ is the proton (neutron) number, and $C_{\rm S,p}$ and $C_{\rm S,n}$ are dimensionless constants which describe the interaction strength (relative to that of the ordinary weak interaction) specifically for protons and neutrons, respectively. 
Using the definition \begin{equation} C_{\rm S}=\frac{Z}{A} C_{\rm S,p} + \frac{N}{A} C_{\rm S,n} = \frac{Z}{A} C_{\rm S,p} + \left( 1-\frac{Z}{A}\right) C_{\rm S,n}, \end{equation} where $A=Z+N$, $C_\mathrm{S}$ represents a weighted average of the couplings to protons and neutrons, and is different for every nuclear species. However, since the ratio $Z/A$ is nearly the same for all heavy nuclei used in molecular and atomic EDM experiments (ranging only from $Z/A=0.41$ for $^{133}$Cs to $Z/A=0.39$ for $^{232}$Th), typically a common value for $C_{\rm S}$ is assumed for all experiments of this type. Thus we can write \begin{equation} H_{\rm SP}=i\frac{G_{\rm F}}{\sqrt{2}} AC_{\rm S} \gamma_0\gamma_5\rho_{\rm N}(\vec{r}). \label{eq:HPSapp1} \end{equation} In a given molecular electronic state, this gives rise to a non-relativistic, single-electron effective Hamiltonian of the form $ H_{\rm SP}^{\rm eff} = 2\vec{s}\cdot\hat{n} C_{\rm S} Y_{\rm S}$; the factor of 2 is included so that the maximal energy shifts due to this term have the simple form $\Delta E_\mathrm{SP}^\mathrm{max} = \pm C_\mathrm{S} Y_\mathrm{S}$. By analogy with our discussion of the eEDM Hamiltonian, in a molecular state with $S = 1$ and a `single contributing electron', as in the $H~^3\Delta_1$ state of ThO, we rewrite this in the form \begin{equation} H_{\rm SP}^{\rm eff} = \vec{S}\cdot\hat{n} C_{\rm S} Y_{\rm S}. \end{equation} Hence, the energy shift due to this interaction can be written as \begin{equation} \Delta E_{\rm SP} = \langle \vec{S}\cdot\hat{n} \rangle C_{\rm S} Y_{\rm S} = Y_{\rm S} \left[ \langle \Sigma \rangle / \Omega \right] \Omega, \end{equation} where the term in square brackets is a constant of the molecular state, determined by the fixed relative size and orientation of $\vec{\Sigma}$ and $\vec{\Omega}$, with value $\approx -1$ in the $H~^3\Delta_1$ state of ThO. 
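The near-constancy of $Z/A$ quoted above can be verified with one line of arithmetic; a minimal Python sketch, using only the two nuclides named in the text:

```python
# Z/A for the two nuclei quoted in the text: ^133Cs (Z = 55) and ^232Th (Z = 90).
# Their near-equality justifies assuming a common C_S across such experiments.
nuclides = {"133Cs": (55, 133), "232Th": (90, 232)}
ratios = {name: Z / A for name, (Z, A) in nuclides.items()}

print({name: round(r, 2) for name, r in ratios.items()})
# {'133Cs': 0.41, '232Th': 0.39}
```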
In the literature on molecular eEDM systems, this energy shift is typically written in the simpler form \begin{equation} \Delta E_{\rm SP} = C_{\rm S} W_{\rm S}\Omega. \end{equation} Here, in our notation, $W_{\rm S}\equiv Y_{\rm S}\left[\langle\Sigma\rangle/\Omega\right]$ ($\approx -Y_{\rm S}$ in ThO). However, quantities analogous to $Y_{\rm S}$ (in terms of which the energy shifts depend explicitly on the spin direction) are rarely introduced in the literature; instead, only forms analogous to $W_{\rm S}$ (where the energies depend only on $\Omega$) are used. Our definition of $C_{\rm S}$ follows what was historically the standard notation in the literature. However, in some recent papers (e.g. references \cite{Skripnikov2015,Denis2016}) it is implicitly assumed that the neutron coupling $C_{\rm S,n}$ vanishes. In these papers, the factor $AC_{\rm S}$ in equation \ref{eq:HPSapp1} is replaced by $ZC_{\rm S,p}$ (or its equivalent in a different notation),\footnote{In reference \cite{Skripnikov2015} our $C_{\rm S,p}$ is denoted as $k_{\rm T,P}$ and our $W_{\rm S,p}$ as $W_{\rm T,P}$; in Ref.\ \cite{Denis2016}, our $W_{\rm S,p}$ is denoted simply as $W_{\rm S}$. References \cite{Dzuba2011,DzubaErratum2012} denote our $C_{\rm S}$ as $C^{\rm SP}$ and our $W_{\rm S}$ as $W_{\rm c}$.} and the energy shift is written in the analogous form $\Delta E_{\rm SP} = C_{\rm S,p} W_{\rm S,p}\Omega$. These papers report values of $W_{\rm S,p}$ in the $H$ state of ThO, based on sophisticated calculations of the molecular wavefunctions. However, since there is no particular reason to expect this interaction to couple more strongly to protons than to neutrons, we prefer to report our results in terms of $C_{\rm S}$. To do so, we use the relation $W_S = (A/Z)W_{\rm S,p}$ to determine $W_S$ from the reported values for $W_{\rm S,p}$.
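The conversion between the two conventions is a simple rescaling. The Python sketch below applies $W_S = (A/Z)\,W_{\rm S,p}$ for $^{232}$Th; the input value of $W_{\rm S,p}$ is purely illustrative, not a reported result:

```python
# Rescale a proton-only coupling W_Sp to the nucleon-averaged convention,
# W_S = (A/Z) * W_Sp, for ThO (Z = 90, A = 232).
def w_s_from_w_sp(w_sp, Z=90, A=232):
    return (A / Z) * w_sp

w_sp_example = 1.0  # hypothetical value, arbitrary units -- illustration only
print(f"A/Z = {232 / 90:.3f}, W_S = {w_s_from_w_sp(w_sp_example):.3f}")
```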
Finally, the experimentally determined energy shift arising from the nucleon-electron SP interaction is \begin{equation} \omega^{\N\E}_{\rm SP} = C_{\rm S} W_{\rm S}, \label{eq:HPSapp3} \end{equation} and the total T-violating energy shift is \begin{equation} \omega^{\N\E}_{\rm T}= -d_e\E_{\rm eff} + C_{\rm S} W_{\rm S} = d_e W_d + C_{\rm S} W_{\rm S}. \label{eq:HSP} \end{equation} Note that the sign of the $C_{\rm S}$ term is opposite to that used, incorrectly, in our original paper \cite{Baron2014}. \subsection{Relation to other notations in the literature} Table~\ref{tab:sign_convs} shows some of the conventions used in the literature to describe the $T$-violating electron-nucleon interaction in molecular systems, and how they relate to our conventions. We note in particular three key differences between the (shared) conventions of references \cite{Skripnikov2015,Denis2016}---which currently provide the most accurate values for $W_d$ and $W_S$---and ours. First: these references define $\hat{n}$ in the direction opposite to $\vec{D}$, and hence opposite to ours. This in turn means that their definition of $\Omega$ has opposite sign to ours. Hence, the same physical energy shifts (defined as $\Delta E_{\rm EDM} = W_d\Omega$ both there and here) are obtained only if we take $W_d$ to have sign opposite to that of the reported $W_d$ in these papers. Second: these references define the eEDM energy shift as $\Delta E_{\rm EDM} = +d_e\Eeff \Omega$, while we have shown that in our notation $\Delta E_{\rm EDM} = -d_e\Eeff \Omega$. Here there are two sign differences (one from the overall sign, one from the definition of $\Omega$). Hence, the same physical energy shifts are obtained when taking $\Eeff$ to have the same sign as reported in these papers. Third: these references formulate the scalar-pseudoscalar nucleon-electron interaction in terms of a quantity equivalent to our $W_{\rm S,p}$ rather than our $W_{\rm S}$. 
Hence we must rescale these values as described above, using $W_S = (A/Z)W_{\rm S,p}$. In addition, the same physical energy shifts $\Delta E_{\rm SP} = W_S\Omega$ are obtained only if we take $W_{\rm S,p}$ to have sign opposite to that of the reported $W_{\rm S,p}$ in these papers. \begin{center} \begin{threeparttable} \centering \begin{tabular}{C{2.9cm}C{1.1cm}C{0.6cm}C{4cm}C{4cm}} \hline & $\hat{n}$ & $\vec{\mathcal{E}}_{\rm eff}$ & $\Delta E_{\rm EDM}$ & $\Delta E_{\rm SP}$ \tabularnewline \hline ACME & \includegraphics[width=10pt]{nup.pdf} & \includegraphics[width=10pt]{adown.pdf} & $d_e W_d\Omega=-d_e \E_{\rm eff} \Omega = -\vec{d}_e\cdot\vec{\mathcal{E}}_{\rm eff}$ & $C_{\rm S}W_{\rm S}\Omega$ \tabularnewline Lee et al. \cite{Lee2009} & \includegraphics[width=10pt]{nup.pdf} & \includegraphics[width=10pt]{adown.pdf} & $-\vec{d}_e\cdot\vec{\mathcal{E}}_{\rm eff}$\tnote{a} & \tabularnewline YbF \cite{Kara2012} & \includegraphics[width=10pt]{nup.pdf} & \includegraphics[width=10pt]{adown.pdf} & $-\vec{d}_e\cdot\vec{\mathcal{E}}_{\rm eff}$\tnote{b} & \tabularnewline Kozlov et al. \cite{Kozlov1995,Kozlov2002} & \includegraphics[width=10pt]{ndown.pdf} & & $+W_dd_e\Omega$\tnote{c} & \tabularnewline Skripnikov et al. \cite{Skripnikov2013,Skripnikov2015,Skripnikov2016} & \includegraphics[width=10pt]{ndown.pdf} & & $+W_dd_e\Omega=+d_e\Eeff{\rm sgn}(\Omega)$\tnote{d} & $+W_{T,P}k_{T,P}\Omega$\tnote{e}, where $k_{T,P}=AC_{\rm S}/Z$\tnote{f} \tabularnewline Fleig et al. \cite{Fleig2014,Fleig2013,Denis2016} & \includegraphics[width=17.2pt]{bndown.pdf} & & $+W_dd_e\Omega=+d_e\Eeff[{\rm sgn}(\Omega)]$\tnote{g} & $+W_{P,T}k_S\Omega$\tnote{h}, where $k_S=AC_S/Z$ \tabularnewline Dzuba et al. 
\cite{Dzuba2011a,DzubaErratum2012} & \includegraphics[width=17.2pt]{bndown.pdf} & & $+W_dd_e[{\rm sgn}(\Omega)]=-d_e\Eeff[{\rm sgn}(\Omega)]$\tnote{i} & $+W_cC^{\rm SP}[{\rm sgn}(\Omega)]$\tnote{j} \tabularnewline \hline \end{tabular} \begin{tablenotes} \item[a] Reference \cite{Lee2009}, p.\ 2007 \item[b] Reference \cite{Kara2012}, p.\ 3 \item[c] Reference \cite{Kozlov1995}, above equation 6.27 \item[d] Reference \cite{Skripnikov2015}, equation 1 and following \item[e] Reference \cite{Skripnikov2015}, equation 4 \item[f] Reference \cite{Skripnikov2015}, equation 4 and \cite{Dzuba2011a}, equation 25 and following \item[g] Reference \cite{Fleig2014}, equation 1 and Reference \cite{Fleig2013} equations 2--4 \item[h] Reference \cite{Denis2015}, equations 3 and 4 \item[i] Reference \cite{Dzuba2011a}, equation 24 and table IV \item[j] Reference \cite{Dzuba2011a}, equation 25 \end{tablenotes} \par \protect\caption{Summary of the different conventions used in some of the literature relating to eEDM measurements/theory. Where entries are left blank the convention is not stated in the reference provided. Quantities in square brackets are not explicitly stated in the references but are implied. In some cases, nomenclature has been modified for consistency. Footnotes provide specific references for the equations shown.} \label{tab:sign_convs} \end{threeparttable} \end{center}
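As a cross-check of the sign bookkeeping summarized above, the relations $\Omega=M\tilde{\N}\tilde{\E}$ and $\Delta E_{\rm EDM}=-d_e\Eeff\Omega$, together with the definition of $\omega^{\N\E}_{\rm EDM}$, can be enumerated mechanically. A minimal Python sketch, with $d_e$ and $\Eeff$ set to unity in arbitrary units:

```python
from itertools import product

d_e, E_eff = 1.0, 1.0  # positive, arbitrary units; only the signs matter here

def delta_E_edm(M, N_tilde, E_tilde):
    """Delta E_EDM = -d_e * E_eff * Omega, with Omega = M * N_tilde * E_tilde
    (our convention for the H state of ThO)."""
    return -d_e * E_eff * (M * N_tilde * E_tilde)

# omega_EDM = [Delta E(M=+1) - Delta E(M=-1)] / (2 * N_tilde * E_tilde)
# must equal -d_e * E_eff for every switch setting (N_tilde, E_tilde):
for N_tilde, E_tilde in product((+1, -1), repeat=2):
    omega = (delta_E_edm(+1, N_tilde, E_tilde)
             - delta_E_edm(-1, N_tilde, E_tilde)) / (2 * N_tilde * E_tilde)
    assert omega == -d_e * E_eff
print("omega_EDM = -d_e * E_eff for all four switch states")
```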
\section{Introduction} Many of the colloidal systems that have been used to study the glass transition are polydisperse \cite{gasser2009}. While monodisperse colloidal fluids crystallize very easily, with the introduction of a size polydispersity they become good glassformers \cite{vanmegen1993, vanmegen1994, schope2006, schope2007, pham2001, pusey2009, zaccarelli2015, brambilla2009}. As a matter of fact, the degree of polydispersity $\delta$, defined as the standard deviation of the particle diameter divided by the mean particle diameter, may strongly affect glassy dynamics. For example, for three-dimensional hard-sphere colloids, it has been shown that for moderate polydispersity $\delta < 10\%$ a dynamic freezing is typically seen for a packing fraction $\phi_{\rm g} \approx 0.58$, while for $\delta \gtrsim 10\%$, the dynamics are more heterogeneous with the large particles undergoing a glass transition at $\phi_{\rm g}$ while the small particles are still mobile (note that this result is dependent on the distribution of particle diameters) \cite{zaccarelli2015}. An interesting finding regarding the effect of polydispersity on the dynamics has been reported in a simulation study of a two-dimensional Lennard-Jones model \cite{klochko2020}. Here, Klochko {\it et al.}~show that polydispersity is associated with composition fluctuations that, even well above the glass-transition temperature, lead to a two-step relaxation of the dynamic structure factor at low wavenumbers and a long-time tail in the time-dependent heat capacity. These examples demonstrate that polydispersity and the specific distribution of particle diameters may strongly affect the static and dynamic properties of glassforming fluids. In a particle-based computer simulation, one can assign to each particle $i$ a ``diameter'' $\sigma_i$. 
Note that in the following the diameter of a particle does not refer to the geometric diameter of a hard sphere; in a more general sense, it is a parameter with the dimension of a length that appears in the interaction potential between soft spheres (see below). To realize a polydisperse system in a simulation of $N$ particles, one selects the $N$ particle diameters such that their histogram approximates a desired distribution density $f(\sigma)$. Here, two approaches have been used in previous simulation studies. In a stochastic method, referred to as model $\mathcal{S}$ in the following, one uses random numbers to independently draw each diameter $\sigma_i$ from the distribution $f$. As a consequence, one obtains a ``configuration'' of particle diameters that differs from sample to sample. Alternatively, to avoid this disorder, one can choose the $N$ diameters in a deterministic manner, i.e.~one defines a map $(f,\,N) \mapsto (\sigma_1,\dots,\sigma_N)$, which uniquely determines $N$ diameter values. In the following, we refer to this approach as model $\mathcal{D}$. The diameters in model $\mathcal{D}$ should be selected such that in the limit $N\to\infty$ the histogram of diameters converges to $f$, as is the case for model $\mathcal{S}$. Unlike model $\mathcal{S}$, each sample of size $N$ of model $\mathcal{D}$ has exactly the same realization of particle diameters. Recent simulation studies on polydisperse glassformers have either used model $\mathcal{S}$ (see, e.g., Refs.~\cite{zaccarelli2015, klochko2020, leocmach2013, ingebrigtsen2015, ingebrigtsen2021, ninarello2017, guiselin2020overlap, vaibhav2022, lamp2022}) or model $\mathcal{D}$ schemes (see, e.g., Refs.~\cite{voigtmann2009, weysser2010, santen2001liquid}). However, a systematic study comparing the two approaches is lacking.
This is especially important when one considers states of glassforming liquids at very low temperatures (or high packing fractions) where dynamical heterogeneities are a dominant feature of structural relaxation. For polydisperse systems, such deeply supercooled liquid states have only recently become accessible in computer simulations, using the Swap Monte Carlo technique \cite{tsai1978structure, grigera2001fast}. For these states, the additional sample-to-sample fluctuations in model $\mathcal{S}$ are expected to strongly affect static and dynamic fluctuations in the system, as quantified by appropriate susceptibilities. In this work, we compare a model $\mathcal{S}$ to a model $\mathcal{D}$ approach for a polydisperse glassformer, using molecular dynamics (MD) computer simulation in combination with the Swap Monte Carlo (SWAP) technique. This hybrid scheme allows us to equilibrate samples at very low temperatures, far below the critical temperature of mode coupling theory. We analyze static and dynamic susceptibilities and their dependence on temperature $T$ and system size $N$, keeping the number density constant. We show that in the thermodynamic limit, $N\to \infty$, the sample-to-sample fluctuations of model $\mathcal{S}$ lead to a \textit{finite} static disorder susceptibility of extensive observables. We demonstrate this result numerically for the potential energy. Moreover, we analyze fluctuations of a time-dependent overlap correlation function $Q(t)$ via a dynamic susceptibility $\chi(t)$. At low temperatures, $\chi$ in model $\mathcal{S}$ is strongly enhanced when compared to the one in model $\mathcal{D}$. This finding indicates that it is crucial to carefully analyze the disorder due to size polydispersity when one uses a model $\mathcal{S}$ approach. In the next section \ref{sec:models}, we introduce the model for a polydisperse soft-sphere system and define the models $\mathcal{S}$ and $\mathcal{D}$.
The main details of the simulations are given in Sec.~\ref{sec:simulation_details}. Then, Sec.~\ref{sec:thermodynamics} is devoted to the analysis of static fluctuations of the potential energy. Here, we discuss in detail thermal fluctuations in terms of the specific heat $C_V(T)$ and static sample-to-sample fluctuations by a disorder susceptibility. In Sec.~\ref{sec:structural_dynamics}, dynamic fluctuations of the overlap function $Q(t)$ are investigated. Finally, in Sec.~\ref{sec:conclusions}, we summarize and draw conclusions. \section{Polydisperse model system and choice of diameters} \label{sec:models} {\it Particle interactions.} As a model glassformer, we consider a polydisperse non-additive soft-sphere system of $N$ particles in three dimensions. This model has been proposed by Ninarello {\it et al.}~\cite{ninarello2017}. The particles are placed in a cubic box of volume $V=L^3$, where $L$ is the linear dimension of the box. Periodic boundary conditions are imposed in the three spatial directions. The particles have identical masses $m$ and their positions and velocities are denoted by ${\bf r}_i$ and ${\bf v}_i$, $i=1, \dots, N$, respectively. The time evolution of the system is given by Hamilton's equations of motion with Hamiltonian $H = K + U$. Here, $K = \sum_{i=1}^N {\bf p}_i^2/(2m)$ is the total kinetic energy and ${\bf p}_i = m {\bf v}_i$ the momentum of particle $i$. Interactions between the particles are pairwise such that the total potential energy $U$ can be written as \begin{equation} U = \sum_{i=1}^{N-1} \sum_{j>i}^N u(r_{ij}/\sigma_{ij}) \, . \label{eq_potentialenergy} \end{equation} Here the argument of the interaction potential $u$ is $x=r_{ij}/\sigma_{ij}$, where $r_{ij} = |\vec{r}_i - \vec{r}_j|$ denotes the magnitude of the distance vector between particles $i$ and $j$. The parameter $\sigma_{ij}$ is related to the ``diameters'' $\sigma_i$ and $\sigma_j$, respectively, as specified below.
The pair potential $u$ is given by \begin{equation} u(x) = u_0 \left(x^{-12} + c_0 + c_2 x^2 + c_4 x^4 \right) \, \Theta(x_c - x) \, , \label{eq:U_pair} \end{equation} where the Heaviside step function $\Theta$ introduces a dimensionless cutoff $x_c = 1.25$. The unit of energy is defined by $u_0$. The constants $c_0=-28 /x_c^{12}$, $c_2=48/x_c^{14}$, and $c_4=-21/ x_c^{16}$ ensure continuity of $u$ at $x_c$ up to the second derivative. We consider a polydisperse system, i.e.~each particle is allowed to have a different diameter $\sigma_i$. In the following, lengths are given in units of the mean diameter $\bar{\sigma}$, to be specified below. A non-additivity of the particle diameters is imposed in the sense that \begin{equation} \sigma_{ij} = \frac{\sigma_i + \sigma_j}{2} \left( 1 - 0.2 |\sigma_i - \sigma_j| \right) \, . \label{eq:non_additivity} \end{equation} This non-additivity has been introduced to suppress crystallization \cite{ninarello2017}; indeed, no crystallization occurs down to temperatures far below the critical temperature of mode coupling theory. {\it Choice of particle diameters.} The diameters $\sigma_i$ of the particles are chosen according to two different protocols. In model $\mathcal{S}$, each diameter is drawn independently from the same probability density $f(\sigma)$. In model $\mathcal{D}$, the diameters for a system of size $N$ are chosen in a deterministic manner such that their histogram approximates $f$ in the limit $N\to \infty$. As in Ref.~\cite{ninarello2017}, we consider a function $f(\sigma)\sim \sigma^{-3}$. In the case of an additive hard-sphere system, this probability density ensures that within each diameter interval of constant width the same volume is occupied by the spheres.
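The smoothness of the truncated potential in Eq.~(\ref{eq:U_pair}) is easy to verify numerically; a short Python sketch (with $u_0 = 1$) checking that $u$, $u'$, and $u''$ all vanish at the cutoff:

```python
# Check that u(x) = x^-12 + c0 + c2 x^2 + c4 x^4 (u0 = 1, x < x_c = 1.25)
# vanishes at the cutoff together with its first and second derivatives.
xc = 1.25
c0, c2, c4 = -28 / xc**12, 48 / xc**14, -21 / xc**16

u   = lambda x: x**-12 + c0 + c2 * x**2 + c4 * x**4
du  = lambda x: -12 * x**-13 + 2 * c2 * x + 4 * c4 * x**3   # u'(x)
ddu = lambda x: 156 * x**-14 + 2 * c2 + 12 * c4 * x**2      # u''(x)

for f in (u, du, ddu):
    assert abs(f(xc)) < 1e-12   # continuity up to the second derivative
print("u, u', u'' vanish at x_c")
```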
\textit{Model $\mathcal{S}$.} For model $\mathcal{S}$, particle diameters $\sigma_i$ are independently and identically distributed, each according to the same distribution density \begin{equation} f(\sigma) = A \sigma^{-3} \mathbf{1}_{[\sigma_{\rm m}, \sigma_{\rm M}]}(\sigma) \,. \label{eq_fsigma} \end{equation} Here $\mathbf{1}_B(\sigma)$ denotes the indicator function, being one if $\sigma \in B$ and $0$ otherwise. The normalization $\int f(\sigma)\;\text{d}\sigma = 1$ is provided by the choice $A = 2 / (\sigma_{\rm m}^{-2} - \sigma_{\rm M}^{-2})$. We define the unit of length as the expectation value of the diameter, \begin{equation} \bar{\sigma} = \int \sigma f(\sigma) ~\text{d}\sigma \, , \label{eq:sigma_expectation} \end{equation} which implies $\sigma_{\rm M} = \sigma_{\rm m} /(2 \sigma_{\rm m} - 1)$. We set the lower diameter bound to $\sigma_{\rm m} = 29/40 = 0.725$. Thus, the upper bound is given by $\sigma_{\rm M} = 29/18 = 1.6\overline{1}$ and the amplitude in Eq.~(\ref{eq_fsigma}) is $A = 29/22 = 1.3\overline{18}$. Note that the ratio $\sigma_{\rm M}/\sigma_{\rm m} = 20/9 = 2.\overline{2}$, chosen in this work, deviates by less than 0.24\% from the values $2.219$ and $2.217$ reported in Refs.~\cite{ninarello2017} and \cite{RFIM_in_glassforming_liquid}, respectively. The degree of polydispersity $\delta$ can be defined via the equation $\delta^2 = \int (s - \bar{\sigma})^2 f(s) \text{d}s/\bar{\sigma}^2$ and has the value $\delta \approx 22.93\%$ in our case. In practice, random numbers $\sigma$ following a distribution $f$ can be generated from a uniform distribution on the interval $[0,1]$ via the method of inversion of the cumulative distribution function (CDF). The CDF is defined as \begin{equation} F(\sigma) = \int_{-\infty}^\sigma \, f(s) \;\text{d}s \, . \label{eq:CDF_sigma} \end{equation} Its codomain is the interval $[0,1]$. Now the idea is to use a uniform random number $Y \in [0,1]$ to select a point on the codomain of $F$.
Then, via the inverse of the CDF, $F^{-1}: [0,1] \to [\sigma_{\rm m}, \sigma_{\rm M}]$, one can map $Y$ to the number \begin{equation} \sigma = F^{-1}(Y) = \left(\frac{1}{\sigma_{\rm m}^2} - \frac{2}{A} Y \right)^{-1/2} \, , \label{eq_sigmai} \end{equation} which follows the distribution $f$ as desired. The empirical CDF, $F_N$, associated with a sample of $N$ diameter values, reads \begin{equation} F_N(\sigma) = N^{-1} \sum_{i=1}^N \mathbf{1}_{\left( -\infty, \sigma \right]}(\sigma_i). \label{eq:CDF_empirical_sigma} \end{equation} Since for model $\mathcal{S}$ the diameters $\sigma_i$ are independently and identically distributed according to the CDF $F$, the following relation holds for all $\sigma \in \mathbb{R}$, \begin{equation} \lim_{N \to \infty} F_N^{\mathcal{S}}(\sigma) \stackrel{\textit{almost surely}}{=} F(\sigma) \,. \label{eq:CDF_S_convergence} \end{equation} This follows from the strong law of large numbers. \textit{Additive packing fraction.} To a hard-sphere sample with particle diameters $\sigma_i$, $i=1, \dots, N$, one can assign the additive hard-sphere packing fraction \begin{equation} \phi_\mathrm{hs} = \frac{1}{V} \sum_{i=1}^{N} \frac{\pi}{6} \sigma_i^3. \label{eq_phihs} \end{equation} For model $\mathcal{S}$, the value of $\phi_{\rm hs}$ fluctuates among independent samples of size $N$ around the expectation value \begin{equation} \phi^\infty_\mathrm{hs} := \mathrm{E}^\mathcal{S}[\phi_{\rm hs}] = \frac{\pi n}{6} A \left( \sigma_{\rm M} - \sigma_{\rm m} \right) \approx 0.612 \,. \label{eq:expectation_packing_fraction} \end{equation} Here $n=N/V$ is the number density and the expectation $\mathrm{E}^\mathcal{S}[\,.\,]$ is calculated with respect to the diameter distribution $\prod_{i=1}^{N} f(\sigma_i)$ on the global diameter space.
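The inverse-CDF sampling of Eq.~(\ref{eq_sigmai}) can be sketched in a few lines of Python (standard library only); the sample averages recover $\bar{\sigma}=1$ and $\phi^\infty_{\rm hs}\approx 0.612$ (at $n=1$) to within statistical error:

```python
import math
import random

# Model S: sample diameters from f(sigma) = A sigma^-3 on [sigma_m, sigma_M]
# by inverting the CDF, using the parameters given in the text.
sigma_m, sigma_M = 29 / 40, 29 / 18
A = 2 / (sigma_m**-2 - sigma_M**-2)

def draw_diameter(rng):
    Y = rng.random()                              # uniform on [0, 1)
    return (sigma_m**-2 - 2 * Y / A) ** -0.5      # inverse CDF, Eq. (eq_sigmai)

rng = random.Random(42)
sigmas = [draw_diameter(rng) for _ in range(200_000)]

mean = sum(sigmas) / len(sigmas)
phi_hs = math.pi / 6 * sum(s**3 for s in sigmas) / len(sigmas)  # at n = 1
assert all(sigma_m - 1e-9 < s < sigma_M + 1e-9 for s in sigmas)
print(f"mean diameter = {mean:.3f} (exp. 1), phi_hs = {phi_hs:.3f} (exp. 0.612)")
```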
The variance of $\phi_{\rm hs}$ can be written as \begin{equation} \mathrm{Var}^\mathcal{S}(\phi_{\rm hs}) = N^{-1} \left(\frac{\pi n}{6}\right)^2 \mathrm{Var}^\mathcal{S}(\sigma^3) \, , \label{eq:fluctuations_phi} \end{equation} where $\mathrm{Var}^\mathcal{S}(\sigma^3)$ is the variance of $\sigma_i^3$ for a single particle. The fluctuations $\mathrm{Var}^\mathcal{S}(\phi_{\rm hs}) \propto N^{-1}$ vanish for $N\to\infty$. Beyond that, the disorder susceptibility \begin{equation} \chi_\mathrm{dis}^\mathcal{S}[\phi_\mathrm{hs}] = N \mathrm{Var}^\mathcal{S}(\phi_{\rm hs}) = \textit{Const} > 0 \end{equation} is constant and finite for model $\mathcal{S}$. In Sec.~\ref{Section_disorder_fluctuations}, the disorder fluctuations for model $\mathcal{S}$ will be discussed and analyzed in more depth. Note that $\phi_\mathrm{hs}$ is not an appropriate measure for a non-additive polydisperse model that we use in our work. Therefore, later on, we will define an effective packing fraction $\phi_\mathrm{eff}$ to account for non-additive particle interactions. \begin{figure} \includegraphics{fig1a.pdf} \includegraphics{fig1b.pdf} \caption{a) Histogram of $N = 500$ particle diameters $\sigma_i$ of models $\mathcal{S}$ (blue) and $\mathcal{D}$ (red), respectively. For model $\mathcal{S}$ a single realization is shown, where each $\sigma_i$ is drawn independently from the density $f(\sigma)$ (green). In both histograms $70$ bins are used. The vertical arrows indicate the minimum and maximum diameters, $\sigma_{\rm m}$ and $\sigma_{\rm M}$, respectively. b) Cumulative distribution function (CDF) $F$ (green) and empirical CDF $F_N^\mathcal{D}$ for model $\mathcal{D}$ (red) as a function of diameter $\sigma$ for the example $N=10$. The diameters $\sigma_i$ are constructed from Eqs.~(\ref{eq_hi}-\ref{eq:poly_dispersity_diameters_D}), as graphically illustrated for $\sigma_6$. 
\label{fig1}} \end{figure} \textit{Model $\mathcal{D}$.} For model $\mathcal{D}$, we also use the CDF $F$ to obtain the particle diameters $\sigma_i$, $i= 1, \dots, N$, but now we generate them in a deterministic manner. Our construction will satisfy the following three conditions: \begin{enumerate} \item The construction is deterministic. The system size $N$ uniquely defines the diameters, \begin{equation} N~\mapsto~\sigma_1, \dots, \sigma_N. \end{equation} \item Convergence: The empirical CDF $F_N^\mathcal{D}$ approximates $F$. The convergence is uniform, \begin{equation} \lim_{N \to \infty} F_N^{\mathcal{D}} \stackrel{\textit{uniform}}{=} F \,. \end{equation} Thus the models $\mathcal{S}$ and $\mathcal{D}$ are consistent. \item Constraint: For a given one-particle property $\theta(\sigma)$ of the diameter, the following constraint is fulfilled: \begin{equation} \frac{1}{N} \sum_{i=1}^{N} \theta(\sigma_i) = \mathrm{E}^\mathcal{S}[\,\theta\,]. \label{eq:constraint_model_D} \end{equation} This means that the empirical mean of the function $\theta(\sigma_i)$ equals the corresponding expectation $E^\mathcal{S}[\,\theta(\sigma_i)\,]$ in model $\mathcal{S}$. To ensure this, $\theta$ is required to be a strictly monotonic function in $\sigma$. \end{enumerate} For our work, we use $\theta(\sigma) = \frac{\pi}{6} \sigma^3$, inspired by the additive hard-sphere packing fraction, cf.~Eq.~(\ref{eq_phihs}). Here, Eq.~(\ref{eq:constraint_model_D}) ensures that $\phi_\mathrm{hs}$ has the same value for any $N$, \begin{equation} \phi_\mathrm{hs}^\mathcal{D} = \mathrm{E}^\mathcal{S}[\phi_\mathrm{hs}] \equiv \phi_\mathrm{hs}^\infty. \end{equation} So, how do we define the $N$ diameters $\sigma_i$ in the framework of model $\mathcal{D}$? First, we introduce $N+1$ equidistant nodes along the codomain of $F$, \begin{align} h_i = i/N, && i=0, \dots, N.
\label{eq_hi} \end{align} Their pre-images $s_i$ are found on the domain of $F$, \begin{equation} s_i = F^{-1}(h_i) \label{eq:poly_dispersity_diameters_D_si} \, . \end{equation} We then define particle diameters $\sigma_i$, $i=1, \dots, N$, via \begin{equation} \theta(\sigma_i) = N \int_{s_{i-1}}^{s_i} \theta(\sigma) f(\sigma)\,\mathrm{d}\sigma \, . \label{eq:poly_dispersity_diameters_D} \end{equation} Since $\theta$ is assumed to be strictly monotonic, its inverse $\theta^{-1}$ exists and $\sigma_i$ is uniquely defined by Eq.~(\ref{eq:poly_dispersity_diameters_D}). By summing over $i$ the constraint Eq.~(\ref{eq:constraint_model_D}) is fulfilled. The proof of the uniform convergence $\lim_{N \to \infty} F_N^{\mathcal{D}} = F$ is presented in Appendix~\ref{app:Convergence_CDF_D}. Note the analytical nature of the convergence for model $\mathcal{D}$ in contrast to the stochastic one for model $\mathcal{S}$, cf.~Eq.~(\ref{eq:CDF_S_convergence}). Equation~(\ref{eq:poly_dispersity_diameters_D}) with the choice $\theta(\sigma) = \frac{\pi}{6} \sigma^3$ is a sensible constraint for an additive hard-sphere system. For our non-additive soft-sphere system it is a minor tweak and not an essential condition. Another reasonable choice would be $\theta(\sigma) = \sigma$, which ensures that the empirical mean of the diameters exactly equals the unit of length $\bar{\sigma}$. Alternatively, one could ignore the constraint Eq.~(\ref{eq:constraint_model_D}) and thus also Eq.~(\ref{eq:poly_dispersity_diameters_D}) entirely and define $\sigma_i = s_i$ via Eq.~(\ref{eq:poly_dispersity_diameters_D_si}) -- note that one obtains $N+1$ diameters in this case. The latter approach was used in Ref.~\cite{santen2001liquid}. We expect that all these options are equivalent in the limit $N \to \infty$. Figure \ref{fig1}a illustrates the distribution of diameters for the models $\mathcal{S}$ and $\mathcal{D}$. 
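The deterministic construction of Eqs.~(\ref{eq_hi})--(\ref{eq:poly_dispersity_diameters_D}) is compact in code. A Python sketch with $\theta(\sigma) = \frac{\pi}{6}\sigma^3$, for which the integral in Eq.~(\ref{eq:poly_dispersity_diameters_D}) has the closed form $(\pi A/6)(s_i - s_{i-1})$, so that $\phi_{\rm hs}^\mathcal{D} = \phi_{\rm hs}^\infty$ holds exactly for every $N$:

```python
import math

# Model D: deterministic diameters for f(sigma) = A sigma^-3 on [sigma_m, sigma_M],
# with theta(sigma) = pi sigma^3 / 6 (parameters as in the text).
sigma_m, sigma_M = 29 / 40, 29 / 18
A = 2 / (sigma_m**-2 - sigma_M**-2)

def F_inv(y):                                    # inverse CDF
    return (sigma_m**-2 - 2 * y / A) ** -0.5

def model_D_diameters(N):
    # Nodes h_i = i/N on the codomain of F, pre-images s_i = F^-1(h_i);
    # then theta(sigma_i) = N * int_{s_{i-1}}^{s_i} theta(sigma) f(sigma) d sigma
    # = N * (pi A / 6) * (s_i - s_{i-1}), i.e. sigma_i = (N A (s_i - s_{i-1}))^(1/3).
    s = [F_inv(i / N) for i in range(N + 1)]
    return [(N * A * (s[i] - s[i - 1])) ** (1 / 3) for i in range(1, N + 1)]

N = 1000
sigmas = model_D_diameters(N)
phi_hs = math.pi / 6 * sum(x**3 for x in sigmas) / N          # at n = 1
phi_inf = math.pi / 6 * A * (sigma_M - sigma_m)               # expected value
assert abs(phi_hs - phi_inf) < 1e-12                          # constraint fulfilled
print(f"phi_hs = {phi_hs:.4f} for any N")
```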
In each case, we show one histogram for $N=500$ particles, in comparison to the distribution density $f$. For a meaningful comparison, we have chosen the same number of 70 bins for both histograms. Since model $\mathcal{S}$ is of stochastic nature, we show the histogram for a single realization of diameters. In contrast, for model $\mathcal{D}$ the histogram at a given $N$ and bin number is uniquely defined (assuming an equidistant placement of bins on $[\sigma_\mathrm{m},\sigma_\mathrm{M}]$). The fluctuations around $f$ for model $\mathcal{S}$ appear to be larger than for $\mathcal{D}$. In the following paragraph ``Order of convergence'', we put this finding on an analytical basis. Figure~\ref{fig1}b illustrates the construction of diameters $\sigma_i$ for model $\mathcal{D}$, based on the CDF $F$, for a small sample size $N=10$. For the resulting diameters the empirical CDF $F_N^\mathcal{D}$ is shown. {\it Order of convergence.} Having established the convergence $\lim_{N\to\infty} F_N = F$ for models $\mathcal{S}$ and $\mathcal{D}$, we now compare their order of convergence. To this end, we calculate $\Delta F$, defined as the square-root of the mean squared deviation between $F_N$ and $F$, \begin{equation} \Delta F = (\mathrm{E}[(F_N - F)^2])^{1/2} \, . \label{eq:Delta_F} \end{equation} Here, $\mathrm{E}[\,.\,]$ refers to the expectation with respect to the global diameter distribution. For model $\mathcal{D}$, the expectation $\mathrm{E}[\,.\,]$ is trivial and we obtain $\Delta F^\mathcal{D} = |F_N^\mathcal{D} - F|$. As shown in the Appendices \ref{app:Convergence_CDF_D} and \ref{app:CDF_order_of_convergence}, the results for model $\mathcal{D}$ and $\mathcal{S}$ are respectively \begin{align} \Delta F^\mathcal{D} &\leq N^{-1}\,,\\ \Delta F^\mathcal{S} &= \left(F(1-F)\right)^{1/2} N^{-1/2}\,. \end{align} This means that the order of convergence for model $\mathcal{D}$ is at least $1$, in contrast to model $\mathcal{S}$ where the order is only $1/2$.
In this aspect, model $\mathcal{D}$ is superior to model $\mathcal{S}$, since its diameter distribution approaches the thermodynamic limit faster. Numerically, from the equations above, one has $\max_\sigma \Delta F^\mathcal{D} \leq \max_\sigma \Delta F^\mathcal{S}$ already for $N \geq 4$. \section{Simulation details} \label{sec:simulation_details} Depending on the protocols introduced below, different particle-based simulation techniques are used, among which are molecular dynamics (MD) simulations, the Swap Monte Carlo (SWAP) method, and the coupling of the system to a Lowe-Andersen thermostat (LA). In the MD simulations, Newton's equations of motion are numerically integrated via the velocity form of the Verlet algorithm, using a time step of $\Delta t = 0.01\,t_0$ (with $t_0 = \bar{\sigma} \sqrt{m/u_0}$ setting the unit of time in the following). We employ the SWAP method in combination with MD simulation~\cite{berthier2019efficient}. To this end, every 25 MD steps, $N$ trial SWAP moves are performed. In a single SWAP move, a particle pair $(i,j)$ is randomly selected, followed by the attempt to exchange their diameters $(\sigma_i,\sigma_j)$ according to a Metropolis criterion. The probability $P_\mathrm{SWAP}$ to accept a SWAP trial as a function of $T$ is shown in Fig.~\ref{fig2}. It indicates that even deep in the glassy state (far below the glass-transition temperature $T_{\rm g}^\mathrm{SWAP} \approx 0.06$, which we will define later on), the acceptance rate for a SWAP move is still $\gtrsim 4\%$ for $T \geq 0.01$. The latter is the lowest temperature shown here. \begin{figure} \includegraphics{fig2.pdf} \caption{Acceptance rate $P_\mathrm{SWAP}$ of diameter exchange trials as a function of temperature $T$. 
\label{fig2}} \end{figure} During the equilibration protocols, in each step, we couple the system to a Lowe-Andersen thermostat~\cite{Lowe_Andersen_T_different_masses} for identical masses $m$ to reach a target temperature $T$: For each particle pair $(i,j)$ with separation smaller than a cutoff $R_\mathrm{T}$, new velocities are generated with probability $\Gamma \Delta t$ as \begin{equation} {\bf v}_{i/j}^\mathrm{new} = {\bf v}_{i/j} \pm \frac{1}{2}\left( \zeta \sqrt{\frac{2 k_B T}{m}} - ({\bf v}_i -{\bf v}_j)\cdot \hat{\bf r}_{ij} \right) \hat{\bf r}_{ij}, \end{equation} where $\hat{\bf r}_{ij} = \mathbf{r}_{ij}/|\mathbf{r}_{ij}|$ and $\zeta$ is a normally distributed variable with an expectation value of $0$ and a variance of $1$. This means that only the component of the relative velocity parallel to $\hat{\bf r}_{ij}$ is thermalized, preserving both the linear and the angular momentum. We choose $R_\mathrm{T} = x_c$ and $\Gamma = 4$. Both for model $\mathcal{S}$ and model $\mathcal{D}$, we consider different system sizes $N = 256$, $500$, $1000$, $2048$, $4000$, and $8000$ particles at different temperatures $T$. In each case, we prepare $60$ independent configurations as follows: The initial positions are given by a face-centered-cubic lattice (with vacancies in case $N \neq 4 k^3$ for all integers $k$), while the initial velocities have random orientations and a constant magnitude corresponding to a high temperature $T = 5$. The total momentum is set to $\bf{0}$ by subtracting $\sum_{i} {\bf v}_i /N$ from the velocity of each particle. The initial crystal is melted for a simulation time $t_\text{max} = 2000$ with $\Delta t = 0.001$, applying both the SWAP Monte Carlo and the LA thermostat. Then we cool the sample to $T = 0.3$ over the same duration, followed by a run with $\Delta t = 0.01$ over the time $t_\text{max} = 10^5$ to fully equilibrate the sample at the target temperature $T$.
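As a minimal illustration of these two building blocks, the sketch below implements a single Metropolis diameter-swap trial and a single Lowe-Andersen pair update for equal masses (with $k_B = 1$). The callback \texttt{energy\_of} is a placeholder; the actual interaction potential of the model (cf.~Sec.~\ref{sec:models}) is not restated here:

```python
import numpy as np

def swap_move(sigma, energy_of, T, rng):
    """One Metropolis trial exchanging the diameters of a random pair (i, j).

    energy_of(k, sigma) -> interaction energy of particle k at the given
    diameter configuration; a placeholder for the full model energy."""
    i, j = rng.choice(len(sigma), size=2, replace=False)
    trial = sigma.copy()
    trial[i], trial[j] = sigma[j], sigma[i]
    dE = (energy_of(i, trial) + energy_of(j, trial)
          - energy_of(i, sigma) - energy_of(j, sigma))
    if rng.random() < np.exp(min(0.0, -dE / T)):   # Metropolis criterion
        sigma[:] = trial
        return True
    return False

def lowe_andersen_pair(v_i, v_j, r_ij, T, m, rng):
    """Thermalize the relative-velocity component along r_ij (equal masses, k_B = 1)."""
    r_hat = r_ij / np.linalg.norm(r_ij)
    zeta = rng.normal()                            # standard normal variate
    dv = 0.5 * (zeta * np.sqrt(2.0 * T / m) - np.dot(v_i - v_j, r_hat)) * r_hat
    return v_i + dv, v_j - dv                      # total momentum unchanged
```

Because the same increment is added to one particle and subtracted from the other along $\hat{\bf r}_{ij}$, the pair's total momentum, and hence the angular momentum, is conserved in every update.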
After that, we switch off SWAP (to ensure that the mean energy remains constant in the following) and measure a time series $H(t_j)$ of the total energy over a time span of $0.75\,t_\text{max}$. Then we calculate the corresponding mean $H_\mathrm{av}$ and the standard deviation $\mathrm{sd}(H)$, and as soon as the condition $|H(t) - H_\mathrm{av}| < 0.01\,\mathrm{sd}(H)$ is met, we switch off the LA thermostat and perform a microcanonical $NVE$ simulation for the remaining time up to $t=t_\text{max}$. This procedure reduces fluctuations in the final temperature $T$ for subsequent $NVE$ production runs. For the analysis that we present in the following, we mostly compare $NVE$ with SWAP production runs (in both cases without the LA thermostat). In addition, we perform MD production runs coupled to the LA thermostat but without applying SWAP, and accordingly refer to these runs as the LA protocol. For all of these production runs, the initial configurations are the final samples obtained from the equilibration protocol described above. For the LA thermostat and the SWAP Monte Carlo, pseudorandom numbers are generated by the \textit{Mersenne Twister} algorithm~\cite{matsumoto1998mersenne}. For each sample, a different seed is chosen to ensure independent sequences. For a given observable, we determine its $95\%$ confidence interval from its empirical CDF, which is calculated via bootstrapping~\cite{efron1992bootstrap} with $1000$ repetitions. \section{Static fluctuations} \label{sec:thermodynamics} In the following two subsections, ``Thermal fluctuations'' and ``Disorder fluctuations'', we consider two kinds of fluctuations. Thermal fluctuations quantify \textit{intrinsic fluctuations of phase-space variables for a given diameter configuration}. These intrinsic fluctuations are expected to coincide for both models $\mathcal{S}$ and $\mathcal{D}$, provided that $N$ is sufficiently large.
As an example, we study thermal energy fluctuations, as quantified by the specific heat (here, numerical results are only shown for model $\mathcal{D}$). Below, we use this quantity to determine the glass-transition temperatures for the different dynamics. In model $\mathcal{S}$, the dependence of thermally averaged observables on the diameter configuration leads to sample-to-sample fluctuations that are absent in model $\mathcal{D}$. We measure these fluctuations in terms of a disorder susceptibility, exemplified via the potential energy. \subsection{Thermal fluctuations} Let us consider an $N$-particle sample of our system. An observable $O$ that characterizes the state of this sample depends in general on the particle coordinates $r=({\bf r}_1, \dots, {\bf r}_N)$, the momenta $p=({\bf p}_1, \dots, {\bf p}_N)$, and the particle diameters $\sigma = (\sigma_1, \dots, \sigma_N)$. When we denote the phase-space configuration by $q=(r,p)$, we can write the observable as $O = O(q,\sigma)$. Its thermal average can be expressed as \begin{equation} \langle O \rangle(\sigma) = \mathrm{E}(O|\sigma) = \int O(q,\sigma) \rho(q|\sigma)~\mathrm{d}q \, , \end{equation} where $\rho( q | \sigma)$ is a \textit{conditional} phase-space density. In the case of the canonical $NVT$ ensemble, it is given by \begin{equation} \rho( q | \sigma) = Z^{-1} \exp( - H(q|\sigma)/(k_B T) ) \label{eq_condpsd} \end{equation} with $Z = \int \exp( - H(q|\sigma)/(k_B T) ) ~\mathrm{d}q$ the partition function and $H=K+U$ the Hamiltonian, cf.~Sec.~\ref{sec:models}. In the simulations, we compute $\langle O \rangle(\sigma)$ as the average over an equidistant time sequence $q(t_i)$ of $5000$ configurations within a time window $t_{\rm max} = 10^5$. By definition, this approach is valid for an ergodic system, provided that sufficient sampling is ensured. Then, the result \textit{does not} depend on the initial condition $q(0)$.
However, it \textit{does} depend on the realization of $\sigma$ and, of course, the ensemble parameters, e.g.~the temperature $T$. Thermal fluctuations of the observable $O$ can be quantified in terms of the thermal susceptibility \begin{equation} \chi_\mathrm{thm}[O] = \mathrm{Var}(O|\sigma)/N = \langle O^2 - \langle O \rangle^2 \rangle / N \, . \label{eq:thermal_sus} \end{equation} Here the variance $\mathrm{Var}(\,.\,)$ is calculated according to the phase-space density~(\ref{eq_condpsd}). The normalization for $\chi_\mathrm{thm}$ is chosen such that for an extensive observable $O$ we expect finite values for $\lim_{N \to \infty} \chi_\mathrm{thm}[O]$. An important quantity that is related to the thermal susceptibility of the potential energy $U$ is the excess specific heat at constant volume, defined by \begin{equation} C_V = \frac{1}{N} \frac{\partial \langle U \rangle}{\partial T} \, . \label{eq:CV_derivative} \end{equation} In the canonical $NVT$ ensemble, the relation between $C_V$ and the thermal susceptibility $\chi_\mathrm{thm}^{NVT}[U]$ is \begin{equation} C_V = \chi_\mathrm{thm}^{NVT}[U]/T^2 \label{eq:CV_fluctuations_NVT} \, . \end{equation} This formula can be converted to the microcanonical $NVE$ ensemble to obtain \cite{lebowitz1967ensemble} \begin{equation} C_V = \frac{\chi_\mathrm{thm}^{NVE}[U] } {T^2 - (2/3)\chi_\mathrm{thm}^{NVE}[U]} \, . \label{eq:CV_fluctuations_NVE} \end{equation} \begin{figure} \includegraphics{fig3.pdf} \caption{Specific heat $C_V$ as a function of temperature $T$ for model $\mathcal{D}$ with $N=2048$ particles. The solid lines indicate the glass-transition temperatures, corresponding to the microcanonical MD simulations (green, $T_\mathrm{g}^{NVE} = 0.11$) and the simulations with SWAP dynamics (blue and red, $T_\mathrm{g}^\mathrm{SWAP} = 0.06$). Coupling to the LA thermostat but without SWAP is represented by the orange line. The black arrow indicates the Dulong-Petit limit, $C_V = 3/2$. 
\label{fig3}} \end{figure} Figure~\ref{fig3} shows $C_V$ as a function of temperature $T$ for the different dynamics, namely the microcanonical MD via Eq.~(\ref{eq:CV_fluctuations_NVE}), the MD with SWAP using Eqs.~(\ref{eq:CV_derivative}) and (\ref{eq:CV_fluctuations_NVT}), and the MD with LA thermostat employing again Eq.~(\ref{eq:CV_fluctuations_NVT}). At high temperatures, $T\gtrsim 0.11$, the specific heat $C_V$ from the different calculations is in perfect agreement. Upon decreasing $T$, one observes relatively sharp drops in $C_V$ for the microcanonical $NVE$ and the SWAP dynamics. The drops occur at the temperatures $T_\mathrm{g}^{NVE} = 0.11$ and $T_\mathrm{g}^\mathrm{SWAP} = 0.06$, respectively, and indicate the glass transition of the respective dynamics. These estimates of the glass-transition temperatures $T_\mathrm{g}$ are consistent with those obtained from dynamic correlation functions presented in Sec.~\ref{sec:structural_dynamics}. Another conclusion that we can draw from Fig.~\ref{fig3} is that fluctuations in $U$, as quantified by the $C_V$ from the SWAP dynamics simulations, correctly reproduce those in the canonical $NVT$ ensemble. This can be inferred from the coincidence of the blue and the red data points at temperatures $T > T_\mathrm{g}^\mathrm{SWAP}$. For the $NVE$ dynamics at $T < T_\mathrm{g}^{NVE}$, relaxation times become too large to correctly resolve the fluctuations, as quantified by $\chi_\mathrm{thm}^{NVE}[U]$, even though the initial configurations are fully equilibrated samples for $T > T_\mathrm{g}^\mathrm{SWAP}$. We underestimate these fluctuations within our finite simulation time and effectively measure a frequency-dependent specific heat \cite{scheidler2001}. Thus, from the monotonicity of Eq.~(\ref{eq:CV_fluctuations_NVE}), $C_V$ is underestimated as well.
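Equation~(\ref{eq:CV_fluctuations_NVT}) can be checked on a toy system where the answer is known, e.g.\ $N$ independent three-dimensional harmonic oscillators sampled canonically, whose potential-energy specific heat equals the Dulong-Petit value $3/2$ exactly. The sketch below uses $k_B = 1$ and illustrative parameters, not the model of this paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# N independent 3D harmonic oscillators at temperature T (canonical sampling):
# each Cartesian coordinate is Gaussian with variance T / k_spring.
N, T, n_samples, k_spring = 200, 0.1, 5000, 1.0
x = rng.normal(scale=np.sqrt(T / k_spring), size=(n_samples, N, 3))
U = 0.5 * k_spring * np.sum(x**2, axis=(1, 2))   # total potential energy per sample

chi_thm = U.var() / N                 # thermal susceptibility of U, Eq. (thermal_sus)
C_V = chi_thm / T**2                  # fluctuation formula in the NVT ensemble
print(f"C_V = {C_V:.3f}  (exact harmonic value: 3/2)")
```

The $NVE$ counterpart, Eq.~(\ref{eq:CV_fluctuations_NVE}), follows by the same recipe, with $\chi_\mathrm{thm}^{NVE}[U]$ measured in a microcanonical run instead.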
Furthermore, from the coincidence of the green with the orange data points, corresponding to the $NVE$ and LA dynamics, respectively, we can conclude that the LA thermostat correctly reproduces the fluctuations in the canonical $NVT$ ensemble. For both the $NVE$ and the LA dynamics, we recover the Dulong-Petit law, i.e.~for $T \to 0$ the specific heat approaches the value $C_V=3/2$. An exception to this finding is given by the results calculated from the SWAP dynamics. This can be understood by the fact that the SWAP dynamics is associated with fluctuating particle diameters even at very low temperatures; thus the resulting dynamics cannot be described in terms of the harmonic approximation for a frozen solid. \subsection{Disorder fluctuations} \label{Section_disorder_fluctuations} In model $\mathcal{S}$, the Hamiltonian $H(q|\sigma)$ is parameterized by the random variables $\sigma$, and this imposes a quenched disorder onto the system. This leads to fluctuations that can be quantified in terms of a disorder susceptibility that we shall define and analyze in this section. To this end, we first introduce the diameter distribution density for both models, \begin{align} g(\sigma) &= \begin{cases} \prod_{i=1}^N f(\sigma_i), &\mathrm{model}~\mathcal{S},\\ \prod_{i=1}^N \delta_\mathrm{D}(\sigma_i - \sigma_i^\mathcal{D}), &\mathrm{model}~\mathcal{D}\, , \end{cases} \label{eq:density_sigma_global} \end{align} where $\delta_\mathrm{D}$ denotes the Dirac delta function. Let us consider a variable $B = B(\sigma)$. This could be a function such as the additive hard-sphere packing fraction $\phi_{\mathrm{hs}}$ or the thermal average of a phase-space function at a given diameter configuration $\sigma$, e.g.~$\langle U \rangle$. The disorder average of $B$, denoted by $\overline{B}$, is the expectation value of $B$ with respect to the distribution density $g$, \begin{equation} \overline{B} = \mathrm{E}(B) = \int B(\sigma) g(\sigma)~\mathrm{d}\sigma \, .
\end{equation} Note that in our analysis below, disorder averages are calculated as averages over all samples, i.e.~over the $60$ realizations of $\sigma$. Fluctuations of an extensive quantity $B \sim N$ and of its corresponding ``density'' $b = B/N$ can be measured by disorder susceptibilities, defined as \begin{align} \chi_\mathrm{dis}[B] &= \mathrm{Var}(B)/N = \overline{ B^2 - \overline B^2 } / N \,,\\ \chi_\mathrm{dis}[b] &= N \mathrm{Var}(b). \label{eq:disorder_sus} \end{align} The two definitions differ in their normalization, which is chosen such that both scale meaningfully with system size, i.e.~such that $\chi_\mathrm{dis}[B] = \chi_\mathrm{dis}[b]$. For model $\mathcal{D}$, we have $\chi_\mathrm{dis}^\mathcal{D}[B] = 0$ for any $B$. In contrast, for model $\mathcal{S}$ the variable $B(\sigma)$ fluctuates from sample to sample, as quantified by $\chi_\mathrm{dis}[B]$. Here, in general, $\lim_{N\to\infty}\chi_\mathrm{dis}[B] \neq 0$, as exemplified by the fluctuations of the additive packing fraction: In Sec.~\ref{sec:models}, we showed $\mathrm{Var}^\mathcal{S}(\phi_\mathrm{hs}) \propto 1/N$, and thus we have $\chi^\mathcal{S}_\mathrm{dis}[\phi_\mathrm{hs}] = \textit{const} > 0$. \begin{figure} \centering \includegraphics[width=\columnwidth]{fig4a.pdf} \includegraphics[width=\columnwidth]{fig4b.pdf} \caption{a) Mean potential energy $\langle U \rangle(\sigma)$ as a function of temperature $T$. For model $\mathcal{S}$, individual curves for each of the 60 samples are shown for systems with $N=256$ (blue lines) and $N=2048$ (orange lines), and for model $\mathcal{D}$ for the system with $N=256$. b) Disorder susceptibility $\chi_\mathrm{dis}[\langle U \rangle]$ for different values of $N$. \label{fig4}} \end{figure} \textit{Potential energy.} Having introduced the disorder average and susceptibility, we consider the variable $B(\sigma) = \langle U \rangle(\sigma)$, corresponding to the thermal average of the potential energy for a given sample with diameter configuration $\sigma$.
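The two normalizations in Eq.~(\ref{eq:disorder_sus}) can be illustrated with a toy extensive variable built from i.i.d.\ diameters, serving as a stand-in for $\langle U \rangle(\sigma)$; the uniform distribution and the choice $B = \sum_i \sigma_i^2$ are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy extensive variable B(sigma) = sum_i sigma_i^2 with i.i.d. diameters.
# Both normalizations of the disorder susceptibility coincide and approach
# a finite constant as N grows, as in Eq. (chi_dis_finite).
def chi_dis(N, n_samples=1000):
    sigma = rng.uniform(0.8, 1.6, size=(n_samples, N))
    B = np.sum(sigma**2, axis=1)        # extensive: B ~ N
    b = B / N                           # corresponding "density"
    return B.var() / N, N * b.var()

for N in (256, 1000, 4000):
    chi_B, chi_b = chi_dis(N)
    print(f"N={N:5d}  chi_dis[B]={chi_B:.4f}  chi_dis[b]={chi_b:.4f}")
```

In this toy case $\chi_\mathrm{dis}[B] = \chi_\mathrm{dis}[b] = \mathrm{Var}(\sigma^2)$ independently of $N$, mirroring the constancy found above for $\phi_\mathrm{hs}$.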
In Figure~\ref{fig4}a the dependence of $\langle U \rangle(\sigma)$ on temperature $T$ is shown. For a given model and system size $N$, we present $60$ curves corresponding to $60$ independent samples. For model $\mathcal{S}$, results for $N=256$ and $2048$ are shown. Here, the diameter configurations $\sigma$ vary among the samples and thus the potential energy fans out into various curves $\langle U \rangle(T)$. Measured in terms of its variance, the fluctuations of the mean potential energy per particle, $\langle U \rangle(\sigma)/N$, decrease with increasing $N$, as expected. For model $\mathcal{D}$, we show the curves of $60$ independent samples at $N=256$; here, sample-to-sample fluctuations are completely absent and all data collapse onto a single curve. Figure \ref{fig4}b shows the disorder susceptibility $\chi_\mathrm{dis}[\langle U \rangle]$ of model $\mathcal{S}$ for different system sizes. As can be inferred from the figure, $\chi_\mathrm{dis}[\langle U \rangle]$ seems to approach, in a non-monotonic manner, a finite temperature-dependent value in the limit $N\to \infty$, \begin{equation} \lim_{N \to \infty} \chi_\mathrm{dis}^\mathcal{S}[\langle U \rangle] = \textit{Constant}(T) > 0 \, . \label{eq:chi_dis_finite} \end{equation} \textit{Effective packing fraction.} Now, we show that the disorder fluctuations in the potential energy $\langle U \rangle (\sigma)$ and the empirical limit value for $\chi_\mathrm{dis}^\mathcal{S}[\langle U \rangle]$, as given by Eq.~(\ref{eq:chi_dis_finite}), can be explained by fluctuations in a single scalar variable, namely an effective packing fraction $\phi_\mathrm{eff}$. The additive packing fraction $\phi_{\rm hs}$, cf.~Eq.~(\ref{eq_phihs}), is not an appropriate measure of a packing fraction for the non-additive soft-sphere system that we consider in this study. Therefore, we define an effective packing fraction $\phi_\mathrm{eff}$ that takes into account the non-additivity of our model system.
The idea is to assign to each particle $i$ an ``average'' volume $V_i$ that accounts for the non-additive interactions. For this purpose, we first identify all $|\mathcal{N}_i|$ neighbors of $i$ within a given cutoff $r_c$, \begin{equation} \mathcal{N}_i = \left\{ ~j \in \{1,\dots,N\}~| ~j\neq i,~ r_{ij} < r_c \right\}\,. \end{equation} Here $r_c = 1.485$ is chosen, which corresponds to the location of the first minimum of the radial distribution function at the temperature $T=0.3$. Then, the volume $V_i$ of particle $i$ is defined as \begin{equation} V_i = \frac{1}{|\mathcal{N}_i|} \sum_{j \in \mathcal{N}_i} \frac{\pi}{6} \sigma_{ij}^3 \,, \end{equation} where the non-additive diameters $\sigma_{ij}$ are given by Eq.~(\ref{eq:non_additivity}). Now we define an effective packing fraction $\phi_{\rm eff}$ as \begin{equation} \phi_\mathrm{eff} = V^{-1} \sum_{i=1}^{N} V_i \, . \label{eq_phieff} \end{equation} Note that, in contrast to the hard-sphere packing fraction $\phi_{\rm hs}$, the value of the effective packing fraction $\phi_{\rm eff}$ of a given sample depends not only on the diameters $\sigma_i$ but also on the coordinates ${\bf r}_i$. Thus, in our simulations of glassforming liquids, it is a thermally fluctuating variable. Therefore, we will use its thermal average $\langle \phi_\mathrm{eff} \rangle$ in our analysis below. An alternative effective packing fraction can be defined by assigning an average diameter $S_i = \frac{1}{|\mathcal{N}_i|} \sum_{j \in \mathcal{N}_i} \sigma_{ij}$ instead of an average volume $V_i$ to each particle. The corresponding packing fraction is given by \begin{equation} \tilde{\phi}_\mathrm{eff} = V^{-1} \sum_{i=1}^{N} \frac{\pi}{6} S_i^3 \, . \end{equation} Below, we use the effective packing fractions $\phi_{\rm eff}$ and $\tilde{\phi}_{\rm eff}$ to analyse the sample-to-sample fluctuations in model $\mathcal{S}$.
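A direct transcription of $\phi_\mathrm{eff}$, Eq.~(\ref{eq_phieff}), might look as follows. Since Eq.~(\ref{eq:non_additivity}) is not restated in this section, the rule $\sigma_{ij} = \frac{1}{2}(\sigma_i + \sigma_j)(1 - \epsilon|\sigma_i - \sigma_j|)$ with $\epsilon = 0.2$ is used as an illustrative stand-in, and the random configuration merely exercises the function:

```python
import numpy as np

rng = np.random.default_rng(6)

def phi_eff(pos, sigma, box, r_c=1.485, eps=0.2):
    """Effective packing fraction of one configuration in a periodic cubic box.

    The non-additive rule for sigma_ij below is an illustrative stand-in for
    the model's Eq. (non_additivity); eps is a hypothetical parameter."""
    N = len(sigma)
    V_i = np.zeros(N)
    for i in range(N):
        d = pos - pos[i]
        d -= box * np.round(d / box)                  # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        nbrs = np.flatnonzero((r < r_c) & (r > 0.0))  # neighbor set N_i
        if nbrs.size == 0:
            V_i[i] = np.pi / 6.0 * sigma[i] ** 3      # fallback: own volume
            continue
        s_ij = 0.5 * (sigma[i] + sigma[nbrs]) * (1.0 - eps * np.abs(sigma[i] - sigma[nbrs]))
        V_i[i] = np.mean(np.pi / 6.0 * s_ij ** 3)     # "average" volume of particle i
    return V_i.sum() / box ** 3                       # Eq. (eq_phieff), V = box^3

# Exercise the function on a random configuration (not an equilibrated sample):
N, box = 256, 6.0
pos = rng.uniform(0.0, box, size=(N, 3))
sigma = rng.uniform(0.8, 1.6, size=N)
print(f"phi_eff = {phi_eff(pos, sigma, box):.3f}")
```

The alternative measure $\tilde{\phi}_\mathrm{eff}$ follows by replacing the per-neighbor volumes with the cube of the averaged diameter $S_i$.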
Although both definitions lead to similar results, we shall see that $\phi_{\rm eff}$ seems to provide a slightly better characterization of the thermodynamic state of the system than $\tilde{\phi}_{\rm eff}$. \begin{figure} \includegraphics{fig5.pdf} \caption{Reduced effective packing fraction $\langle \phi_\mathrm{eff} \rangle/\phi_{\rm hs}^\infty$ as a function of temperature $T$. The inset zooms into a region around $\langle \phi_\mathrm{eff} \rangle/\phi_{\rm hs}^\infty = 0.775$. \label{fig5}} \end{figure} Figure \ref{fig5} displays the temperature dependence of $\langle \phi_\mathrm{eff} \rangle$. It is almost constant over the whole considered temperature range. This is a plausible result when one considers the weak temperature dependence of the structure of glassforming liquids. As we can infer from the inset of this figure, $\langle \phi_\mathrm{eff} \rangle$ increases mildly from about $0.772$ at $T=0.3$ to about $0.779$ at $T=0.01$. \begin{figure} \centering \includegraphics[width=\columnwidth]{fig6a.pdf} \includegraphics[width=\columnwidth]{fig6b.pdf} \includegraphics[width=\columnwidth]{fig6c.pdf} \caption{a) Scatter plot showing data points $\left(\langle \phi_{\rm eff} \rangle (\sigma)\,,\,\langle U \rangle(\sigma)/N \right)$ for model $\mathcal{S}$ at $T=0.10$ and different system sizes $N$. Each tuple belongs to a particular diameter realization $\sigma$. The red line is obtained via a linear-regression model $\phi \to \langle U \rangle$ with dependent variable $\langle U \rangle$ and regressor $\phi = \langle \phi_{\rm eff} \rangle$ for $N=2048$. Its coefficient of determination is $R^2 \approx 0.984$. b) Coefficient of determination $R^2$ of the linear regression model $\phi \to \langle U \rangle$ as a function of $T$ for $N=8000$, using $\phi = \phi_\mathrm{hs}$ (red triangles), $\langle \phi_{\rm eff} \rangle$ (brown circles), and $\langle \tilde{\phi}_\mathrm{eff} \rangle$ (orange crosses) as regressors $\phi$. 
c) Similar to b), but here $R^2$ as a function of $T$ is shown for the regressor $\phi = \langle \phi_{\rm eff} \rangle$ only, however for different system sizes $N$. \label{fig6}} \end{figure} Now, we will use the variable $\langle \phi_\mathrm{eff} \rangle$ to quantify the sample-to-sample fluctuations of the potential energy per particle $\langle U \rangle(\sigma)/N$. In Fig.~\ref{fig6}a, we show $\langle U \rangle(\sigma)/N$ as a function of the mean packing fraction $\langle \phi_{\rm eff}\rangle(\sigma)$ at the temperature $T = 0.10$. Here, we have used the data for $N=256$, $500$, and $2048$ particles. The plot suggests that the fluctuations of $\langle U \rangle$ can be explained by the variation of $\langle \phi_{\rm eff}\rangle$. We substantiate this finding by calculating the coefficient of determination $R^2$ of a linear-regression fit with dependent variable $\langle U \rangle/N$ and regressor $\langle \phi_{\rm eff}\rangle$. In Fig.~\ref{fig6}b we show $R^2$ as a function of $T$ for the system size $N=8000$. The linear-regression analysis shows that approximately $99.5\%$ of the variance can be explained by $\langle \phi_{\rm eff}\rangle$. This is a striking but physically plausible result, as it shows that a reduction from the $N$ degrees of freedom given by $\sigma$ to the single degree of freedom given by the thermodynamically relevant parameter $\langle \phi_{\rm eff}\rangle$ is sufficient to explain nearly all of the fluctuations. Also included in Fig.~\ref{fig6}b is the coefficient of determination $R^2$ using $\phi = \phi_\mathrm{hs}$ and $\langle \tilde{\phi}_\mathrm{eff} \rangle$ as regressors. While we obtain $R^2\approx 0.95$ for $\phi = \phi_\mathrm{hs}$, i.e.~clearly below the value for $\langle \phi_{\rm eff}\rangle$, the value of $R^2$ for $\langle \tilde{\phi}_\mathrm{eff} \rangle$ is only slightly smaller, $R^2 \approx 0.99$. Thus, among the three measures of the packing fraction, the variable $\langle \phi_{\rm eff}\rangle$ gives the best results.
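The $R^2$ computation used above can be sketched in a few lines; the synthetic $(\langle \phi_\mathrm{eff}\rangle, \langle U \rangle/N)$ pairs below only mimic the situation of Fig.~\ref{fig6}a, with invented slope, spread, and noise level:

```python
import numpy as np

rng = np.random.default_rng(7)

def r_squared(x, y):
    """Coefficient of determination of the least-squares line y = a*x + b."""
    a, b = np.polyfit(x, y, 1)
    residuals = y - (a * x + b)
    return 1.0 - residuals.var() / y.var()

# Synthetic sample-to-sample data mimicking Fig. 6a: u = <U>/N depends almost
# linearly on the effective packing fraction phi, plus small unexplained noise.
n_samples = 60
phi = rng.normal(0.775, 0.002, size=n_samples)
u = 1.3 + 25.0 * (phi - 0.775) + rng.normal(0.0, 0.004, size=n_samples)
print(f"R^2 = {r_squared(phi, u):.3f}")
```

With an intercept included, the ratio of residual to total variance equals $1 - R^2$, so values close to one signal that the regressor explains nearly all sample-to-sample variance.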
Note that the glass transition at $T_{\rm g}^{\rm SWAP} \approx 0.06$ is associated with a small drop of $R^2$ for the effective packing fractions. Figure \ref{fig6}c displays the temperature dependence of $R^2$ for $\langle \phi_{\rm eff}\rangle$ for different system sizes $N$. The plot indicates a significant decrease of $R^2$ with decreasing $N$, especially at low temperatures around the glass-transition temperature $T_{\rm g}^{\rm SWAP} \approx 0.06$. The reason is that a linear relationship between $\langle U \rangle(\sigma)/N$ and $\langle \phi_{\rm eff}\rangle$ is expected to hold only in the vicinity of the disorder-averaged value $\overline{\langle \phi_\mathrm{eff} \rangle}$. For small system sizes, however, relatively large nonlinear deviations from this value occur, which are reflected in a lower value of the coefficient of determination $R^2$. Moreover, for small $N$, the discretized nature of the diameter configuration no longer allows a description in terms of a single variable such as $\langle \phi_{\rm eff}\rangle$. Our empirical results justify the idea of replacing the dependency of $\langle U \rangle$ on the diameter configuration $\sigma$ by one on the single parameter $\langle \phi_{\rm eff}\rangle$, \begin{align} \langle U \rangle (\sigma) &\approx U^* \left(\langle \phi_\mathrm{eff} \rangle (\sigma) \right) \nonumber \\ &\approx U^*\left( \overline{\langle \phi_\mathrm{eff} \rangle} \right) + \frac{\partial U^* }{\partial \phi} \big|_{\phi= \overline{\langle \phi_\mathrm{eff} \rangle}} ( \langle \phi_\mathrm{eff} \rangle - \overline{\langle \phi_\mathrm{eff} \rangle} ). \label{eq_taylor_expansion_U} \end{align} Here $U^*$ is an unknown function of a scalar variable.
According to the Taylor expansion above, fluctuations in $\langle U \rangle$ are inherited from those in $\langle \phi_\mathrm{eff} \rangle$ as \begin{align} \mathrm{Var}( U^*) \approx \left(\frac{\partial U^* }{\partial \phi}\right)^2 \big|_{\phi= \overline{\langle \phi_\mathrm{eff} \rangle}} \mathrm{Var}(\langle \phi_\mathrm{eff} \rangle )\, . \label{eq_variance_U} \end{align} Since $\langle \phi_\mathrm{eff} \rangle$ should scale similarly to the additive hard-sphere packing fraction $\phi_{\rm hs}$, we have $\mathrm{Var}(\langle \phi_\mathrm{eff} \rangle ) \propto 1/N$. Moreover, since $U^*$ is extensive while $\phi$ is intensive, the derivative scales as $\partial U^*/\partial \phi \propto N$, so that $\mathrm{Var}(U^*) \propto N^2 \cdot N^{-1} = N$ and hence $\chi_\mathrm{dis}[\langle U \rangle] = \mathrm{Var}(U^*)/N$ approaches a constant, confirming Eq.~(\ref{eq:chi_dis_finite}). \section{Structural Relaxation} \label{sec:structural_dynamics} In this section, the dynamic properties of the models $\mathcal{S}$ and $\mathcal{D}$ are compared. To this end, we analyze a time-dependent overlap function that measures the structural relaxation of the particles on a microscopic length scale. The timescale on which this function decays varies from sample to sample; these fluctuations around the average dynamics can be quantified in terms of a dynamic susceptibility. We shall see that the susceptibility in model $\mathcal{S}$ can be split into two terms. While the first term is due to thermal fluctuations and is also present in model $\mathcal{D}$, the second term is due to the disorder in $\sigma$. At low temperatures, the contribution from the disorder can be the dominant term in the susceptibility. For our analysis, we consider MD simulations in the microcanonical ensemble as well as hybrid simulations, combining MD with the Swap Monte Carlo technique (see Sec.~\ref{sec:simulation_details}). In the following, we refer to these dynamics as ``$NVE$'' and ``SWAP'', respectively.
\begin{figure*} \includegraphics{fig7a.pdf} \includegraphics{fig7b.pdf}\\ \includegraphics{fig7c.pdf} \includegraphics{fig7d.pdf}\\ \caption{Overlap $Q(t)$ as a function of time $t$ for $NVE$ (left column) and SWAP dynamics (right column) for models $\mathcal{S}$ and $\mathcal{D}$. For the selected temperatures $T$, the initial configurations are in equilibrium. Solid colored lines represent $60$ individual simulations, while black dashed lines indicate their sample average. All results correspond to systems with $N = 8000$ particles. \label{fig7}} \end{figure*} {\it Glassy dynamics.} A peculiar feature of the structural relaxation of glassforming liquids is the cage effect. On intermediate timescales, each particle gets trapped in a cage that is formed by its neighboring particles. To analyze structural relaxation out of these cages, we therefore have to look at density fluctuations on a length scale $a$ similar to the size of the fluctuations of a particle inside such a cage. On the single-particle level, a simple time-dependent correlation function that measures this relaxation is the self part of the overlap function, defined by \begin{align} Q(t) = \frac{1}{N} \sum_{i=1}^{N} \Theta( a - |{\bf r}_i(t)- {\bf r}_i(0)| ) \, . \label{eq:Q_definition} \end{align} Here, $\Theta$ denotes the Heaviside step function and we choose $a=0.3$ for the microscopic length scale. The behavior of $Q(t)$ is similar to that of the incoherent intermediate scattering function at a wave number corresponding to the location of the first sharp diffraction peak in the static structure factor. We note that we have not introduced any averaging in the definition (\ref{eq:Q_definition}). In the following, we will display the decay of $Q(t)$ for 60 individual samples at different temperatures. The corresponding initial configurations at $t=0$ were fully equilibrated with the aid of the SWAP dynamics, as explained in Sec.~\ref{sec:simulation_details}.
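Given a stored trajectory of particle positions, Eq.~(\ref{eq:Q_definition}) is straightforward to evaluate; the freely diffusing toy trajectory below (with invented parameters) only serves to exercise the function:

```python
import numpy as np

rng = np.random.default_rng(8)

def overlap(traj, a=0.3):
    """Self part of the overlap function, Eq. (Q_definition), for a trajectory
    of unwrapped coordinates with shape (n_times, N, 3)."""
    disp = np.linalg.norm(traj - traj[0], axis=2)     # |r_i(t) - r_i(0)|
    return np.mean(disp < a, axis=1)                  # Heaviside averaged over particles

# Toy trajectory: N particles diffusing freely (stand-in for stored MD data).
n_t, N, dt, D = 50, 500, 0.1, 0.05
steps = rng.normal(scale=np.sqrt(2 * D * dt), size=(n_t - 1, N, 3))
traj = np.concatenate([np.zeros((1, N, 3)), np.cumsum(steps, axis=0)])
Q = overlap(traj)
print(Q[:5])
```

By construction $Q(0) = 1$, and $Q(t)$ decays as particles move farther than $a$ from their initial positions; no sample or time average is applied, matching the definition above.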
Figure \ref{fig7} shows the overlap function $Q(t)$ for model $\mathcal{S}$ and model $\mathcal{D}$, in both cases for the $NVE$ and the SWAP dynamics. In all cases, we can see the typical signatures of glassy dynamics. At a high temperature, $T=0.3$, the function $Q(t)$ exhibits a monotonic decay to zero on a short microscopic timescale. Upon decreasing the temperature, first a shoulder and then a plateau-like region emerges on intermediate timescales. This plateau extends over an increasing timescale with decreasing temperature and indicates the cage effect. Particles are essentially trapped within the same microstate in which they were initially at $t=0$. At the high temperature $T=0.3$ the decay of $Q(t)$ is very similar for $NVE$ and SWAP dynamics. Towards low temperatures, however, the decay is much faster in the case of the SWAP dynamics, as expected. A striking result is that at lower temperatures, the individual curves in model $\mathcal{S}$ show much larger variation than those in model $\mathcal{D}$. In the following, these sample-to-sample fluctuations shall be quantified in terms of a dynamic susceptibility. \begin{figure} \includegraphics{fig8.pdf}% \caption{Relaxation time $\tau$, as extracted from the expectation of the overlap function, $\mathrm{E}[Q](t)$, and the time $t^* = \arg \max_{t} \chi(t)$, at which the maximum of the dynamic susceptibility $\chi(t)$ occurs, for $NVE$ and SWAP dynamics. Here, a system with $N=8000$ particles is considered. \label{fig8}} \end{figure} \textit{Relaxation time $\tau$.} From the expectation of the overlap function, $\mathrm{E}[Q](t)$ (black dashed lines in Fig.~\ref{fig7}), we extract an alpha-relaxation time $\tau$, defined by $\mathrm{E}[Q](\tau) = 1/{\rm e}$. In Fig.~\ref{fig8}, the logarithm of the timescale $\tau$ is shown as a function of inverse temperature $1/T$.
Also included in this plot are the times $t^*$ at which the fluctuations of $Q(t)$ are maximal, which will be discussed in the following paragraph ``Dynamic susceptibility''. One observes an increase of $\tau$ by about five orders of magnitude upon decreasing $T$. This increase is much steeper for the $NVE$ than for the SWAP dynamics, reflecting the fact that $T_{\rm g}^{\rm SWAP}$ is much lower than $T_{\rm g}^{NVE}$ (cf.~Fig.~\ref{fig3}). The glass-transition temperatures defined in Sec.~\ref{sec:thermodynamics} via the drop in the specific heat $C_V(T)$ are approximately consistent with the alternative definition via $\tau(T_{\rm g}) = 10^5$. \begin{figure*} \includegraphics{fig9a.pdf} \includegraphics{fig9b.pdf}\\ \includegraphics{fig9c.pdf} \includegraphics{fig9d.pdf}\\ \caption{Dynamic susceptibility $\chi$ as a function of time $t$ for different temperatures $T$ and systems with $N=8000$ particles. Results for all four combinations of $NVE$ and SWAP dynamics with models $\mathcal{S}$ and $\mathcal{D}$ are shown, as labeled in a)-d). Maxima of $\chi(t)$ are marked by arrows; before determining them, we applied a moving average to the raw data. \label{fig9}} \end{figure*} \newpage {\textit{Dynamic susceptibility $\chi(t)$.}} A characteristic feature of glassy dynamics is the presence of dynamical heterogeneities, which are associated with large fluctuations around the ``average'' dynamics. These fluctuations can be quantified in terms of a dynamic (or four-point) susceptibility. For the overlap function $Q(t)$, this susceptibility $\chi(t)$ can be defined as \begin{align} \chi(t) = N \mathrm{Var}\left(Q(t)\right) \, . \label{Eq:dynamic_sus_Q} \end{align} The function $\chi(t)$ measures the fluctuations of $Q(t)$ around the average $\mathrm{E}[Q](t)$. In practice, we use the data of $Q(t)$ from the ensemble of 60 independent samples. Figure \ref{fig9} shows the dynamic susceptibility $\chi(t)$ for the same cases as for $Q(t)$ in Fig.~\ref{fig7}.
As a common feature of glassy dynamics~\cite{chandler2006, cavagna2009}, $\chi(t)$ exhibits a peak $\chi^* := \mathrm{max}_t~\chi(t)$ at $t = t^*$. The timescale $t^*$ is roughly equal to the alpha-relaxation time $\tau$, see Fig.~\ref{fig8}. At the temperatures $T=0.1$ for the $NVE$ and $T=0.06$ for the SWAP dynamics, $\chi^*$ is more than one order of magnitude larger for model $\mathcal{S}$ than for model $\mathcal{D}$. This indicates that the disorder in $\sigma$ of model $\mathcal{S}$ strongly affects the sample-to-sample fluctuations. In the following paragraph ``Variance decomposition'' we will present how one can distinguish disorder from thermal fluctuations. \begin{figure} \includegraphics{fig10a.pdf} \includegraphics{fig10b.pdf} \caption{Maximum of the dynamic susceptibility, $\chi^* = \mathrm{max}_t \chi (t)$, as a function of $1/T$ for a) the $NVE$ and b) the SWAP dynamics. Results are shown for models $\mathcal{S}$ (blue line) and $\mathcal{D}$ (red line) with $N=8000$ particles. The green solid line displays $\chi_\mathcal{S}^* - \chi_\phi^*$, i.e.~the total susceptibility minus the explained part caused by the packing-fraction fluctuations. \label{fig10}} \end{figure} Figure~\ref{fig10} shows the maximum of the dynamic susceptibility, $\chi^*$, as a function of inverse temperature, $1/T$, for $NVE$ and SWAP dynamics. In both cases, the results for model $\mathcal{S}$ ($\chi_\mathcal{S}^*$) and model $\mathcal{D}$ ($\chi_\mathcal{D}^*$) are included, considering systems with $N=8000$ particles. In all cases $\chi^*$ increases with decreasing temperature $T$, as expected for glassy dynamics. For both types of dynamics the difference $\Delta \chi^* = \chi_\mathcal{S}^* - \chi_\mathcal{D}^*$ increases with decreasing temperature as well. 
The lowest temperatures for which we can calculate $\Delta \chi^*$ are (i) $T= 0.09$ with a relative deviation $\Delta \chi^*/\chi^*_\mathcal{D} \approx 18$ for the $NVE$ and (ii) $T = 0.065$ with $\Delta \chi^*/\chi^*_\mathcal{D} \approx 23$ for the SWAP dynamics. {\textit{Variance decomposition}.} To understand the difference $\Delta \chi^*$ between $\chi_\mathcal{S}$ and $\chi_\mathcal{D}$, we will decompose the dynamic susceptibility $\chi_\mathcal{S}$ of model $\mathcal{S}$ into one term that stems from the thermal fluctuations of the phase-space variables, and a second term that is caused by the sample-to-sample variation of the diameters $\sigma$. As a matter of fact, in model $\mathcal{S}$ the overlap function $Q(t)$ and similar correlation functions depend on \textit{two random vectors}, namely the initial phase-space point $q_0 = \left(r(0),v(0)\right)$ \textit{and} the diameters $\sigma$. As a consequence, we define and calculate $\chi = N \mathrm{Var}(Q)$ on a probability space with respect to the joint-probability density \begin{align} \rho( q_0, \sigma) = \rho(q_0|\sigma)g(\sigma). \label{eq_joint_probability} \end{align} Here $\rho(q_0|\sigma)$ is the conditional phase-space density introduced in Eq.~(\ref{eq_condpsd}) and $g(\sigma)$ is the diameter distribution defined by Eq.~(\ref{eq:density_sigma_global}). Now, since $Q$ depends on two random vectors $q_0$ and $\sigma$, we can decompose $\chi = N \mathrm{Var}(Q)$ according to the \textit{variance decomposition formula}, also called \textit{law of total variance} or \textit{Eve's law} \cite{chung1974}: \begin{align} \mathrm{Var}(Q) &= \mathrm{E}\left[ \mathrm{Var}(Q|\sigma) \right] + \mathrm{Var}\left( \mathrm{E}[Q|\sigma] \right) \label{eq_eve1}\\ &\equiv \overline{ \langle Q^2 - \langle Q \rangle^2 \rangle } + \overline{ \langle Q \rangle^2 - \overline{ \langle Q \rangle}^2} \, . 
\label{eq_eve2} \end{align} Here, $\mathrm{E}\left[ \mathrm{Var}(Q|\sigma) \right]$ describes intrinsic thermal fluctuations, while the term $\mathrm{Var}\left( \mathrm{E}[Q|\sigma] \right)$ expresses fluctuations induced by the disorder in $\sigma$. The first summand in Eq.~(\ref{eq_eve1}) is expected to coincide for both models $\mathcal{S}$ and $\mathcal{D}$ for sufficiently large $N$, as $\mathrm{Var}(Q|\sigma)$ describes intrinsic thermal fluctuations for a given realization of $\sigma$, which are calculated via the \textit{model-independent} conditional phase-space density $\rho(q_0|\sigma)$. The physical observable $\mathrm{Var}(Q|\sigma)$ should not depend on microscopic details of the diameter configuration $\sigma$ for sufficiently large $N$. For the cumulative distribution functions of the diameters, the consistency equation $\lim_{N\to\infty} F_N^\mathcal{S}(s) = F(s) = \lim_{N\to\infty} F_N^\mathcal{D}(s)$ holds. Thus, we expect that $\mathrm{E}^\mathcal{S}\left[ \mathrm{Var}(Q|\sigma) \right] \approx \mathrm{E}^\mathcal{D}\left[ \mathrm{Var}(Q|\sigma) \right]$. This equation should be exact in the limit $N\to\infty$. We have implicitly used this line of argument also in Sec.~\ref{sec:thermodynamics}, where we have only shown numerical results of the specific heat for model $\mathcal{D}$. Furthermore, for model $\mathcal{D}$ we have exactly $\mathrm{E}^\mathcal{D}[\mathrm{Var}(Q|\sigma)] = \mathrm{Var}(Q|\sigma^\mathcal{D}) = \mathrm{Var}^\mathcal{D}(Q)$, since here there is only one diameter configuration $\sigma = \sigma^\mathcal{D}$ for a given system size $N$. Summarizing the results above, we can express the dynamic susceptibility for model $\mathcal{S}$ as follows: \begin{equation} \mathrm{Var}^\mathcal{S}(Q) = \mathrm{Var}^\mathcal{D}(Q) + \mathrm{Var}^\mathcal{S}\left( \mathrm{E}[Q|\sigma] \right) \label{eq_VarS_VarD_Dis}. \end{equation} Now the aim is to estimate the second summand in Eq.~(\ref{eq_VarS_VarD_Dis}). 
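As a sanity check, the law of total variance in Eq.~(\ref{eq_eve1}) can be verified numerically on a toy model in which a scalar disorder variable $s$ plays the role of $\sigma$; all distributions below are illustrative assumptions, not the glassformer itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: s mimics the quenched disorder sigma; Q | s is "thermal".
# Here E[Q|s] = 0.5 + 0.2 s and Var(Q|s) = 0.1^2, so the law of total variance
# predicts Var(Q) = E[Var(Q|s)] + Var(E[Q|s]) = 0.01 + 0.04 * Var(s).
s_samples = rng.normal(0.0, 1.0, size=2000)           # disorder realizations
Q_all, cond_means, cond_vars = [], [], []
for s in s_samples:
    Q = rng.normal(0.5 + 0.2 * s, 0.1, size=200)      # thermal ensemble at fixed s
    Q_all.append(Q)
    cond_means.append(Q.mean())                       # estimate of E[Q|s]
    cond_vars.append(Q.var())                         # estimate of Var(Q|s)

total = np.concatenate(Q_all).var()                   # Var(Q) over the joint density
decomposed = np.mean(cond_vars) + np.var(cond_means)  # Eve's law, term by term
```

For equal-sized groups and population variances the two estimates agree to machine precision, and with $\mathrm{Var}(s)=1$ both are close to $0.01 + 0.04 = 0.05$.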
We assume that we can describe the disorder in $\sigma$ by a single parameter, namely the thermally averaged effective packing fraction $\langle \phi_{\rm eff} \rangle(\sigma)$, defined by Eq.~(\ref{eq_phieff}). This idea has already proven successful in Sec.~\ref{sec:thermodynamics}, where we described the disorder fluctuations of the potential energy. Similarly, we write \begin{align} \mathrm{E}[Q|{\sigma}] \equiv \langle Q \rangle (\sigma) \approx Q^*({\langle \phi_{\rm eff} \rangle}(\sigma)) \, , \label{eq_var2} \end{align} assuming that the values of $\langle Q \rangle(\sigma)$, which depend on $N$ degrees of freedom, can be described by a function $Q^*$ that depends only on a scalar argument, namely $\langle \phi_{\rm eff} \rangle(\sigma)$. The function $Q^*$ is unknown, but can be estimated numerically with a linear-regression analysis, predicting $\langle Q \rangle$ with the regressor $\langle \phi_{\rm eff}\rangle$. Insertion of Eq.~(\ref{eq_var2}) into Eq.~(\ref{eq_VarS_VarD_Dis}) gives \begin{align} \mathrm{Var}^\mathcal{S}(Q) \approx \mathrm{Var}^\mathcal{D}(Q) + \mathrm{Var}^\mathcal{S}( Q^*(\langle \phi_{\rm eff} \rangle) ) \, . \label{eq:variance_decomp_approx1} \end{align} We can write this equation in terms of susceptibilities, \begin{align} \chi_\mathcal{S} &\approx \chi_\mathcal{D} + \chi_\phi,\\ \chi_\phi &:= N\mathrm{Var}^\mathcal{S}( Q^*(\langle \phi_{\rm eff} \rangle) ). \end{align} Along the lines of Eq.~(\ref{eq_variance_U}) in Sec.~\ref{sec:thermodynamics}, we can expand the overlap function $Q^*$ around $\overline{\langle \phi_{\rm eff} \rangle}$ to obtain \begin{align} \mathrm{Var}^\mathcal{S}( Q^*(\langle \phi_{\rm eff} \rangle) ) \approx \mathrm{Var}^\mathcal{S}(\langle \phi_{\rm eff} \rangle) \, \left(\frac{\partial Q^*(\phi ) }{\partial \phi } \big|_{\phi = \overline{\langle \phi_\mathrm{eff} \rangle}} \right)^2.
\label{eq:VarQ_Varphi_inheritence} \end{align} Since $\mathrm{Var}^\mathcal{S}( \langle \phi_{\rm eff} \rangle ) \sim \mathrm{Var}^\mathcal{S}( \phi_{\rm hs} ) \propto N^{-1}$ and $Q^* \sim Q \in \mathcal{O}(1)$, this equation implies that the susceptibility $\chi_\phi$, to leading order, does not depend on $N$. Moreover, for a given temperature $T$ and time $t$, it approaches a constant value in the thermodynamic limit, $N\to \infty$. For small system sizes, however, higher-order corrections to Eq.~(\ref{eq:VarQ_Varphi_inheritence}) cannot be neglected. Beyond that, the discrete nature of the system at small $N$ will lead to a failure of the ``continuity assumption'' (\ref{eq_var2}) itself. Finite-size effects of $\chi$ will be analyzed in the following paragraph. In Fig.~\ref{fig10}, we show for the system with $N=8000$ particles that $\chi_\phi^*$, i.e.~$\chi_\phi$ evaluated at $t = t^*$, indeed captures the sample-to-sample fluctuations in model $\mathcal{S}$ due to the disorder in $\sigma$. Both for $NVE$ and SWAP dynamics, it quantitatively describes the gap between $\chi_\mathcal{S}^*$ and $\chi_\mathcal{D}^*$. \begin{figure} \centering \includegraphics{fig11.pdf} \caption{$\chi_{\mathcal{S}}^*$ as a function of $1/T$ for different system sizes $N$ using $NVE$ dynamics. The dashed lines denote $N/4$, which is the upper bound according to \textit{Popoviciu's inequality on variances}, see Eq.~(\ref{eq:Popoviciou}). \label{fig11}} \end{figure} {\textit{Finite-size effects: Popoviciu's inequality on variances}.} Here, we analyze finite-size effects of the dynamic susceptibility $\chi$. To this end, we again consider the temperature dependence of the maximum of the dynamic susceptibility, $\chi^*$, considering only the case of the $NVE$ dynamics. Note that for model $\mathcal{D}$ finite-size effects in the considered temperature range $0.09 \le T \le 0.3$ are negligible; therefore we only discuss model $\mathcal{S}$ in the following.
Figure~\ref{fig11} shows $\chi_{\mathcal{S}}^*$ as a function of $1/T$ for $N=256$, 500, and 8000. At high temperatures $T$, where fluctuations are small, there is hardly any dependence on the system size $N$. However, upon lowering $T$ a saturation occurs, at least for the small systems. This behavior can be understood via a \textit{hard} stochastic upper limit on fluctuations, given by \textit{Popoviciu's inequality on variances}~\cite{popoviciu1935equations}. This inequality is valid for \textit{any bounded} real-valued random variable $X$: Let $c$ and $C$ be the lower and upper bound of $X$, respectively; then Popoviciu's inequality states that $\mathrm{Var}(X) \leq (C - c)^2/4$. Applying this result to $X=Q$ with sharp boundaries $c=0$ and $C=1$ yields \begin{align} \chi \equiv N \mathrm{Var}(Q) \leq N/4. \label{eq:Popoviciou} \end{align} Our data show that this upper bound is nearly attained for $N=256$ and $N=500$ at low $T$. This can be understood from the fact that equality in (\ref{eq:Popoviciou}) holds precisely when $Q$ is a Bernoulli variable, i.e.~when there are exactly two outcomes $Q=0$ or $Q=1$, each with probability $1/2$. In this sense, the saturation of $\chi$ should occur at temperatures $T$ and system sizes $N$ for which, at a given $t$, $Q(t)$ has decayed close to $0$ for approximately half of the samples while remaining close to $1$ for the other half. The inequality (\ref{eq:Popoviciou}) is very useful to estimate how large a system size $N$ needs to be to avoid this kind of finite-size effect: one simply compares the measured $\chi$ at a given $N$ to the number $\chi_c := N/4$. If $\chi \approx \chi_c$, one has to consider larger system sizes $N$.
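The saturation mechanism can be illustrated in a few lines: for a Bernoulli-like overlap, where $Q$ is $0$ for half of the samples and $1$ for the other half, the bound (\ref{eq:Popoviciou}) is essentially attained (a toy illustration; the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256                                  # particles, as in the smallest systems

# Bernoulli-like overlap: about half of the samples have fully decayed (Q = 0),
# the other half have not (Q = 1), mimicking the half-decayed time t.
Q = rng.integers(0, 2, size=100_000).astype(float)

chi = N * Q.var()                        # chi = N * Var(Q)
bound = N / 4                            # Popoviciu bound for 0 <= Q <= 1
```

Here `chi` comes out within a fraction of a percent of `bound`, illustrating why the $N=256$ curve in Fig.~\ref{fig11} levels off at $N/4$.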
\section{Summary and conclusions} \label{sec:conclusions} In this work, we use molecular dynamics (MD) computer simulation in combination with the SWAP Monte Carlo technique to study a polydisperse model glassformer that has recently been introduced by Ninarello {\it et al.}~\cite{ninarello2017}. Two methods are used to choose the particle diameters $\sigma_1, \dots, \sigma_N$ to obtain samples with $N$ particles. Both of these approximate the desired distribution density $f(\sigma)\sim \sigma^{-3}$ with their histogram. In model $\mathcal{S}$ the diameters are drawn from $f(\sigma)$ in a stochastic manner. In model $\mathcal{D}$ the diameters are obtained via a deterministic scheme that assigns an appropriate set of $N$ values to them. We systematically compare the properties of model $\mathcal{S}$ to those of model $\mathcal{D}$ and investigate how the sample-to-sample variation of the diameters in model $\mathcal{S}$ affects various quantities: (i) classical phase-space functions such as the potential energy $U$ and its fluctuations, and (ii) dynamic correlation functions such as the overlap function $Q(t)$ and its fluctuations as well. Obviously, model $\mathcal{D}$ has the advantage that always ``the most representative sample''~\cite{santen2001liquid} is used for any system size $N$, while model $\mathcal{S}$ may suffer from statistical outliers, especially in the case of small $N$. This indicates that the quenched disorder introduced by the different diameter configurations in model $\mathcal{S}$ may strongly affect fluctuations that we investigate systematically in this work. Our main findings can be summarized as follows: The sample-to-sample fluctuations in model $\mathcal{S}$ can be described in terms of a single scalar parameter, namely the effective packing fraction $\langle \phi_{\rm eff} \rangle(\sigma)$, defined by Eq.~(\ref{eq_phieff}). 
In terms of this parameter, one can explain the disorder fluctuations of the potential energy (cf.~Fig.~\ref{fig6}) as well as the gap between the dynamic susceptibilities of models $\mathcal{S}$ and $\mathcal{D}$ (cf.~Fig.~\ref{fig10}). The sample-to-sample fluctuations of the potential energy in model $\mathcal{S}$ can be quantified in terms of the disorder susceptibility $\chi_\mathrm{dis}^\mathcal{S}$, which is a non-trivial function of temperature (cf.~Fig.~\ref{fig4}) and remains finite in the thermodynamic limit $N\to \infty$. In model $\mathcal{S}$, at very low temperatures, the dynamic susceptibility is dominated by the fluctuations due to the diameter disorder. Thus, if one aims to analyze the ``true'' dynamic heterogeneities of a glassformer that stem from the intrinsic thermal fluctuations, one should preferably use model $\mathcal{D}$. Note that it is possible to calculate the same thermal susceptibility in model $\mathcal{S}$ as in model $\mathcal{D}$; however, the calculation in $\mathcal{S}$ is more involved, as it demands an additional average over the disorder, as shown in Sec.~\ref{sec:structural_dynamics}. This implies that model $\mathcal{S}$ requires more sampling in this case. Our findings are of particular importance regarding recent simulation studies of polydisperse glassforming systems in external fields \cite{guiselin2020overlap, RFIM_in_glassforming_liquid, lamp2022, lerner2019, rainone2020} where a model $\mathcal{S}$ approach was used to select the particle diameters. However, in these works sample-to-sample fluctuations due to the disorder in $\sigma$ have largely been ignored. Exceptions are the studies by Lerner {\it et al.}~\cite{lerner2019, rainone2020} where samples whose energy deviates from the mean energy by more than 0.5\% were simply discarded. Here the use of a model $\mathcal{D}$ scheme would be a more efficient alternative.
However, one should still keep in mind that with regard to a realistic description of experiments on polydisperse colloidal systems, it might be more appropriate to choose model $\mathcal{S}$.
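For illustration, the two diameter-selection schemes compared in this work can be sketched via inverse-transform sampling of $f(\sigma)\sim \sigma^{-3}$; the diameter range and the mid-point quantile rule below are illustrative assumptions, not the published model parameters:

```python
import numpy as np

def inverse_cdf(u, s_min, s_max):
    """Invert F(sigma) = (s_min^-2 - sigma^-2) / (s_min^-2 - s_max^-2)
    for f(sigma) ~ sigma^-3 on [s_min, s_max]."""
    a, b = s_min**-2, s_max**-2
    return (a - u * (a - b)) ** -0.5

def diameters_S(N, s_min=0.7, s_max=1.6, rng=None):
    """Model S: draw the N diameters stochastically from f(sigma)."""
    rng = rng or np.random.default_rng()
    return inverse_cdf(rng.random(N), s_min, s_max)

def diameters_D(N, s_min=0.7, s_max=1.6):
    """Model D: deterministic mid-point quantiles of the same distribution."""
    u = (np.arange(N) + 0.5) / N
    return inverse_cdf(u, s_min, s_max)
```

Model $\mathcal{D}$ produces the same diameter list for every sample at fixed $N$, whereas model $\mathcal{S}$ fluctuates from sample to sample; both histograms converge to $f(\sigma)$ as $N\to\infty$.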
\section{Introduction} \label{sec:introduction} The detailed study of young star cluster (YSC) kinematics is vital to understand their long-term evolution and survivability. Only if star clusters are massive enough can they overcome the so-called ``infant mortality'' \citep{Lada2003}, surviving internal and external processes that disturb the gravitational potential, such as supernova explosions, which lead to abrupt, violent gas expulsion \citep[e.g.,][]{Goodwin2006,Bastian2006,PortegiesZwart2010}, the interaction or collision with a giant molecular cloud (GMC) in the Galactic disk, or a change in the Galactic tidal field \citep[e.g.,][and references therein]{Krumholz2020}. The detailed kinematic analysis of such young, still embedded clusters and their gaseous and stellar content down to the hydrogen burning limit (or even below) is challenging and has only become feasible in recent years with new telescopes, instruments, and computational methods. For example, \citet{Zari2019a} showed the three-dimensional, highly substructured nature of the Orion Nebular Cloud (ONC), detecting multiple kinematic components with distinct age differences, while \citet{Jerabkova2019} suggest the detection of multiple populations in its young stellar population. \citet{McLeod2015} studied the detailed kinematics of the ``Pillars of Creation'' in the Eagle Nebula, finding radial velocity (RV) differences of $\sim 2\,{\rm km}/{\rm s}$ between the different pillars. Their analysis also revealed a possible protostellar outflow and allowed them to identify its two lobes as blue- and redshifted counterparts. Time-domain studies \citep[e.g.,][]{Sabbi2020} have the capability of detecting protoplanetary disks, stellar variability, and the binary fraction, which is important for the evolution of the whole cluster system (we refer to \citet{PortegiesZwart2010} and \citet{Krumholz2020} for a detailed overview).
The measurement of the internal kinematics of YSCs has been observationally very expensive. Especially for the determination of RVs, high resolution stellar spectra had to be obtained. For many fiber or slit spectrographs, only a handful of stars can be observed simultaneously. With the development of efficient, large field of view (FOV) integral field units (IFUs) over the past decade and the increasing computational capacities it has become possible to study the RV of entire stellar populations in resolved star clusters. The optical IFU with the largest FOV to date ($1\,{\rm arcmin}^2$) is the Multi Unit Spectroscopic Explorer \citep[MUSE,][]{Bacon2010} mounted at UT4 of the Very Large Telescope (VLT), which allows us to survey larger regions similar to photometric studies. MUSE has been proven to be an excellent instrument to spectroscopically map nearby star-forming regions to study the kinematics of their stars and the gas simultaneously \citep[e.g.,][for the latter two, hereafter Paper~1 and Paper~2]{McLeod2015,McLeod2020,Zeidler2018,Zeidler2019a}. In \citetalias{Zeidler2018} we showed that it is indeed possible to measure stellar RVs in YSCs to an accuracy of $\sim 2\,{\rm km}\,{\rm s}^{-1}$ using \dataset[MUSEpack]{\doi{10.5281/zenodo.3433996}}, despite the lack of pre-main sequence (PMS) stellar spectral libraries and the variable and high local background. For a detailed description of the code and the assessment of the uncertainties we refer to \citetalias{Zeidler2019a}. This work is the third paper in a series \citepalias{Zeidler2018,Zeidler2019a} spectroscopically studying the Galactic young massive star cluster Westerlund 2 \citep[Wd2,][]{Westerlund1961} using MUSE data. Wd2 is the central ionizing star cluster of the \ion{H}{2} region RCW49 \citep{Rodgers1960} located in the Carina-Sagittarius spiral arm at a distance of $\sim4.16$\,kpc \citep{Zeidler2015,VargasAlvarez2013} at an age of 1--2\,Myr \citep{Zeidler2015}. 
With a total photometric stellar mass of $3.7\cdot10^4\,{\rm M}_\odot$, Wd2 is the second most massive young star cluster in the Milky Way \citep[MW,][]{Zeidler2017}, after Westerlund~1 \citep[$\sim5\cdot10^4\,{\rm M}_\odot$, e.g.,][]{Clark2005,Andersen2017}. The cluster is built from two coeval clumps, the main cluster (MC) and the northern clump (NC) \citep{Hur2014,Zeidler2015}, and is highly mass segregated \citep{Zeidler2017}. The close proximity, its numerous OB stars in the cluster center \citep[e.g.,][]{Rauw2004,Rauw2011,Bonanos2004,VargasAlvarez2013}, and its young age (so far no supernova explosion has been detected) make Wd2 a prime target to study the internal processes of star cluster formation and pre-supernova evolution. In this work we present a detailed analysis of the dynamical state of Wd2 to determine whether it has a chance to overcome infant mortality, and to better understand its formation process. This paper is structured as follows. In Sect.~\ref{sec:data} we give a brief introduction to the used data. In Sect.~\ref{sec:spat_dist} we reanalyze the spatial structure of Wd2 to obtain missing key parameters. In Sect.~\ref{sec:RV} we present a detailed analysis of the gas and stellar RVs including the dynamical state of Wd2. Sect.~\ref{sec:Gaia} introduces the data obtained from the \textit{Gaia} mission and in Sect.~\ref{sec:runaways} we analyze high-velocity stellar runaway candidates. In Sect.~\ref{sec:discussion} we provide an in-depth discussion of the results obtained in the previous sections and put them into a broader context, while in Sect.~\ref{sec:summary} we summarize the analysis and our findings. \section{The data and data reduction} \label{sec:data} We will only provide a brief overview of the dataset, data reduction, and RV measurements. A detailed description was presented in \citetalias{Zeidler2018} and \citetalias{Zeidler2019a}. The data and their derived products used in this work, such as stellar RVs, are identical to those of \citetalias{Zeidler2019a}.
We surveyed Wd2 using 21.5\,h of VLT/MUSE time (Program ID: 097.C-0044(A), 099.C-0248(A), PI: P.~Zeidler). We combined 11 short (220\,s) and 5 long (3600\,s) exposures to simultaneously cover the gas, the high-mass, luminous OB stars, and the fainter PMS stars down to $\sim 1\,{\rm M}_\odot$. MUSE was operated in the extended mode covering a wavelength range of 4600--$9350\,{\rm \AA}$. The short exposures were executed in the wide-field mode without the adaptive optics (AO) system (WFM\_NOAO) while four of the five long exposures were executed in the wide-field mode with AO (WFM\_AO), which results in a spatial resolution improvement by a factor of two. Because in AO mode the notch filter of the Na-lasers blocks the coverage in the 5780--$5990\,{\rm \AA}$ range, we chose the WFM\_NOAO mode for the short exposures so as to cover the \ion{He}{1}$\lambda$5876 line. The data were reduced with the \texttt{musereduce} module of the python package \dataset[MUSEpack]{\doi{10.5281/zenodo.3433996}}\footnote{\dataset[MUSEpack]{\doi{10.5281/zenodo.3433996}} is made available for download on Github \url{https://github.com/pzeidler89/MUSEpack.git}} \citep{Zeidler2019} together with the standard MUSE data reduction pipeline \citep{Weilbacher2012,Weilbacher2015}. In total we extracted 1726 stellar spectra with a mean signal-to-noise ratio\footnote{Whenever we refer to the S/N of spectra we always provide a S/N per spectral bin.} (S/N) $\ge5$ using the software package PampelMuse \citep{Kamann2013,Kamann2016} in combination with our deep, high-resolution, multi-band photometric star catalog extracted from \textit{Hubble} Space Telescope (HST) observations \citep[ID: 13038, PI: A. Nota,][]{Zeidler2015} to detect and de-blend the stellar spectra. The world coordinate system (WCS) of all data was corrected to match the \textit{Gaia} data release 2 \citepalias[DR2,][]{GaiaCollaboration2016,GaiaCollaboration2018}.
\section{The stellar spatial distribution} \label{sec:spat_dist} \citet{Zeidler2015} confirmed the finding of \citet{Hur2014} that Wd2 is built from two subclumps, the main cluster (MC) and the northern clump (NC) with a projected separation of $\sim 1$\,pc at the distance of Wd2. We reanalyze the spatial structure using both the stellar surface density and the stellar mass density obtained from our photometric HST catalog down to the 50\% completeness limit of all cluster member stars \citep{Zeidler2017}. We use a maximum likelihood approach to fit two peaks to the density distributions including a common offset to account for a halo of lower-mass stars and test two different distributions: 1) two 2D Gaussian profiles and 2) the Elson-Fall-Freeman (EFF) profile \citep{Elson1987}. The latter is an empirical surface density profile as a function of $r$ that was found to well describe the surface density of massive YSCs in the MW. It has the form: \begin{equation} \label{eq:EFF} \Sigma(r) = \Sigma_0 \left(1+\frac{r^2}{a^2}\right)^{-\sfrac{\gamma}{2}}, \end{equation} with $\Sigma_0$ being the peak surface density and $a$ being a scale parameter. The core radius, $r_c$, used by the \citet{King1966} profile (originally used to fit globular cluster profiles) is: \begin{equation} \label{eq:EFF_rc} r_c = a \left(2^{\sfrac{2}{\gamma}} -1\right)^{\sfrac{1}{2}}, \end{equation} where $\gamma$ and $a$ are the EFF profile parameters \citep[for a detailed summary see also][]{PortegiesZwart2010}. After running extensive Markov-Chain Monte Carlo (MCMC) fitting of the two density distributions to the data, the Akaike information criterion \citep[AIC, ][]{Akaike1974}, the Bayesian information criterion \citep[BIC, ][]{Schwarz1978}, and the Watanabe -- Akaike information criterion \citep[WAIC, ][]{Watanabe2010,Gelman2013} clearly favor the EFF model over a Gaussian distribution.
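Equations~(\ref{eq:EFF}) and (\ref{eq:EFF_rc}) translate directly into code; a minimal sketch:

```python
import numpy as np

def eff_surface_density(r, sigma0, a, gamma):
    """EFF surface density: Sigma(r) = Sigma_0 * (1 + r^2/a^2)^(-gamma/2)."""
    return sigma0 * (1.0 + r**2 / a**2) ** (-gamma / 2.0)

def eff_core_radius(a, gamma):
    """King-style core radius of the EFF profile: r_c = a * sqrt(2^(2/gamma) - 1)."""
    return a * np.sqrt(2.0 ** (2.0 / gamma) - 1.0)
```

With the best-fit mass-density values of the MC ($a = 0.44$\,pc, $\gamma = 7.61$), `eff_core_radius` returns $r_c \approx 0.20$\,pc, consistent with Tab.~\ref{tab:spat_dist}.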
The best-fit parameters for the mass and number density distributions are shown in Tab.~\ref{tab:spat_dist} and Fig.~\ref{fig:spat_dist}. \begin{deluxetable*}{ccccllrrr}[htb] \tablecaption{The best-fit surface density parameters \label{tab:spat_dist}} \tablehead{\multicolumn{1}{c}{ } & \multicolumn{1}{c}{Region} & \multicolumn{1}{c}{R.A.} & \multicolumn{1}{c}{Dec.} & \multicolumn{1}{c}{$\Sigma_0$} & \multicolumn{1}{c}{$\Sigma_{bck}$} & \multicolumn{1}{c}{$a$} & \multicolumn{1}{c}{$r_c$} & \multicolumn{1}{c}{$\gamma$} \\ \multicolumn{2}{c}{ } &\multicolumn{1}{c}{(J2000)} &\multicolumn{1}{c}{(J2000)} & \multicolumn{1}{c}{$({\rm arcmin}^{-2})$} & \multicolumn{1}{c}{$({\rm arcmin}^{-2})$} & \multicolumn{1}{c}{(pc)} & \multicolumn{1}{c}{(pc)} & \multicolumn{1}{c}{} } \startdata \multirow{2}{*}{$\Sigma_{\rm num}$} & MC & $10^{\rm h}24^{\rm m}01^{\rm s}.788$ & $-57^{\circ}45^{\rm m}28^{\rm s}.63$ & $3.02 \cdot 10^4$ & \multirow{2}{*}{68.27} & $0.53 \pm 0.01$ & $0.20 \pm 0.01$ & $10.47 \pm 0.01$\\ & NC & $10^{\rm h}24^{\rm m}02^{\rm s}.438$ & $-57^{\circ}44^{\rm m}41^{\rm s}.28$ & $1.00 \cdot 10^3$ & & $0.78 \pm 0.01$ & $0.30 \pm 0.02$ & $10.07 \pm 0.05$ \\[0.2cm] \multirow{2}{*}{$\Sigma_{\rm mass}$} & MC & $10^{\rm h}24^{\rm m}01^{\rm s}.735$ & $-57^{\circ}45^{\rm m}29^{\rm s}.60$ & $3.72 \cdot 10^4\,{\rm M}_\odot$ & \multirow{2}{*}{$70.82\,{\rm M}_\odot$} & $0.44 \pm 0.01$ &$0.20 \pm 0.01$ & $7.61 \pm 0.01$\\ & NC & $10^{\rm h}24^{\rm m}02^{\rm s}.521$ & $-57^{\circ}44^{\rm m}39^{\rm s}.00$ & $1.86 \cdot 10^3\,{\rm M}_\odot$ & & $0.59 \pm 0.01$ & $0.26 \pm 0.01$ & $7.63 \pm 0.02$\\ \enddata \tablecomments{The sub clump parameters of the best fit EFF model based on the number density (first two rows) and the mass density (second two rows), for the MC and NC. 
At a distance of 4.16\,kpc, a projected distance of 50\,arcsec corresponds to 1\,pc.} \end{deluxetable*} The dynamical evolution of a star cluster is largely driven by its mass distribution and, therefore, we will use the mass density as reference distribution. We define the coordinates of the Wd2 cluster (${\rm R.A.} = 10^{\rm h}24^{\rm m}02^{\rm s}.128$, ${\rm Dec.} = -57^{\circ}45^{\rm m}04^{\rm s}.30$) as the geometric mean of the centers of the two clumps, similar to the definition in \citet{Zeidler2015}. Integrating over the mass density distribution leads to a total clump mass above the 50\% completeness limit of $m_{\rm MC}^{50} = (0.55 \pm 0.01)\cdot 10^4\,{\rm M}_\odot$ and $m_{\rm NC}^{50} = (0.05 \pm 0.01)\cdot 10^4 \,{\rm M}_\odot$, which agrees with the masses estimated using the stellar mass function \citep{Zeidler2017}. The half-mass radius for the EFF profile is defined as: \begin{equation} \label{eq:EFF_hm} r_{\rm hm} = a \left(0.5^{\frac{2}{2-\gamma}} -1\right)^{\sfrac{1}{2}}, \end{equation} and yields $r_{\rm hm} = (0.23 \pm 0.01)\,{\rm pc}$ and $r_{\rm hm} = (0.31 \pm 0.01)\,{\rm pc}$ for the MC and NC, respectively\footnote{For the detailed error propagation see eq.~\ref{eq:sEFF_mass_tot} to \ref{eq:sEFF_rhm}}. With $\gamma_{\rm MC} = 7.61 \pm 0.01$ and $\gamma_{\rm NC} = 7.63 \pm 0.02$, the power-law decline of the stellar distribution is steeper than observed in other YSCs \citep[typical values are $\gamma=2$--3, e.g.,][]{Elson1987,PortegiesZwart2010}, which may be explained by the composite nature of Wd2, the high degree of mass segregation, and the fact that the distribution is fit only to stars above the 50\% completeness limit, which increases the core densities. \begin{figure*}[htb] \includegraphics[width=0.95\textwidth]{spat_dist.png} \caption{The star and mass density distributions of the completeness corrected photometric star catalog of Wd2 down to the 50\% completeness limit.
The top left panels show the observed stellar mass and number density distributions, and the bottom left panels show the simulated density distributions. The core radii for the MC and NC are indicated by the dashed circles. On the right we show the HST $F814W$ image with the core radii $r_{\rm c}$, the half-mass radii $r_{\rm hm}$, and the scale parameters $a$, as well as the MC and NC centers of the best-fit EFF model for the mass density. The red asterisk marks the center of Wd2 defined by the geometric mean between the MC and the NC. The black outline marks the cluster region of Wd2.} \label{fig:spat_dist} \end{figure*} For any further analysis of the cluster stellar population we define the size of Wd2 as the combined, encircled area of 1.5 times the radius (around each clump)\footnote{The factor of 1.5 is chosen such that the irregular, elongated shape of the stellar distribution (see top, left frames of Fig.~\ref{fig:spat_dist}) is taken into account.}, at which the stellar mass density drops to the halo (background) density of $\Sigma_{\rm bck} = 70.82\,{\rm M}_\odot\,{\rm arcmin}^{-2}$ (black lines in Fig.~\ref{fig:spat_dist}). \section{The radial velocity profile} \label{sec:RV} To measure RVs\footnote{Throughout the paper we may use the term ``velocity'' interchangeably with ``radial velocity'' if it is clear from context.} we used our new method that allows us to measure stellar RVs without the need for a spectral template library, which was implemented in the \texttt{RV\_spectrum} module of \dataset[MUSEpack]{\doi{10.5281/zenodo.3433996}}. \texttt{RV\_spectrum} uses strong stellar absorption lines in combination with a Monte Carlo approach to measure stellar RVs to an accuracy of $1.10\,{\rm km}\,{\rm s}^{-1}$. A detailed description of \dataset[MUSEpack]{\doi{10.5281/zenodo.3433996}}, the measurements of the stellar RVs, and the underlying assumptions and sources of RV uncertainties are provided in \citetalias{Zeidler2019a}.
\subsection{\texttt{RV\_spectrum} -- a new way to measure RVs} \label{sec:RV_spectrum} To aid the reader in understanding the further analyses, we provide a brief summary of the key steps for measuring RVs with the \texttt{RV\_spectrum} class of \dataset[MUSEpack]{\doi{10.5281/zenodo.3433996}}: \begin{enumerate} \item To provide a clean sample of spectra, a visual inspection of all extracted spectra is necessary. It ensures that the local background subtraction was successful and that the spectral lines used for the fit do not show signs of emission, which is common for PMS stars and is the result of accretion processes. \item The regions around each of the chosen absorption lines are fitted, using a user-provided spectral line library, together with a low-order polynomial to match the local continuum. A spectral template is created using the line parameters of the best-fitting solution and the rest-frame wavelengths. \item These templates are cross-correlated with the stellar spectra using the core of each line, which provides an RV measurement per absorption line. This cross-correlation is typically repeated 10,000 times, and for each iteration the uncertainties of the spectrum are randomly reordered. Sigma clipping is applied to ensure that lines with ``odd'' profiles are removed from the final RV fit. \item The remaining, trustworthy lines are then cross-correlated together, with a typical repetition of 20,000 times. The resulting Gaussian distribution gives the RV of the star (mean) and the uncertainty ($1\sigma$). \end{enumerate} Extensive tests of this method are described in \citetalias{Zeidler2019a} to ensure its reliability and to show its limitations and possible sources for errors. In Appendix~\ref{sec:plots} we show the extracted and fitted spectra of four different Wd2 member stars (see Figs.~\ref{fig:spec_fit_HeIHeII_9183}, \ref{fig:spec_fit_HeIHeII_7613}, and \ref{fig:spec_fit_MgICaII}).
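The Monte Carlo idea behind steps 3 and 4 can be sketched as follows; this toy version fits a flux-weighted line centroid instead of MUSEpack's template cross-correlation, and the constant normalized continuum is an assumption:

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def mc_radial_velocity(wave, flux, err, lam0, cont=1.0, n_iter=1000, rng=None):
    """Monte Carlo RV estimate from a single absorption line.

    Toy stand-in for the template cross-correlation in MUSEpack: the noise is
    re-drawn from the provided uncertainties in every iteration, and the spread
    of the line centroids yields the 1-sigma RV uncertainty.  `cont` is the
    assumed (constant, normalized) continuum level; `lam0` is the rest-frame
    wavelength of the line.
    """
    rng = rng or np.random.default_rng()
    rvs = np.empty(n_iter)
    for i in range(n_iter):
        f = flux + rng.normal(0.0, err, size=flux.size)  # perturb within errors
        depth = cont - f                                 # absorption-line depth
        lam_c = np.sum(wave * depth) / np.sum(depth)     # flux-weighted centroid
        rvs[i] = (lam_c - lam0) / lam0 * C_KMS           # Doppler shift in km/s
    return rvs.mean(), rvs.std()                         # RV and 1-sigma spread
```

The standard deviation over the iterations plays the role of the $1\sigma$ RV uncertainty quoted above.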
\subsection{The gas velocities} \label{sec:RV_gas} To obtain the gas velocity profile we use an approach similar to that of \citet{McLeod2015}, stacking the spectra of individual gas emission lines into a single spectral line per spatial pixel (spaxel). This stacking results in a well-sampled line, which is fit by a Gaussian profile to measure the RVs on a spaxel-by-spaxel basis. This method is computationally more efficient than measuring RVs with \dataset[MUSEpack]{\doi{10.5281/zenodo.3433996}} but it is only applicable if there are a significant number of strong, non-blended spectral lines available that have a reasonably flat continuum. We used the \texttt{\uppercase{python}} packages \texttt{pyspeckit} \citep{Ginsburg2011} and \texttt{spectral\_cube}\footnote{\url{https://spectral-cube.readthedocs.io/en/latest/}} to combine the H$\alpha$, the \ion{N}{2}\,$\lambda\lambda6549.85, 6585.28\,{\rm \AA}$, and the \ion{S}{2}\,$\lambda\lambda6718.29, 6732.67\,{\rm \AA}$ emission lines. The continuum was extracted in the spectral range of 6620--$6660\,{\rm \AA}$. We avoid using the [\ion{O}{1}]\,$\lambda\lambda 6300, 6363{\rm \AA}$ emission lines due to the applied ``modified sky subtraction'' \citepalias{Zeidler2019a}, which properly recovers these lines but may slightly change their centroids due to telluric residuals. To obtain the mean gas velocity of the \ion{H}{2} region, small differences in the velocities of the individual gas components can be neglected (e.g., \citealp{McLeod2016}, \citetalias{Zeidler2018}). Additionally, we masked the stars and extrapolated the gas velocities at each stellar position to get a clean gas velocity map (see left frame of Fig.~\ref{fig:EBV_RVgas}). The median RV of the gas, determined from all MUSE pixels, is $15.9\,{\rm km\,s}^{-1}$, which we will use henceforth as the systemic RV of Wd2 relative to the Sun. This systemic velocity is subtracted from all further RV measurements in this study unless stated otherwise.
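The stacking step can be sketched as follows: each emission line is mapped onto a common velocity grid around its rest-frame wavelength and summed, and the centroid of the stacked profile gives the gas RV of the spaxel (a toy version of the \texttt{pyspeckit}/\texttt{spectral\_cube} workflow; it assumes continuum-subtracted fluxes and uses a centroid instead of a Gaussian fit):

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def stack_lines(wave, flux, rest_waves, v_grid):
    """Stack several emission lines of one spaxel onto a common velocity grid.

    Each line is converted from wavelength to velocity around its rest-frame
    wavelength and linearly interpolated onto v_grid; the sum yields one
    well-sampled line whose flux-weighted centroid gives the gas RV (km/s).
    Assumes a continuum-subtracted spectrum.
    """
    stacked = np.zeros_like(v_grid)
    for lam0 in rest_waves:
        v = (wave - lam0) / lam0 * C_KMS               # Doppler velocity per pixel
        stacked += np.interp(v_grid, v, flux, left=0.0, right=0.0)
    return np.sum(v_grid * stacked) / np.sum(stacked)  # centroid = gas RV
```

In the real analysis, v_grid would be chosen narrow enough that neighboring lines (e.g., H$\alpha$ and [\ion{N}{2}]) do not leak into each other's windows.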
\begin{figure*}[htb] \includegraphics[width=\textwidth]{EBV_RVgas.png} \caption{Left: The RV map of the gas. The bottom numbers of the color bar represent the measured gas RVs while the top numbers mark the RVs corrected for the systemic motion of Wd2 ($15.9\,{\rm km\,s}^{-1}$). Right: The $E(B-V)$ color excess map \citep[similar to][]{Zeidler2015} at a resolution of 0.8\,arcsec, representing the average seeing of the MUSE dataset (Paper 2). The outline of the gas RV map is overplotted to orient the reader. In both frames, the centers of the MC and the NC are marked in green, including their scale parameters $a$ as defined in Sect.~\ref{sec:spat_dist}.} \label{fig:EBV_RVgas} \end{figure*} The gas velocity profile (left frame of Fig.~\ref{fig:EBV_RVgas}) clearly shows that the central part of the cloud is receding while the outer ridges move toward us. When comparing the gas RV map with the extinction map \citep[see right frame of Fig.~\ref{fig:EBV_RVgas} and][]{Zeidler2015}, we see a correlation between the magnitude of the $E(B-V)$ color excess and the gas motion. By comparing the average gas velocity with the average $E(B-V)$ color excess (left frame of Fig.~\ref{fig:EBV_RVgas_analysis}) we indeed see significantly higher $E(B-V)$ values at negative RVs. Locations with a lower line-of-sight extinction allow us to look deeper into the gas cloud\footnote{We note here that the \citet{Zeidler2015} color-excess map does not distinguish between the extinction caused by the \ion{H}{2} region and the foreground extinction. Given the relatively small FOV ($\sim 5' \times 5'$) of the survey area, we do not expect any significant variations of the foreground extinction. The regions with low extinction in the \citet{Zeidler2015} color-excess map are in agreement with the foreground color excess, $E(B-V)_{\rm fg}=1.05$\,mag, estimated by \citet{Hur2014}.}.
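The velocity-binned extinction average (orange curve in the left frame of Fig.~\ref{fig:EBV_RVgas_analysis}) amounts to a simple binned mean; the synthetic spaxel values below only encode the qualitative trend of higher $E(B-V)$ at negative RVs and are not the measured data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic spaxel values mimicking the observed trend: receding gas
# (positive RV) is seen through less dust than approaching gas.
rv = rng.uniform(-12.0, 12.0, 20_000)                   # gas RV per spaxel, km/s
ebv = 1.8 - 0.03 * rv + rng.normal(0.0, 0.1, rv.size)   # E(B-V), mag

# Mean color excess in 1 km/s velocity bins
bins = np.arange(-12.0, 12.5, 1.0)
idx = np.digitize(rv, bins)
ebv_mean = np.array([ebv[idx == i].mean() for i in range(1, bins.size)])
centers = 0.5 * (bins[:-1] + bins[1:])
```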
We conclude that we actually see the expansion of the \ion{H}{2} region driven by the stellar winds and the far-ultraviolet (FUV) radiation of the numerous OB stars of Wd2 (e.g., \citealp{Rauw2004,Rauw2005,Rauw2007,Bonanos2004,VargasAlvarez2013,Drew2014}, \citetalias{Zeidler2018}). To better visualize the expansion we show a black-and-white version of the gas RV map (right frame of Fig.~\ref{fig:EBV_RVgas_analysis}) in which we marked the bottom (blue) and the top (red) 10\% of the RV distribution, as well as the gas with $v_{\rm sys}\pm 1\,{\rm km}\,{\rm s}^{-1}$. This suggests a differential RV of the \ion{H}{2} region of $\sim 15\,{\rm km\,s}^{-1}$. \begin{figure*}[htb] \includegraphics[width=\textwidth]{EBV_RVgas_analysis.png} \caption{Left: The pixel-by-pixel $E(B-V)$ color excess vs. the gas RVs. In orange we mark the velocity-binned extinction average, showing a clear correlation between the two. The red-dashed line shows the median extinction. Right: The gas RV map marked with the bottom (blue) and the top (red) 10\% of the RV distribution, as well as the gas with $v_{\rm sys}\pm 1\,{\rm km}\,{\rm s}^{-1}$. These RV ranges are also marked on the bottom of the left frame. The centers of the clumps as well as the scale radii $a$ (dashed circles) are plotted to orient the reader.} \label{fig:EBV_RVgas_analysis} \end{figure*} \subsection{The stellar radial velocities} \label{sec:RV_stars} To measure stellar RVs we used the following spectral absorption lines, depending on the stellar type: \ion{He}{1}\,$\lambda\lambda\,4922, 5876, 6678, 7065\,{\rm \AA}$, \ion{He}{2}\,$\lambda\lambda\,4685, 5412\,{\rm \AA}$, \ion{Mg}{1}\,$\lambda\lambda\,5167, 5172, 5183\,{\rm \AA}$, \ion{Na}{1}\,$\lambda\lambda\,5889, 5895\,{\rm \AA}$, and \ion{Ca}{2}\,$\lambda\lambda\,8498, 8542, 8662\,{\rm \AA}$.
We intentionally avoided other strong absorption lines, such as the Balmer lines, since these may be unreliable for RV measurements due to the young stellar age, the possible ongoing accretion processes, and nebular contamination. An overview of the applied method is given in Sect.~\ref{sec:RV_spectrum} and an in-depth analysis of the underlying assumptions, the selection criteria, as well as the limitations and uncertainties is presented in \citetalias{Zeidler2019a}. In total we extract reliable RVs from 388 stars. Based on the $F814W-F160W$ vs. $F814W$ and the $F555W-F814W$ vs. $F555W$ color-magnitude diagrams (CMDs) created with our HST photometric catalog \citep{Zeidler2015}, 117 sources are located in the Wd2 cluster and 271 are foreground field stars. The high extinction ($A_V=6.12$\,mag) toward Wd2 allows for a clean separation between cluster members and field stars. The typical (mean) RV uncertainties are $\sigma_{\rm typ}^{\rm clm} = 1.96\,{\rm km}\,{\rm s}^{-1}$ and $\sigma_{\rm typ}^{\rm field} = 1.87\,{\rm km}\,{\rm s}^{-1}$ for the cluster members and MW field stars, respectively. In Fig.~\ref{fig:RV_dist} we show the stellar RV distribution of the cluster members (top panel) and field stars (bottom panel). The field stars span a wider RV range than the cluster members, but their RV spaces generally overlap. This is expected due to the location of Wd2 close to the tangent point of the Carina-Sagittarius spiral arm. To create the RV histograms we use a running mean with a step size of $0.1\,{\rm km}\,{\rm s}^{-1}$ and a bin width of the typical uncertainty. This method reduces a possible bias caused by binning the data. \begin{figure}[htb] \plotone {velocity_dist.png} \caption{The RV distribution of the cluster members (top panel) and field stars (bottom panel).
The step size is $0.1\,{\rm km}\,{\rm s}^{-1}$ with a bin width using the typical uncertainties of $\sigma_{\rm typ}^{\rm clm} = 1.96\,{\rm km}\,{\rm s}^{-1}$ and $\sigma_{\rm typ}^{\rm field} = 1.87\,{\rm km}\,{\rm s}^{-1}$.} \label{fig:RV_dist} \end{figure} \subsection{The Wd2 velocity profile} From the isochrone fitting to the CMDs \citep{Zeidler2015,Sabbi2020} we know that the PMS turn-on is at $\sim 3-5\,{\rm M}_\odot$, which means that most O and B stars are already in their main-sequence (MS) phase. Therefore, we divide the stars into three groups: \begin{itemize} \item[1.] O-stars: showing \ion{He}{1} and \ion{He}{2} absorption features (16 stars), \item[2.] B-stars: showing \ion{He}{1} but no \ion{He}{2} absorption features (26 stars), and \item[3.] later-type stars: showing metal features, such as the \ion{Mg}{1} triplet or the \ion{Ca}{2} triplet (75 stars). \end{itemize} These groups divide the stars into different evolutionary stages and certain mass ranges. As the next step we create RV histograms of the three groups (see Fig.~\ref{fig:RV_dist_grouped}) with the same method as for Fig.~\ref{fig:RV_dist}. \begin{figure*}[htb] \plotone {velocity_dist_grouped.png} \caption{The normalized RV distribution of the cluster member O-stars (top panel), B-stars (middle panel), and later-type PMS stars (bottom panel). The histograms are created in the same way as in Fig.~\ref{fig:RV_dist}. The orange line represents the cumulative RV distribution. The results of the MCMC fits are shown by the dashed Gaussians. At the bottom of each panel we mark the mean RV as well as the velocity dispersion of each RV group. The numbers indicate the bona-fide stars per RV group (${\rm rv}\pm1\sigma$).} \label{fig:RV_dist_grouped} \end{figure*} The histograms immediately reveal a significant difference in the velocity distribution of the stellar types. The more massive the stars, the smaller their velocity dispersion.
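The running-mean histogram described above is a sliding-window count; a minimal sketch with stand-in RVs (the real per-star measurements are not reproduced here) reads:

```python
import numpy as np

rng = np.random.default_rng(3)

rvs = rng.normal(0.0, 5.0, 117)   # stand-in for the 117 cluster-member RVs
width = 1.96                      # bin width = typical RV uncertainty, km/s
step = 0.1                        # step size, km/s

centers = np.arange(rvs.min(), rvs.max() + step, step)
# Number of stars inside a sliding window of fixed width -> smooth histogram
counts = np.array([np.sum(np.abs(rvs - c) <= width / 2) for c in centers])
```

Because each star contributes to many overlapping windows, the result is far less sensitive to the placement of bin edges than a conventional histogram.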
To quantify this initial analysis we fit a combination of Gaussians to the RV histograms using MCMC, which allows us to properly account for the individual RV uncertainties. The three distributions are best described by a single Gaussian for the O-stars, a combination of two Gaussians for the B-stars, and five Gaussians for the PMS stars. Using the Akaike and Bayesian information criteria (AIC and BIC) as well as the convergence of the MCMC fit, we ensure that five velocity groups are the best-fitting number of components without overfitting the distribution. For completeness, the corresponding corner plots can be found in Fig.~\ref{fig:corner_obstars} and \ref{fig:corner_pms} in Appendix~\ref{sec:plots}. The results of the fits are also shown in Fig.~\ref{fig:RV_dist_grouped} and listed in Tab.~\ref{tab:RV_stars_comp}. The uncertainties are represented by one standard deviation of the marginalized distributions, reflecting the contribution of the individual stellar RV uncertainty measurements. The inspection of the O-star histogram shows a possible second peak at $\sim (9\pm4)\,{\rm km}\,{\rm s}^{-1}$, but the fit does not converge.
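The model-comparison step can be illustrated as follows; a basic expectation-maximization fit stands in for the MCMC sampler, the two-component test data loosely mimic the B1/B2 groups, and all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two-component test sample loosely mimicking the B1/B2 velocity groups
data = np.concatenate([rng.normal(-5.2, 3.0, 200), rng.normal(3.9, 3.0, 200)])

def gmm_loglike(x, w, mu, sig):
    """Log-likelihood of a 1-D Gaussian mixture."""
    pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) \
          / (sig * np.sqrt(2.0 * np.pi))
    return np.sum(np.log(pdf.sum(axis=1)))

def fit_gmm(x, k, n_iter=300):
    """Basic expectation-maximization fit of a k-component mixture
    (a stand-in for the MCMC sampler used in the paper)."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))
    sig = np.full(k, x.std())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        resp = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / sig
        resp /= resp.sum(axis=1, keepdims=True)
        n_k = resp.sum(axis=0)
        w = n_k / x.size
        mu = (resp * x[:, None]).sum(axis=0) / n_k
        sig = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
        sig = np.maximum(sig, 1e-3)  # guard against collapsing components
    return w, mu, sig

# Bayesian information criterion for 1, 2, and 3 components
bic = {}
for k in (1, 2, 3):
    w, mu, sig = fit_gmm(data, k)
    n_par = 3 * k - 1  # k means, k dispersions, k-1 free weights
    bic[k] = n_par * np.log(data.size) - 2.0 * gmm_loglike(data, w, mu, sig)
best_k = min(bic, key=bic.get)
```

The BIC penalizes every extra component by $3\ln N$, so it only accepts additional Gaussians when the gain in likelihood is substantial.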
\begin{deluxetable}{lrrrr}[htb] \tablecaption{The stellar RV components \label{tab:RV_stars_comp}} \tablehead{\multicolumn{1}{c}{name} & \multicolumn{1}{c}{RV} & \multicolumn{1}{c}{$\sigma$RV} & \multicolumn{2}{c}{n (stars)} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{(${\rm km}\,{\rm s}^{-1}$)} &\multicolumn{1}{c}{(${\rm km}\,{\rm s}^{-1}$)} & \multicolumn{1}{c}{all} & \multicolumn{1}{c}{Wd2}} \startdata \multicolumn{5}{c}{O-stars} \\ O1 (purple) & $0.21\pm0.08$ & $1.99\pm0.08$ & 5 & 5\\[0.1cm] \multicolumn{5}{c}{B-stars} \\ B1 (dark blue) & $-5.18\pm0.24$& $2.96\pm0.21$ & 9 & 9\\ B2 (light blue) & $3.90\pm0.36$ & $4.86\pm0.35$ & 10 & 9\\[0.1cm] \multicolumn{5}{c}{PMS-stars} \\ PMS1 (yellow) & $-16.75\pm0.33$ & $5.18\pm0.35$ & 15 & 12\\ PMS2 (green) & $-8.40\pm0.38$ & $1.85\pm0.33$ & 9 & 6\\ PMS3 (red) & $-3.57\pm0.19$ & $2.00\pm0.22$ & 14 & 14\\ PMS4 (brown) & $3.10\pm0.16$ & $2.46\pm0.36$ & 8 & 5\\ PMS5 (black) & $9.33\pm0.14$ & $1.81\pm0.14$ & 7 & 5\\ \enddata \tablecomments{The results of the MCMC fit to the stellar RV distributions. Column 1 is the velocity group name used throughout the rest of this work. The indicated colors correspond to the ones used in the figures. Column 2 shows the mean RVs while Column 3 shows the velocity dispersion of each component. Columns 4 and 5 are the number of stars located within $1\sigma$ of each RV component inside the survey area and inside the Wd2 cluster (as defined in Sect.~\ref{sec:spat_dist}).} \end{deluxetable} To investigate the origin of the individual stellar velocity groups we analyze the spatial locations of the stars within one standard deviation of the mean of each velocity group (error bars in Fig.~\ref{fig:RV_dist_grouped}). Additionally, the stars also need to be located within the Wd2 cluster as defined in Sect.~\ref{sec:spat_dist}.
These two limitations ensure that we only analyze stars that can be uniquely identified with one velocity group and whose locus coincides with the immediate cluster, which leaves us with 5 to 14 stars per group (see Tab.~\ref{tab:RV_stars_comp}). We use a 2D kernel density estimator (KDE) with a Gaussian kernel to better visualize the number density of the (relatively low number of) stars in each velocity group (see Fig.~\ref{fig:kde}). \subsubsection{The O and B stars} The O and B stars are mostly concentrated toward the center of Wd2, with the majority co-located with the MC. This is in complete agreement with a highly mass-segregated cluster. There is no apparent correlation between the spatial location of the stars and the two velocity groups of the B stars. Possible undetected binaries can be excluded as a source of the two peaks of the B-star distribution. The dispersion of MUSE is 2.4\,\AA, which means that a minimum relative velocity of $\sim 80$--$160\,{\rm km}\,{\rm s}^{-1}$ is necessary to detect line splitting caused by the binary components (compared to the $(9.08\pm0.43)\,{\rm km}\,{\rm s}^{-1}$ between B1 and B2, see Tab.~\ref{tab:RV_stars_comp}). The visual inspection of all used spectra does not hint at any such line splitting. \subsubsection{The late-type PMS stars} To avoid confusion due to the five groups and the larger number of stars, we divide the PMS stars into three different plots (bottom frames of Fig.~\ref{fig:kde}). The stars with an RV of $(-16.75 \pm 5.18)\,{\rm km}\,{\rm s}^{-1}$ (PMS1, yellow) appear to be distributed throughout the cluster region, with the highest concentration of stars aligned with the center of Wd2 along the MC--NC axis. The stars with RVs of $(-8.40 \pm 1.85)\,{\rm km}\,{\rm s}^{-1}$ (PMS2, green) and $(3.10 \pm 2.46)\,{\rm km}\,{\rm s}^{-1}$ (PMS4, brown) tend to be located north of the cluster center.
The remaining two RV groups, with $(-3.57 \pm 2.00)\,{\rm km}\,{\rm s}^{-1}$ (PMS3, red) and $(9.33 \pm 1.81)\,{\rm km}\,{\rm s}^{-1}$ (PMS5, black), are more concentrated toward the MC. Interestingly, the differences of the mean velocities of groups PMS2--PMS4 and PMS3--PMS5 are very similar, with $(11.50 \pm 0.41)\,{\rm km}\,{\rm s}^{-1}$ and $(12.90 \pm 0.24)\,{\rm km}\,{\rm s}^{-1}$, respectively. \begin{figure*}[htb] \plotone{RV_spat_dist_kde.png} \caption{The spatial distribution of the individual RV groups for the O and B stars (top frames) and the later-type PMS stars (bottom three frames). The color scheme for the velocity groups is identical to Fig.~\ref{fig:RV_dist_grouped}. We divided the later-type PMS stars into three individual plots to avoid confusion. The contours represent the spatial stellar density of each stellar velocity group determined via a KDE. The white dashed circles mark the scale radii $a$ and the dash-dotted lines mark the Wd2 cluster area (see Sect.~\ref{sec:spat_dist}).} \label{fig:kde} \end{figure*} \subsubsection{The spatial location of the velocity groups} The number of stars per velocity group is fairly low, so to quantify the spatial correlation we use a 2D Kolmogorov-Smirnov (KS) test \citep{Hodges1958,Peacock1983,Fasano1987}. The null hypothesis ($H_0$) we use is: two individual velocity groups follow the same spatial distribution. This means that if $H_0$ is true the $p$-value is larger than the significance level $\alpha$. We test $H_0$ against $\alpha = 5\%$ (confidence level: 95\%). In addition, we also test their spatial locations against all Wd2 members of the HST photometric star catalog and the ones detected with MUSE. The resulting $p$-values are presented in Tab.~\ref{tab:KS}. To also apply the 1D KS test, we transform the two-dimensional locations of the stars into a one-dimensional distribution by creating a cumulative distribution of the stars' distances to a reference point.
The results of the 1D KS test vary strongly with the choice of the reference point, so we decided that the 1D KS test is not suited for our purposes. The KS-test results can be summarized as follows: \begin{itemize} \item HST vs. MUSE catalog: A $p$-value of 0.274 shows that their underlying spatial distribution is the same. This minimizes the chance of introducing correlations based on detection effects, such as completeness. \item Groups O1, B1, and B2: The 2D KS test confirms that these groups are spatially correlated to each other and to the full Wd2 MUSE catalog, in agreement with a mass-segregated star cluster. We must note here that the $p$-value between the velocity groups B1 and B2 is only marginally significant. \item PMS2 -- PMS4: With $p = 0.045$ the correlation is only marginally significant. Yet, the inspection of the KDE plot and the fact that this is by far the highest $p$-value with respect to the other PMS groups lead us to conclude that these two groups are spatially correlated. \item PMS3 -- PMS5: The $p$-values suggest that these groups are correlated not only with each other but also with the O1, B1, B2, and PMS1 groups. Given their very different RV profiles, we suggest that this is only the case because they are centered around the MC. \end{itemize} The 2D KS tests confirm our initial analysis of the spatial correlation of the PMS3 and PMS5 groups with the MC and of the PMS2 and PMS4 groups with the NC. The PMS1 group is consistent with a group that follows the spatial distribution of the Wd2 cluster.
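A minimal version of the two-sample 2D KS test uses the quadrant counts around each data point, in the spirit of \citet{Fasano1987}, with a permutation estimate of the $p$-value; the sketch below may differ in detail from the implementation used for Tab.~\ref{tab:KS}.

```python
import numpy as np

rng = np.random.default_rng(11)

def quadrant_fracs(pts, ox, oy):
    """Fraction of points in each of the four quadrants around (ox, oy)."""
    gx, gy = pts[:, 0] > ox, pts[:, 1] > oy
    return np.array([np.mean(gx & gy), np.mean(gx & ~gy),
                     np.mean(~gx & gy), np.mean(~gx & ~gy)])

def ks2d_stat(a, b):
    """Largest quadrant-fraction difference over all data points as origins."""
    d = 0.0
    for ox, oy in np.vstack([a, b]):
        d = max(d, np.max(np.abs(quadrant_fracs(a, ox, oy)
                                 - quadrant_fracs(b, ox, oy))))
    return d

def ks2d_pvalue(a, b, n_perm=200):
    """Permutation p-value: how often random relabelings beat the observed D."""
    d_obs = ks2d_stat(a, b)
    pool = np.vstack([a, b])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pool))
        count += ks2d_stat(pool[perm[:len(a)]], pool[perm[len(a):]]) >= d_obs
    return count / n_perm

# Same underlying spatial distribution -> H0 should not be rejected
a = rng.normal(0.0, 1.0, (30, 2))
b = rng.normal(0.0, 1.0, (40, 2))
p_same = ks2d_pvalue(a, b)
# Clearly offset group -> H0 rejected
c = rng.normal(2.5, 1.0, (40, 2))
p_diff = ks2d_pvalue(a, c)
```

With samples of only a few to a few dozen stars, the permutation approach avoids relying on asymptotic significance formulas.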
\begin{deluxetable*}{rrrrrrrrrrr}[htb] \tablecaption{The KS-test of the stellar spatial distributions \label{tab:KS}} \tablehead{\multicolumn{1}{c|}{}& \multicolumn{1}{c}{HST}& \multicolumn{1}{c}{MUSE}& \multicolumn{1}{c}{O1}& \multicolumn{1}{c}{B1}& \multicolumn{1}{c}{B2}& \multicolumn{1}{c}{PMS1}& \multicolumn{1}{c}{PMS2}& \multicolumn{1}{c}{PMS3}& \multicolumn{1}{c}{PMS4}& \multicolumn{1}{c}{PMS5}} \startdata \multicolumn{1}{r|}{HST} & & \textbf{0.274} & \textbf{0.096} & \textbf{0.585} & 0.012 & \textbf{0.120} & 0.005 & \textbf{0.228} & 0.008 & \textbf{0.193} \\ \multicolumn{1}{r|}{MUSE}& \textbf{0.274} & & \textbf{0.108} & \textbf{0.889} & \textit{0.047} & \textbf{0.262} & 0.014 & \textbf{0.473} & 0.008 & \textbf{0.197} \\ \multicolumn{1}{r|}{O1} & \textbf{0.096} & \textbf{0.108} & & \textbf{0.102} & \textbf{0.360} & 0.037 & 0.036 & \textbf{0.155} & 0.022 & \textbf{0.401} \\ \multicolumn{1}{r|}{B1} & \textbf{0.585} & \textbf{0.889} & \textbf{0.102} & & \textbf{0.124} & \textbf{0.458} & \textbf{0.058} & \textbf{0.581} & 0.035 & \textbf{0.351} \\ \multicolumn{1}{r|}{B2} & 0.012 & \textit{0.047} & \textbf{0.360} & \textbf{0.124} & & 0.021 & 0.003 & \textbf{0.143} & 0.002 & \textbf{0.119} \\ \multicolumn{1}{r|}{PMS1}& \textbf{0.120} & \textbf{0.262} & 0.037 & \textbf{0.458} & 0.021 & & 0.022 & \textbf{0.102} & 0.028 & \textbf{0.168} \\ \multicolumn{1}{r|}{PMS2}& 0.005 & 0.014 & 0.036 & \textbf{0.058} & 0.003 & 0.022 & & 0.014 & \textit{0.045} & 0.014 \\ \multicolumn{1}{r|}{PMS3}& \textbf{0.228} & \textbf{0.473} & \textbf{0.155} & \textbf{0.581} & \textbf{0.143} & \textbf{0.102} & 0.014 & & 0.007 & \textbf{0.242} \\ \multicolumn{1}{r|}{PMS4}& 0.008 & 0.008 & 0.022 & 0.035 & 0.002 & 0.028 & \textit{0.045} & 0.007 & & 0.027 \\ \multicolumn{1}{r|}{PMS5}& \textbf{0.193} & \textbf{0.197} & \textbf{0.401} & \textbf{0.351} & \textbf{0.119} & \textbf{0.168} & 0.014 & \textbf{0.242} & 0.027 & \\ \enddata \tablecomments{This table shows the $p$-values of the 2D KS test for 
the spatial correlation between the individual velocity groups. To guide the reader's eye, all $p$-values above 0.05 are marked in bold face, while the $p$-values that are marginally below 0.05 are marked in italic.} \end{deluxetable*} \subsubsection{Are the velocity groups a result of small number statistics?} \label{sec:smal_number_stats} Even though our sample consists of 117 cluster member stars, dividing them into 8 individual velocity groups leaves only a handful of stars per group (see Tab.~\ref{tab:RV_stars_comp}), which raises the question whether the individual groups are a result of small number statistics. In the following we estimate the likelihood that the five PMS RV groups are the result of a random occurrence by simulating 300 realizations of the PMS stellar RV distribution using Bayesian sampling. We test two different scenarios: \begin{itemize} \item [1.] The true RV distribution only has one broad peak; \item [2.] The true RV distribution is similar to the distribution of the B-stars, including a blueshifted component (representing the PMS1 peak). \end{itemize} For the first scenario, we sample the 300 different realizations using a likelihood distribution of one broad Gaussian with a mean velocity of $-2.12\,{\rm km}\,{\rm s}^{-1}$ and a velocity dispersion of $11.09\,{\rm km}\,{\rm s}^{-1}$, estimated from the cluster member RV distribution (see Fig.~\ref{fig:RV_dist}). For the second scenario we use a combination of three Gaussians, representing the PMS1, B1, and B2 groups (see Tab.~\ref{tab:RV_stars_comp}). For both scenarios the RV uncertainties are sampled from a likelihood distribution of the form $p(\sigma) \propto \sigma \cdot e^{\sfrac{-\sigma}{a}}$, which is a good (empirical) fit to the observed uncertainties (see Fig.~\ref{fig:err_dist}). We then attempt to recover both the true RV distribution of each scenario and a five-component solution, using the same priors and technique as for the real data.
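Since $p(\sigma) \propto \sigma\,e^{-\sigma/a}$ is a Gamma distribution with shape 2 and scale $a$, the uncertainty sampling can be sketched as follows; the scale $a$ and the number of stars per realization are placeholders, not the fitted values.

```python
import numpy as np

rng = np.random.default_rng(5)

a = 1.0  # scale of the empirical uncertainty distribution, km/s (placeholder)

# p(sigma) ∝ sigma * exp(-sigma/a)  <=>  Gamma(shape=2, scale=a),
# with analytic mean 2a and variance 2a^2
sigma = rng.gamma(shape=2.0, scale=a, size=100_000)

# One realization of scenario 1: a single broad RV peak (mean and dispersion
# from the text), observed with uncertainties drawn from the distribution above
n_stars = 75  # placeholder sample size
rv_true = rng.normal(-2.12, 11.09, n_stars)
rv_obs = rv_true + rng.normal(0.0, sigma[:n_stars])
```

Repeating the last step 300 times and refitting each realization yields the recovery fractions quoted below.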
To analyze the results we compare the number of MCMC runs that reach convergence\footnote{We consider only those runs that converged to one value for each parameter and whose parameters stay well within the set boundaries. The latter ensures physically meaningful results (e.g., no negative amplitudes).}. For the first scenario, we find five peaks in only 16.0\% (48 out of 300) of the realizations, while in 80.3\% (241 out of 300) the underlying true distribution can be recovered. For the second scenario, we find five peaks in 25.0\% (75 out of 300) of the realizations, while in 75.0\% (225 out of 300) the underlying distribution can be recovered. These results, in combination with the correlation between the spatial location of the stars and their membership in certain RV groups, suggest that it is unlikely that small number statistics are the reason for the five groups. Although the five RV peaks are the most probable result, a second, independent dataset, such as higher-resolution spectroscopy or high-precision astrometry (see Sect.~\ref{sec:Gaia}), could provide an independent confirmation. \subsection{The dynamical state} \label{sec:duy_state} To determine whether this cluster has a chance of overcoming ``infant mortality'' we assess its dynamical state by estimating its dynamical mass $M_{\rm dyn}$, the virial radius $r_{\rm vir}$, and the dynamical time $t_{\rm dyn}$ or crossing time \citep[detailed derivations and discussions of these parameters can be found in][and references therein]{Spitzer1987,Fleck2006,PortegiesZwart2010,Krumholz2020,Adamo2020}.
The dynamical mass is defined as follows: \begin{equation} \label{eq:M_dyn} M_{\rm dyn} = \eta \left(\frac{\sigma^2 r_{\rm hm}}{G}\right), \end{equation} where $\sigma$ is the 1D velocity dispersion, $G$ the gravitational constant, $r_{\rm hm}$ the half-mass radius, and $\eta$ a dimensionless parameter linking observationally accessible parameters with theory, typically $\eta = 9.75$ for clusters with $\gamma > 4$ (see Tab.~\ref{tab:spat_dist} for the Wd2 parameters). For this value, the virial radius is $r_{\rm vir} = 1.625 \cdot r_{\rm hm}$. The typical assumption is that massive star clusters are spherically symmetric, which is not the case for Wd2. Therefore, we decided to analyze the following cases: 1) the MC and NC are separate clusters; 2) the MC and NC are located at the same position in space with $r_{\rm hm,Wd2} = (r_{\rm hm,MC} + r_{\rm hm,NC}) / 2$; 3) the half-mass radius of Wd2 incorporates both the MC and the NC, hence $r_{\rm hm,Wd2} = r_{\rm hm,NC} + r_{\rm hm,MC} + d({\rm MC},{\rm NC})$; and 4) a thought experiment determining the minimum half-mass radius necessary for a bound spherical cluster with the photometric mass of Wd2. The results are: \begin{itemize} \item[1.] Due to the high degree of mass segregation we use the mean velocity dispersions of the PMS velocity groups PMS3 and PMS5, $(1.91 \pm 0.26)\,{\rm km}\,{\rm s}^{-1}$, and PMS2 and PMS4, $(2.16 \pm 0.49)\,{\rm km}\,{\rm s}^{-1}$, for the MC and NC, respectively. These yield $M_{\rm dyn,MC} = (1.9 \pm 0.5)\cdot10^3\,{\rm M}_\odot$ and $M_{\rm dyn,NC} = (3.3 \pm 1.4)\cdot10^3\,{\rm M}_\odot$. \item[2.] The assumed half-mass radius of Wd2 is $r_{\rm hm,Wd2} = (0.13\pm0.04)\,{\rm pc}$. The velocity dispersion, $\sigma_{\rm Wd2} = (11.09 \pm 1.36)\,{\rm km}\,{\rm s}^{-1}$, is estimated from the cluster member velocity distribution (see Fig.~\ref{fig:RV_dist}). This yields a dynamical mass of $M_{\rm dyn,Wd2} = (7.5 \pm 1.9)\cdot10^4\,{\rm M}_\odot$. \item[3.]
The assumed half-mass radius of Wd2 incorporating the MC and NC is $r_{\rm hm,Wd2} = (1.57\pm0.01)\,{\rm pc}$, which leads to a dynamical mass of $M_{\rm dyn,Wd2} = (4.4 \pm 1.1)\cdot10^5\,{\rm M}_\odot$ (with $\sigma_{\rm Wd2}$ as in 2.). \item[4.] We assume $M_{\rm dyn} = M_{\rm phot} = (3.7 \pm 0.8)\cdot10^4\,{\rm M}_\odot$ \citep{Zeidler2017}. With $\sigma_{\rm Wd2}$ as in 2., the half-mass radius is $r_{\rm hm} = (0.13\pm0.04)\,{\rm pc}$. \end{itemize} The dynamical time, or crossing time, is the time that a star needs to cross the cluster system. It indicates how long a system needs to establish or re-establish dynamical equilibrium. It is defined as: \begin{equation} \label{eq:t_dyn} t_{\rm dyn} = \sqrt{\frac{r_{\rm vir}^3}{GM_{\rm phot}}}. \end{equation} For the four cases the dynamical time yields: 1) $t_{\rm dyn,MC} = 0.3\,{\rm Myr}$ and $t_{\rm dyn,NC} = 1.6\,{\rm Myr}$; 2) $t_{\rm dyn,Wd2} = 0.034\,{\rm Myr}$; 3) $t_{\rm dyn,Wd2} = 4.64\,{\rm Myr}$; and 4) $t_{\rm dyn,Wd2} = 0.11\,{\rm Myr}$. It becomes clear that the dynamical mass and time depend strongly on the structure of the underlying system and on the assumptions made to determine these parameters, which we will discuss in detail in Sect.~\ref{sec:discussion}. \section{The \textit{G\lowercase{aia}} DR2} \label{sec:Gaia} The Wd2 cluster and its parental \ion{H}{2} region RCW49 are being observed by the \textit{Gaia} satellite, and the data collected so far are part of Data Release 2 \citep[DR2,][]{GaiaCollaboration2016,GaiaCollaboration2018}. Its location in the Carina-Sagittarius spiral arm as well as the extinction and crowding impose limitations on the DR2 accuracy. Hence, many cluster member parameters, such as stellar velocities and parallaxes, are still poorly constrained\footnote{For example, only three stars of the HST photometric catalog have \textit{Gaia} RVs (Paper 1).}.
Nevertheless, we analyze the existing \textit{Gaia} stellar proper motions (pms) using priors based on the knowledge we gained from the HST and MUSE data. We cross-match the HST and \textit{Gaia} catalogs. Of the 20,482 point sources in the HST catalog, 1239 are included in the \textit{Gaia} DR2, of which 471 are cluster member stars based on the HST CMD selection \citep{Zeidler2015,Sabbi2020}. To select stars with a clean astrometric solution we use the following magnitude-based limits, as suggested by \citet{Lindegren2018}: \begin{equation} \label{eq:clean_astrometric_solution_gaia} u < 1.2 \cdot \max{\left(1, \exp{\left(-0.2\cdot\left(G-19.5\right)\right)}\right)}, \end{equation} where $G$ is the \textit{Gaia} $G$-band magnitude and $u = \left(\chi^2 / \nu \right)^{\sfrac{1}{2}}$, with $\chi^2$ the astrometric goodness-of-fit statistic in the ``along-scan'' direction and $\nu$ the associated number of good observations. This leaves us with 282 cluster members, of which 85 also have MUSE RVs. We use the 282 cluster members to calculate the systemic pms in R.A. and Dec.: $\mu_{\alpha \ast,{\rm sys}} = -5.17\,{\rm mas}\,{\rm yr}^{-1}$\footnote{$\mu_{\alpha \ast}$ is the deprojected, declination-corrected pm in R.A.: $\mu_\alpha \cdot \cos{\delta}$.} and $\mu_{\delta,{\rm sys}} = 3.00\,{\rm mas}\,{\rm yr}^{-1}$, corresponding to $-101.9\,{\rm km}\,{\rm s}^{-1}$ and $59.1\,{\rm km}\,{\rm s}^{-1}$ at the distance of Wd2 (4.16\,kpc). As for the cluster RVs (Sect.~\ref{sec:RV}), we subtract the systemic velocities from the cluster members throughout the rest of this work unless stated otherwise. To analyze the pm distributions we only use the 85 stars that have MUSE RVs. This allows us to differentiate between O-stars, B-stars, and PMS stars. For the pm distributions we use the same method as for the RVs (Sect.~\ref{sec:RV_stars}).
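Equation~(\ref{eq:clean_astrometric_solution_gaia}) and the conversion from ${\rm mas}\,{\rm yr}^{-1}$ to ${\rm km}\,{\rm s}^{-1}$ can be written compactly as follows; the function names are ours, and the constant $\sim$4.74 is one au per year expressed in ${\rm km}\,{\rm s}^{-1}$.

```python
import numpy as np

def clean_astrometry(u, g_mag):
    """Magnitude-dependent unit-weight-error cut of Lindegren et al. (2018),
    with u = sqrt(chi2 / nu)."""
    return u < 1.2 * np.maximum(1.0, np.exp(-0.2 * (g_mag - 19.5)))

# proper motion [mas/yr] -> transverse velocity [km/s] at distance d [kpc]
KAPPA = 4.740470  # km/s per (mas/yr * kpc); 1 au per year in km/s

def pm_to_kms(pm_mas_yr, d_kpc):
    return KAPPA * pm_mas_yr * d_kpc

v_ra = pm_to_kms(-5.17, 4.16)   # systemic mu_alpha* of Wd2 -> ~ -101.9 km/s
v_dec = pm_to_kms(3.00, 4.16)   # systemic mu_delta of Wd2  -> ~ +59.1 km/s
```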
The typical uncertainties are $\sigma \mu_{\alpha \ast} = 0.332\,{\rm mas}\,{\rm yr}^{-1}$ ($6.55\,{\rm km}\,{\rm s}^{-1}$) and $\sigma \mu_{\delta} = 0.329\,{\rm mas}\,{\rm yr}^{-1}$ ($6.49\,{\rm km}\,{\rm s}^{-1}$) for cluster members, and $\sigma \mu_{\alpha \ast} = 0.524\,{\rm mas}\,{\rm yr}^{-1}$ and $\sigma \mu_{\delta} = 0.542\,{\rm mas}\,{\rm yr}^{-1}$ for field stars. We show the field star distributions in the bottom panel of Fig.~\ref{fig:pm_dist} (R.A. in black and Dec. in green). The arrows indicate the respective cluster member systemic pms. Similar to the RVs (see Fig.~\ref{fig:RV_dist}), the velocity spaces of the field stars and the cluster members overlap. This means that pms, too, are not suitable to improve the cluster member selection. \begin{figure}[htb] \plotone{pm_dist.png} \caption{The \textit{Gaia} pm distributions of the stars toward the Wd2 cluster. The left two plots of the top panel show the pm distributions in R.A. and Dec. for all cluster member stars (black), O-stars (purple), B-stars (blue), and PMS stars (red), computed with the same technique as the RV distributions (see Fig.~\ref{fig:RV_dist} and \ref{fig:RV_dist_grouped}). On the right we show the pm distribution of the five RV groups (see Fig.~\ref{fig:RV_dist_grouped}). The bottom panel shows the pm distributions of the foreground field stars (R.A. in black and Dec. in green). The arrows indicate the systemic pms of the Wd2 stars.} \label{fig:pm_dist} \end{figure} The pm profiles of the O-stars, B-stars, and PMS stars (top panels of Fig.~\ref{fig:pm_dist}) indicate a distribution similar to the RV profile. The O-stars and B-stars are centered around $0\,{\rm km}\,{\rm s}^{-1}$, with the B-stars showing a slightly broader velocity profile. The distribution of the PMS stars is much broader, covering more than $20\,{\rm km}\,{\rm s}^{-1}$. While they are also centered around the systemic pm in declination, they appear to be slightly offset in R.A.
In both pm directions the PMS stars show two peaks, at $\mu_{\alpha \ast} \approx 0\,{\rm km}\,{\rm s}^{-1}$ and $\mu_{\alpha \ast} \approx 12\,{\rm km}\,{\rm s}^{-1}$, and at $\mu_{\delta} \approx -5\,{\rm km}\,{\rm s}^{-1}$ and $\mu_{\delta} \approx 8\,{\rm km}\,{\rm s}^{-1}$. The fairly high pm uncertainties (in comparison to the RVs) do not allow us to resolve the five individual velocity groups detected with the RVs, assuming they have a similar separation and dispersion in the pm directions. Additionally, the \textit{Gaia} DR2 data are shallower than the MUSE dataset, which may lead to an effect similar to the one we saw in Paper 1 for the shallower MUSE sample, where also only two RV peaks were detected ($\sim 17\,{\rm km}\,{\rm s}^{-1}$ apart). In the top right panels of Fig.~\ref{fig:pm_dist} we show the pm distributions of the five PMS RV groups. Their distributions indicate marginal relative shifts, yet the relatively large uncertainties and the low numbers (not all RV stars have pms) do not allow for a confirmation of the velocity groups. Future \textit{Gaia} data releases will provide the necessary precision. \section{High velocity runaway candidates} \label{sec:runaways} High stellar densities in YSCs (either naturally formed or through rapid mass segregation), binaries, and higher-order systems increase the probability of close encounters within the cluster \citep[e.g., ][and references therein]{PortegiesZwart2010}. These dynamical interactions and supernova explosions (if one binary component explodes) can give a star a ``kick'', making it a runaway star. Runaway events are believed to be the main source of populating the field with these massive objects \citep{Blaauw1961,PortegiesZwart2010}. In the MW, sources with a velocity $>30\,{\rm km}\,{\rm s}^{-1}$ relative to the local standard of rest are considered runaway stars \citep{Hoogerwerf2001}.
Recent studies found that a significant number of massive O and B stars may have been ejected from YSCs, including Wd2, NGC3603, and R136 \citep{Roman-Lopes2011,Lennon2018,Drew2018,Drew2019}. In both the MUSE RV distribution and the \textit{Gaia} DR2 pm distributions (see Fig.~\ref{fig:RV_dist_grouped} and \ref{fig:pm_dist}) we see bona-fide cluster member stars with velocities exceeding $\pm30\,{\rm km}\,{\rm s}^{-1}$. To analyze these runaway candidates we calculate their peculiar velocities based on the three velocity components (RV, $\mu_{\alpha \ast}$, $\mu_{\delta}$). We only consider those sources whose total pm uncertainty does not exceed 50\% prior to the systemic velocity subtraction, which ensures that a proper direction of the stars' motion can be determined. The pm uncertainty ellipse is determined following eq.~(9) in \citet{Lindegren2016} and eq.~(B.2) in \citet{Lindegren2018}. Since we are only interested in the relative motion of the stars, we do not consider any systematic offsets in the DR2 pms and parallaxes as described in \citet{Lindegren2018}. In total we find 22 stars that fulfill the above criteria, of which one is an O9.5V star\footnote{Based on the spectral type determined by \citet{VargasAlvarez2013} and confirmed in Paper 1.} (ID: 10198) and one is a B-type star (ID: 10048). The parameters of all 22 sources are listed in Tab.~\ref{tab:runaways}. In Fig.~\ref{fig:runaways} we show the locations of the runaway candidates. The green arrows point in the direction of the stars' motion in pm space, while the length of each vector represents the velocity. The ellipse at each vector's tip represents the pm error ellipse. The size of each point represents the magnitude of the RV (blue/red points indicate a relative RV toward/away from the Sun). The majority of runaway candidates show peculiar velocities in the range of 30--$100\,{\rm km}\,{\rm s}^{-1}$.
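The peculiar-velocity selection can be sketched as follows; the function names and the example values are ours, and the full analysis additionally propagates the pm error ellipse.

```python
import numpy as np

KAPPA = 4.740470   # km/s per (mas/yr * kpc)
D_WD2 = 4.16       # adopted distance of Wd2, kpc

def peculiar_velocity(rv, pm_ra, pm_dec, d_kpc=D_WD2):
    """Total space velocity [km/s] from the systemic-motion-subtracted RV
    [km/s] and proper motions [mas/yr]."""
    v_t = KAPPA * d_kpc * np.hypot(pm_ra, pm_dec)   # transverse velocity
    return np.hypot(rv, v_t)

def pm_reliable(pm_ra, pm_dec, err_ra, err_dec):
    """50% criterion on the total pm uncertainty (applied before systemic
    subtraction), so that the direction of motion is meaningful."""
    return np.hypot(err_ra, err_dec) / np.hypot(pm_ra, pm_dec) < 0.5

def is_runaway(rv, pm_ra, pm_dec, v_limit=30.0):
    return peculiar_velocity(rv, pm_ra, pm_dec) > v_limit
```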
Only three stars exceed this 30--$100\,{\rm km}\,{\rm s}^{-1}$ range: ID-13587\footnote{r.a.~$=10^\mathrm{h}24^\mathrm{m}07.91^\mathrm{s}$, dec.~$=-57^\circ45{}^\prime22.69{}^{\prime\prime}$} with $123.2 \pm 4.2\,{\rm km}\,{\rm s}^{-1}$, ID-16306\footnote{r.a.~$=10^\mathrm{h}24^\mathrm{m}11.74^\mathrm{s}$, dec.~$=-57^\circ45{}^\prime18.94{}^{\prime\prime}$} with $245.8 \pm 2.3\,{\rm km}\,{\rm s}^{-1}$, and ID-14542\footnote{r.a.~$=10^\mathrm{h}24^\mathrm{m}09.22^\mathrm{s}$, dec.~$=-57^\circ43{}^\prime57.67{}^{\prime\prime}$} with $546.1 \pm 5.3\,{\rm km}\,{\rm s}^{-1}$. \begin{figure}[htb] \plotone{runaways.png} \caption{The stars with an absolute peculiar velocity exceeding $30\,{\rm km}\,{\rm s}^{-1}$. The green arrows indicate the value and direction of the proper motions. At each arrow tip we show in green the pm error ellipse. The RVs are shown as red or blue circles, depending on whether the stars move away from us or toward us. The circle size indicates the magnitude of the RV. For display purposes we cut the arrow of the star with a peculiar velocity of $546.1\,{\rm km}\,{\rm s}^{-1}$.} \label{fig:runaways} \end{figure} \section{Discussion} \label{sec:discussion} In the following we discuss the results presented in this work on the internal dynamics of Wd2. \citet{Furukawa2009}, \citet{Ohama2010}, and \citet{Fukui2016} argued, based on the results of NANTEN2 CO sub-millimeter observations, that a cloud-cloud collision of two CO clouds at $4\,{\rm km}\,{\rm s}^{-1}$ and $16\,{\rm km}\,{\rm s}^{-1}$ may have triggered the formation of Wd2. Our RV analysis of the \ion{H}{2} region RCW49 surrounding Wd2 shows that its mean RV of $15.9\,{\rm km}\,{\rm s}^{-1}$ is in agreement with their conclusion. Furthermore, the cavity created by the ionizing fluxes of the many OB stars is expanding at a rate of $\sim 15\,{\rm km}\,{\rm s}^{-1}$ (see Fig.~\ref{fig:EBV_RVgas}).
We must note that projection effects and the limited survey area may influence that number, which explains the asymmetric RV distribution between the bottom and top 10\% of the velocity distribution ($\le -5.54\,{\rm km}\,{\rm s}^{-1}$ and $\ge 9.95\,{\rm km}\,{\rm s}^{-1}$, see Fig.~\ref{fig:EBV_RVgas_analysis}). Hence, the $\sim 15\,{\rm km}\,{\rm s}^{-1}$ should be considered a lower limit. Nevertheless, the expansion rate of $\sim 7{\rm -}10\,{\rm km}\,{\rm s}^{-1}$ is comparable with studies of other \ion{H}{2} regions, such as N44 \citep[$\sim 6 {\rm -} 11\,{\rm km}\,{\rm s}^{-1}$,][]{Naze2002,McLeod2019}, N11 and N180 \citep[$\sim 10\,{\rm km}\,{\rm s}^{-1}$ and $10 {\rm -} 20\,{\rm km}\,{\rm s}^{-1}$, respectively,][]{Naze2001}, and other Magellanic Cloud, Milky Way, and extragalactic \ion{H}{2} regions \citep[e.g.,][]{Murray2009,Mesa-Delgado2010,McLeod2020}. These numbers are also supported by a variety of numerical models and simulations \citep{Osterbrock1989,Bertoldi1990,Fujii2016,Haid2018}. The stellar and gas velocities are uncorrelated. We conclude that the \ion{H}{2} region is dominated by feedback processes, such as stellar winds and radiation pressure \citep[see e.g.,][]{Dale2015a}, and has lost the imprint of the original cloud collapse. In Paper 1 we demonstrated that cluster member stars show two distinct RV groups. Using only the short exposures led to a smaller sample of stars with a cutoff at higher masses, biasing the sample toward more massive stars. Combining the short and long exposures and using the full capacity of \dataset[MUSEpack]{\doi{10.5281/zenodo.3433996}} allows us to conduct a more sophisticated study of the stellar RV distribution. The three main results of this analysis are: 1) stars of different masses show different RV distributions; 2) the lower the stellar mass, the higher the velocity dispersion; and 3) the low-mass PMS stars show five distinct, spatially correlated RV groups.
The distributions of the O and B stars (one and two peaks, respectively; see Fig.~\ref{fig:RV_dist_grouped}) are in agreement with the two RV peaks detected in \citetalias{Zeidler2018}. The overall smaller RV dispersion of the OB stars is in good agreement with a highly mass-segregated star cluster. Two-body relaxation drives star clusters toward energy equipartition \citep[$m_i v_i^2 = \rm const. $, e.g.,][]{Spitzer1969, Parker2016}, which affects high-mass stars more quickly and more strongly. We must also note that, given the young age of Wd2 and the fact that the EFF profile is the best-fitting mass distribution, it is almost impossible that the cluster has reached energy equipartition. The low-mass PMS stars not only show an overall higher velocity dispersion, they also belong to five distinct velocity groups (see Fig.~\ref{fig:RV_dist_grouped}). While PMS groups 2--5 have very similar velocity dispersions ($\sim 2\,{\rm km}\,{\rm s}^{-1}$, see Tab.~\ref{tab:RV_stars_comp}), the velocity dispersion of PMS group 1, $5.18\,{\rm km}\,{\rm s}^{-1}$, is much higher. The analysis of the spatial locations of the stars in each velocity group reveals a correlation: in each case two groups (PMS3 and PMS5, and PMS2 and PMS4) coincide with the MC and NC, respectively. The stars of PMS group 1 are distributed throughout the cluster region in a halo-like structure with a higher concentration toward the center of Wd2. Observations and theoretical studies show that star and star cluster formation is a hierarchical process and that YSCs form through the merging of smaller subclusters \citep[e.g.,][]{McMillan2007a,Sabbi2007,Fujii2012,Banerjee2015,Fujii2016}. Given the age of Wd2 (1--2\,Myr), we conclude that the individual velocity groups are a remnant of the formation process of the MC and NC. While the feedback from the OB stars has destroyed this imprint in the \ion{H}{2} region, the stars are much less affected by feedback processes.
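As an aside on the equipartition argument above, the expected mass scaling of the velocity dispersion can be made explicit (shown here for illustration; $\sigma_i$ denotes the dispersion of stars of mass $m_i$):
\begin{equation}
m_i \sigma_i^2 = {\rm const.} \quad \Longrightarrow \quad \sigma_i \propto m_i^{-1/2} \, ,
\end{equation}
i.e., under full equipartition, stars ten times less massive would show a dispersion larger by a factor of $\sqrt{10} \approx 3.2$. The observed increase of the dispersion toward lower masses goes in this direction, although, as noted above, Wd2 is too young to have actually reached full equipartition.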
The fact that two groups are co-located with each clump may suggest either that these clumps were originally more substructured, or that this is a residual of the possible cloud-cloud collision that initiated the formation of Wd2. The latter is supported by the fact that the mean velocity differences of PMS2 and PMS4, and of PMS3 and PMS5, are $(11.50 \pm 0.41)\,{\rm km}\,{\rm s}^{-1}$ and $(12.90 \pm 0.24)\,{\rm km}\,{\rm s}^{-1}$, respectively, which is in agreement with the velocity difference of the two CO clouds \citep[$4\,{\rm km}\,{\rm s}^{-1}$ and $16\,{\rm km}\,{\rm s}^{-1}$,][]{Furukawa2009}. Although the two clumps of Wd2 are coeval \citep{Zeidler2015}, the cluster's structure has similarities with observations of the highly substructured ONC, which hosts several kinematically distinct stellar groups at different ages \citep{Zari2019a}. Star cluster populations throughout the Universe show a drop in the number of YSCs (typically $<10$\,Myr) compared to the older cluster population ($>100$\,Myr). This is often referred to as ``infant mortality'' \citep[e.g.,][]{Lada2003,Goodwin2006,PortegiesZwart2010} and means that some process destroys the majority of YSCs ($\sim90\%$) within the first few tens of Myr of their lives. This typically happens because these YSCs are not massive enough to survive internal (e.g., supernova explosions, rapid gas expulsion) and external (e.g., collisions and close encounters with GMCs in the Galactic disk, changes in the external tidal field) evolutionary effects. In this work we focus solely on the internal processes; a detailed discussion of external factors can be found in, e.g., \citet{Krumholz2019}. The minimum mass necessary to keep a self-gravitating system bound is reached when $M_{\rm dyn} = M_{\rm phot}$. The non-spherical, substructured nature of Wd2 makes this comparison challenging, and we introduced four different cases. In case 1.
we consider the two clumps as individual clusters, and the resulting dynamical masses, $M_{\rm dyn,MC} = (1.9 \pm 0.5)\cdot10^3\,{\rm M}_\odot$ and $M_{\rm dyn,NC} = (3.3 \pm 1.4)\cdot10^3\,{\rm M}_\odot$, are smaller than the individual photometric masses \citep[$M_{\rm phot,MC} = (2.8 \pm 0.6)\cdot10^4\,{\rm M}_\odot$ and $M_{\rm phot,NC} = (4.2 \pm 1.3)\cdot10^3\,{\rm M}_\odot$,][]{Zeidler2017}. This is an unrealistic scenario, however: the close proximity of the two clumps suggests that they will merge in the near future, supporting the hierarchical formation of YSCs \citep[e.g.,][]{Fujii2012}. Therefore, we must analyze the cluster system as a whole. In case 2. we assume the MC and NC are located at the same position, with $r_{\rm hm,Wd2} = (r_{\rm hm,MC} + r_{\rm hm,NC}) / 2$, and in case 3. the half-mass radius incorporates both the MC and the NC ($r_{\rm hm,Wd2} = r_{\rm hm,MC} + r_{\rm hm,NC} + d({\rm MC},{\rm NC})$). Both cases lead to a dynamical mass ($M_{\rm dyn,Wd2} = (7.5 \pm 1.9)\cdot10^4\,{\rm M}_\odot$ and $M_{\rm dyn,Wd2} = (4.4 \pm 1.1)\cdot10^5\,{\rm M}_\odot$, respectively) that greatly exceeds the photometric mass of Wd2. Even without any external perturbations, this suggests that Wd2 will disperse in the future. This is supported by the unreasonably small half-mass radius ($r_{\rm hm} = (0.13\pm0.04)\,{\rm pc}$; compare the mass surface density in Fig.~\ref{fig:spat_dist}) that would be required in case 4 to satisfy $M_{\rm dyn} = M_{\rm phot}$. Next we discuss the dynamical time scale. The ratio of the cluster's age to its dynamical time ($\Pi = {\rm age}/{t_{\rm dyn}}$) should be large if a system is bound, while for unbound systems it is expected to be small. A cut can be defined at $\Pi \sim 1$--3 but has to be used with caution when the ratio is close to this somewhat arbitrary value \citep{Pfalzner2009,Adamo2020}.
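Comparisons of this kind rest on a virial-type estimator of the form $M_{\rm dyn} = \eta\,\sigma^2 r_{\rm hm}/G$ and a dynamical (crossing) time $t_{\rm dyn} \simeq \sqrt{r_{\rm hm}^3/(G M)}$. The sketch below illustrates the arithmetic with hypothetical input values; $\eta \approx 9.75$ is one common choice of coefficient, and the exact estimator and coefficients used in this work may differ:

```python
import math

G = 4.302e-3  # gravitational constant in pc (km/s)^2 / M_sun

def dynamical_mass(sigma_kms, r_hm_pc, eta=9.75):
    """Virial-type dynamical mass M_dyn = eta * sigma^2 * r_hm / G (M_sun)."""
    return eta * sigma_kms**2 * r_hm_pc / G

def dynamical_time_myr(mass_msun, r_hm_pc):
    """Dynamical time t_dyn = sqrt(r_hm^3 / (G * M)), converted to Myr."""
    t_pc_per_kms = math.sqrt(r_hm_pc**3 / (G * mass_msun))  # in pc/(km/s)
    return t_pc_per_kms * 0.978  # 1 pc/(km/s) is roughly 0.978 Myr

# purely illustrative numbers, not the values of this paper
m_dyn = dynamical_mass(sigma_kms=2.0, r_hm_pc=0.5)
pi_ratio = 1.5 / dynamical_time_myr(m_dyn, 0.5)  # Pi = age / t_dyn for age = 1.5 Myr
```

Note that $\Pi$ computed this way for a system that exactly satisfies $M_{\rm dyn} = M_{\rm phot}$ depends only on the age, dispersion, and radius, which is why the case-by-case radii above drive the wide range of $\Pi$ values quoted below.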
For an age range of 1--2\,Myr for Wd2 \citep[e.g.,][]{Zeidler2015,Sabbi2020}, this ratio is $\Pi = 29$--58, $\Pi = 0.22$--0.43, and $\Pi = 9$--18 for cases 2., 3., and 4., respectively. $\Pi = 9$--18 (case 4.) agrees with the value of $\Pi = 8.48$ listed in Tab.~2 of \citet{PortegiesZwart2010}, although that value is based on a distance of 2.8\,kpc for Wd2 \citep{Pfalzner2009}. The large variation in this ratio does not allow a conclusive determination of whether Wd2 is bound. Yet, the analysis of the dynamical mass and the cluster's location in the Galactic disk leads to the conclusion that Wd2 will eventually dissolve. In addition to external perturbations, the many OB stars in the cluster center will explode as supernovae, ejecting the remaining gas within the cluster. This will abruptly change the cluster's gravitational potential, leading to an almost instantaneous expansion \citep[see e.g.,][]{Goodwin2006}, which will accelerate this dispersion. While not all (massive) stars form in clusters \citep[e.g.,][]{DeWit2005,Ward2018}, high-velocity runaway stars are the main source of massive O and B field stars \citep[e.g.,][]{Blaauw1961,PortegiesZwart2010,Fujii2011,Oh2016}. In the larger vicinity ($1.5 \times 1.5\,{\rm deg}^2$) of Wd2, \citet{Drew2018} detected 8--11 early O and Wolf-Rayet stars that are Wd2 runaway candidates. In this work, the combination of \textit{Gaia} DR2 proper motions and the high accuracy of the MUSE RVs allowed us to detect high-velocity runaway stars inside the cluster region. The majority of runaway candidates show peculiar velocities in the range of 30--$100\,{\rm km}\,{\rm s}^{-1}$, but three stars exceed this range: ID-13587 with $123.2 \pm 4.2\,{\rm km}\,{\rm s}^{-1}$, ID-16306 with $245.8 \pm 2.3\,{\rm km}\,{\rm s}^{-1}$, and ID-14542 with $546.1 \pm 5.3\,{\rm km}\,{\rm s}^{-1}$.
We do not detect any preferred ejection direction (see Fig.~\ref{fig:runaways}), which is consistent with these high stellar velocities being obtained through two-body interactions in the dense cluster center. Although the favored scenario for such ``kicks'' is the supernova explosion of a binary companion \citep[e.g.,][]{Hoogerwerf2001}, we concur with \citet{Drew2018} that this scenario is very unlikely here because Wd2 is too young. The massive runaway stars in particular are important for understanding the initial mass function (IMF). While studies show that some YSCs, including Wd2 \citep{Zeidler2017}, have a top-heavy present-day MF, YSCs that show a canonical IMF may have had a top-heavy IMF before the early ejection of some of their most massive members. \section{Summary and Conclusions} \label{sec:summary} The young age, the close proximity, and the spatial substructure make Wd2 an interesting target to study how YSCs form and evolve during their first few million years. In this third paper of the series, we used the unique capabilities of HST photometry, \textit{Gaia} pms, and VLT/MUSE RVs to analyze the internal kinematic structure of Wd2, with the following results: \begin{itemize} \item The current, already super-virial state of Wd2, the fact that the first supernovae are yet to happen, and its location in the MW disk, which makes Wd2 prone to interacting or even colliding with other GMCs, lead to the conclusion that the cluster is not massive enough to remain gravitationally bound. \item The cluster velocity dispersion increases with decreasing stellar mass, as expected for a highly mass-segregated cluster. \item The low-mass PMS stars form five distinct and statistically significant velocity groups. \item Two velocity groups are associated with each of the two clumps (MC and NC), while the fifth group forms a halo-like structure, in agreement with the formation of star clusters through mergers.
\item We detected 22 runaway candidates that may have been ejected from the cluster through two-body interactions caused by the high stellar density in the cluster center. \item The \ion{H}{2} region that surrounds Wd2 is expanding, driven by the radiation pressure and FUV flux of the many OB stars in the cluster center. Any imprint of the original cloud collapse has been destroyed. \end{itemize} Although this analysis already provides a multi-dimensional picture, future data releases of the \textit{Gaia} mission and multi-epoch HST observations to accurately measure stellar proper motions will allow truly 3D kinematic studies. \acknowledgments We thank S. Kamann for his continuous support in using \texttt{PampelMuse}. We also thank N. Miles for his support with the parallelization of python scripts, specifically using \texttt{dask}, and T. Morishita for his MCMC support. We thank P. Sonnentrucker for fruitful and interesting scientific discussions. We also thank the anonymous referee for their help to improve this paper. P.Z. acknowledges support by the Forschungsstipendium (ZE 1159/1-1) of the German Research Foundation. This work is partly supported by NASA through the NASA Hubble Fellowship grant HF2-26555 (AFM). This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. These observations are associated with program \#14807. Support for program \#14807 was provided by NASA through a grant from the Space Telescope Science Institute.
This work is based on observations obtained with the NASA/ESA \textit{Hubble} Space Telescope, at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. \vspace{5mm} \facilities{HST(WFC3,ACS), VLT(MUSE), \textit{Gaia}} \software{Astropy \citep{Astropy2018}, Dask \citep{DaskDevelopmentTeam2016}, ESORex \citep{Freudling2013}, pyspeckit \citep{Ginsburg2011}, Matplotlib \citep{Hunter2007}, MUSEpack \citep{Zeidler2019}, MUSE pipeline \citep[v.2.8.1,][]{Weilbacher2012,Weilbacher2015}, PampelMuse \citep{Kamann2013,Kamann2016}, pPXF \citep{Cappellari2004,Cappellari2017}}
\section{Introduction}\label{sec:introduction} Core-collapse supernovae (CCSNe) have been an active topic of research in theoretical and observational astrophysics for many decades. In recent years, hundreds of CCSNe have been discovered every year, mostly in distant galaxies \citep[e.g.,][]{Sako:2007ms,Leaman:2010kb}. However, these extragalactic CCSNe only provide a very indirect probe of the core-collapse process, and as a result, basic questions such as the mechanism by which CCSNe explode still remain a mystery \citep[for reviews, see, e.g.,][]{Kotake:2005zn,Janka:2012wk}. CCSNe in our Milky Way and neighboring galaxies, while rare, provide the opportunity to study CCSNe with a suite of new observables, including neutrinos \citep[e.g.,][]{Scholberg:2012id}, gravitational waves \citep[GW; e.g.,][]{Ott09,Kotake13}, and nuclear gamma rays \citep[e.g.,][]{1987ApJ...322..215G,Horiuchi:2010kq}. These are poised to more directly reveal the secrets lying deep inside the stellar photosphere. \begin{figure*} \begin{center} \includegraphics[scale=0.65, bb=0 220 600 600]{./fig1.pdf} \caption{ Time sequence for neutrino (red lines for $\nu_e$ and $\bar{\nu}_e$ and magenta line for $\nu_x$; $\nu_x$ represents a heavy lepton neutrino $\nu_\mu$, $\nu_\tau$, $\bar{\nu}_\mu$, or $\bar{\nu}_\tau$), GW (blue line), and electromagnetic (EM, black line) signals based on our neutrino-driven core-collapse simulation of a non-rotating $17 \, M_{\odot}$ progenitor. The solid lines are direct or indirect results of our CCSN simulation, whereas the dashed lines are from the literature or rough estimates. The left (right) panel $x$-axis shows time before (after) core bounce. Emissions of pre-CCSN neutrinos as well as the core-collapse neutrino burst are shown as labeled. For the EM signal, the optical output of the progenitor, the SBO emission, the optical plateau, and the decay tail are shown as labeled.
The GW luminosity is highly fluctuating during our simulation, and the blue shaded area presents the region between the two straight lines fitting the high and low peaks during 3 -- 5 seconds postbounce. The height of the curves does not reflect the energy output in each messenger; the total energy emitted after bounce in the form of anti-electron neutrinos, photons, and GWs is $\sim 6 \times 10^{52}$ erg, $\sim 4 \times 10^{49}$ erg, and $\sim 7 \times 10^{46}$ erg, respectively. See the text for details. } \label{fig:lumi} \end{center} \end{figure*} Neutrinos and GWs in particular provide unique probes of the explosion mechanism in realtime. Multiple neutrino detectors capable of detecting CCSN neutrinos are currently in operation. The best suited is \textit{Super-Kamiokande} (Super-K), which is expected to collect a rich dataset of neutrino events from future Galactic CCSNe, while IceCube has comparable statistical detection potential even though it cannot resolve individual neutrino events. Smaller detectors with sensitivity to CCSN neutrinos include, e.g., Baksan, Borexino, DayaBay, HALO, KamLAND, LVD, MiniBooNE, and NO$\nu$A \citep[for their detection potentials, see, e.g., the recent review][]{Mirizzi:2015eza}. In the near future, the \textit{Jiangmen Underground Neutrino Observatory} \citep[JUNO,][]{Li:2014qca} will augment Super-K and IceCube, and with future experiments such as \textit{Hyper-Kamiokande} \citep[Hyper-K,][]{Abe:2011ts} and the \textit{Deep Underground Neutrino Experiment} \citep[DUNE,][]{Acciarri:2015uup}, neutrino event statistics and neutrino flavor information will be dramatically improved. GW detectors such as Advanced LIGO (aLIGO), Advanced Virgo (adVirgo), and KAGRA are expected to be able to detect CCSN GWs out to a few kpc from the Earth, while future detectors such as the \textit{Einstein Telescope} (ET) can reach the entire Milky Way. In order to exploit these potentials, a multi-messenger observing strategy is necessary.
In this context, the neutrino signal is particularly important. The neutrino emission in fact starts before the core collapse even begins. Neutrinos emitted during the final stages of silicon burning can reach $\sim 5 \times 10^{50}$ erg for a massive star \citep{arnett89}, which can be detected by Hyper-K out to a few kpc away \citep{odrzywolek04}, thereby providing an early warning signal. During the first $\sim 10$ seconds after the core collapse, a copious $\sim 3\times 10^{53}$ erg of energy is emitted as neutrinos, as was confirmed in SN~1987A \citep{hirata1987,Bionta:1987qt,Sato-and-Suzuki}. In addition to signaling unambiguously the occurrence of a nearby core collapse, the detected neutrinos will point to the location of the core collapse within an error circle of a few to ten degrees on the sky \citep{Beacom:1998fj,Tomas:2003xn,Bueno:2003ei}. This pointing information is particularly important for electromagnetic signals, which remain a crucial component of studies of CCSNe in the Milky Way and nearby galaxies. A few hours to days after the core collapse, the supernova shock breaks out of the progenitor surface, suddenly releasing the photons behind the shock in a flash bright in UV and X-rays, known as shock breakout (SBO) emission \citep{matzner99,blinnikov2000,tominaga2009,gezari2010,Kistler:2012as}. Although the SBO signal provides important information about the CCSN, such as the radius of the progenitor, detection is difficult because of its short duration. Knowing where to anticipate the signal will dramatically improve its detection prospects. In addition to the SBO, more traditional studies of CCSN properties (e.g., energy, composition, velocity) and of its progenitor are important diagnostics of a CCSN, and a well-observed early light curve is important for accurate reconstruction of the CCSN evolution \citep[e.g.,][]{tominaga11}.
\begin{table*} \begin{center} \caption{Detectable signals, detectors, and their horizons}\label{tbl:summary} \begin{tabular}{clcccccccc} \hline & & \multicolumn{2}{c}{Extremely nearby event @ $O$(1 kpc)} & & \multicolumn{2}{c}{Galactic event @ $O$(10 kpc)} & & \multicolumn{2}{c}{Extragalactic event @ $O$(1 Mpc)} \\ & & \multicolumn{2}{c}{(see Section \ref{sec:near})} & & \multicolumn{2}{c}{(see Section \ref{sec:galactic})} & & \multicolumn{2}{c}{(see Section \ref{sec:extra})}\\ \cline{3-4} \cline{6-7} \cline{9-10} \multicolumn{2}{c}{signals} & detector & horizon && detector & horizon && detector & horizon\\ \hline neutrino & pre-SN $\bar{\nu}_{\rm e}$ & KamLAND & $< 1$ kpc && --- &&& ---\\ & & {\it HK (20XX-)} & $<3$ kpc \\ & $\bar{\nu}_{\rm e}$ burst & SK & Galaxy$^*$ && SK & Galaxy && {\it HK} & $<$ a few Mpc\\ & $\bar{\nu}_{\rm e}$ burst & {\it JUNO (201X-)} & Galaxy && {\it JUNO} & Galaxy && ---\\ & ${\nu}_{\rm e}$ burst & {\it DUNE (20XX-)} & Galaxy && {\it DUNE} & Galaxy && ---\\ \hline GW & Waveform$^\dagger$ & H-L-V-K$^\ddagger$ & $<$ several kpc \\ & Detection & & && H-L-V-K & $\lesssim 8.5$ kpc && {\it ET (20XX-)} & $\lesssim 100$ kpc\\ \hline EM & Optical & $< 1$ m class &&& 1--8 m class$^{**}$ &&& $< 1$ m class\\ & NIR & $< 1$ m class &&& $< 1$ m class &&& $< 1$ m class \\ \hline \multicolumn{10}{l}{$^*$Detectable throughout the Galaxy.}\\ \multicolumn{10}{l}{$^{**}$ $\sim 25$\% of SNe are too faint to be detected. (Section \ref{sec:galmm}, see also Figure \ref{fig:opticaldetection})}\\ \multicolumn{10}{l}{$^\dagger$Waveform means detection with sufficient signal-to-noise to unravel the GW waveform.}\\ \multicolumn{10}{l}{$^\ddagger$A network of aLIGO Hanford and Livingston, adVirgo, and KAGRA (Section \ref{sec:detector}).}\\ \end{tabular} \end{center} \end{table*} Already, various aspects of multi-messenger physics of Galactic and nearby CCSNe have been investigated.
For example, signal predictions for the neutrino and GW messengers have been investigated by many authors. In particular, the first $\sim 500$ milliseconds following core collapse is thought to be critical for a successful explosion, and has been studied with three-dimensional hydrodynamics and three-flavor neutrino transport by various authors \citep[e.g.,][]{Kuroda:2012nc,Ott:2012mr,Hanke13,Tamborra:2014hga}. On the other hand, the long-term neutrino emission characteristics have been investigated by several groups based on spherically symmetric general relativistic simulations using spectral three-flavor Boltzmann neutrino transport \citep[e.g.,][]{fischer10}. Similarly, while there are a number of multi-dimensional core-collapse simulations with GW predictions \citep[e.g.,][]{EMuller12,Kuroda14,yakunin15}, detailed detectability studies have been limited \citep[see, however,][]{hayama15,Gossan:2015xda}. \citet{leonor10} have discussed the utilization of a joint analysis of GW and neutrino data, particularly for electromagnetically dark (or failed) CCSNe. The importance of the SBO and its connection to multi-messenger observations has been investigated by, e.g., \cite{Kistler:2012as}. Recently, \cite{adams13} revisited the investigation of dust attenuation of Galactic CCSNe. Using modern models of the Galactic dust distribution, they present the distributions of the observed V-band and near-infrared (NIR) band magnitudes of Galactic CCSNe. They also emphasize the importance of neutrino warning and pointing, as well as the need for a wide-field IR detector for ensuring the detection of the early CCSN light curve. In this paper, we revisit the implementation of multi-messenger probes of CCSNe. We improve upon previous studies in several ways. First, we self-consistently determine predictions of neutrino, GW, and electromagnetic signals from a CCSN based on a long-term two-dimensional axisymmetric simulation (\citealt{nakamura15}; Nakamura et al.~in preparation).
To focus on the most common class of events, Type IIP supernovae, we adopt a non-rotating $17 M_\odot$ red supergiant star with solar metallicity. The simulation gives neutrino and GW signals for the first $\sim 7$ seconds after core bounce. We use the ensuing physical parameters (the radius and mass of the central remnant, the mass of synthesized nickel, and the explosion energy) to estimate the electromagnetic light curve. The time sequence of multi-messenger signals is summarized in Figure \ref{fig:lumi}. The pre-CCSN neutrino emission of \cite{odrzywolek04}, the neutrino burst and gravitational energy release predicted directly from the numerical simulation, and the analytic bolometric light curve of the SBO, plateau, and tail signals are shown and labeled. The neutrino burst luminosity is extrapolated up to 200 seconds based on the gravitational energy release rate from the shrinking protoneutron star. The GW energy emission rate is estimated using a quadrupole formula \citep{finn90,muellerb13}. In addition, we investigate tasks aimed at aiding the prospects of ensuring multi-messenger signal detections of a future CCSN. For this purpose we separately consider CCSNe occurring in three distance regimes: a CCSN occurring extremely nearby (less than 1 kpc away), at the Galactic Center ($\sim 10$ kpc, the most likely distance for a Galactic CCSN), and a CCSN in neighboring galaxies (within several Mpc). Although neutrino detection can unambiguously indicate a core-collapse event, prompt and precise pointing information is needed for the electromagnetic community to fully exploit the advanced warning. In cases where this fails, either due to the electromagnetic signal appearing too quickly or to the neutrino statistics limiting the pointing precision, telescopes could resort to searching a precompiled list of potential targets. This is particularly useful in the case of CCSNe in nearby galaxies, whose expected neutrino event rates do not provide compelling angular pointing.
To these ends, we compile a list of known Galactic red supergiant candidates in the vicinity of the Earth, as well as a list of nearby galaxies with estimates of their CCSN rates. The detectability of the multi-messenger signals is summarized in Table \ref{tbl:summary}. The rows show the multi-messenger signals, while the columns show the three distance regimes. Names in \textit{italic} denote future detectors. For neutrinos, only the dominant channels are shown: $\bar{\nu}_e$ for Super-K, Hyper-K, and JUNO, and $\nu_e$ for DUNE. The detectors, however, have other channels that allow further spectral and flavor information to be extracted \citep[for recent discussions, see, e.g.,][]{Laha:2013hva,Laha:2014yua}. In particular, the neutral current events at liquid scintillator detectors such as JUNO hold the important capability to measure the heavy lepton flavor neutrinos \citep{Beacom:2002hs,Dasgupta:2011wg}. An important point is the enhancement of detectability obtained by combining the signals. For example, we demonstrate that the core-bounce timing information provided by neutrinos can be used to improve the sensitivity of GW detection. Importantly, this increases the GW horizon from $\sim 2$ kpc to $\sim 8.5$ kpc (based on our numerical model), which opens up the Galactic Center region to GW detection even for non-rotating progenitors (GW signals from the collapse of rapidly rotating cores are circularly polarized (\citealt{hayama16}) and significantly stronger (e.g., \citealt{Kotake13})). The paper is organized as follows. In Section \ref{sec:setup}, we summarize our setup. We describe our core-collapse simulation, methods for calculating multi-messenger signals, and summarize the detectors we consider and the method for determining signal detections. We discuss the case of a CCSN in the Galactic Center in Section \ref{sec:galactic}, the case of an extremely nearby CCSN in Section \ref{sec:near}, and the case of a CCSN in neighboring galaxies in Section \ref{sec:extra}.
Sections \ref{sec:galactic}, \ref{sec:near}, and \ref{sec:extra} are all similarly organized in the following way: descriptions of the multi-messenger signals separately, followed by a discussion of the merits and the ideal procedures for their combination. In Section \ref{sec:summary}, we conclude with an overall discussion and summary of our results. \section{Setup} \label{sec:setup} In this section we describe the setup for exploring multi-messenger signals from CCSNe. We first describe the setup of our numerical CCSN calculation, followed by how the neutrino, GW, and optical signals are calculated. We then discuss multi-messenger detector considerations. \subsection{Supernova model} \label{sec:models} The basis of our CCSN model is a long-term simulation of an axisymmetric neutrino-driven explosion initiated from a non-rotating solar-metallicity progenitor of \citet{woosley02} with a zero-age main sequence (ZAMS) mass of $17 ~ M_{\odot}$. This progenitor model has a radius of $958 ~ R_{\odot}$ and a mass of $13.8 ~ M_{\odot}$ at the onset of gravitational collapse of the iron core. This progenitor retains its hydrogen envelope and is classified as a red supergiant star (RSG). It is therefore expected to explode as a Type II supernova. The numerical code we employ for the core-collapse simulation is the same as found in \citet{nakamura15}, except for some minor revisions. The spatial range considered in \citet{nakamura15} was limited to 5,000 km from the center. For the current study, we extend this to 100,000 km in order to study the late-phase evolution. This outer boundary corresponds to the bottom of the helium layer, and the interior includes $4.1 \, M_{\odot}$ of the progenitor. The model is computed on a spherical coordinate grid with a resolution of $n_r \times n_\theta = 1008 \times 128$ zones.
\begin{figure} \begin{center} \includegraphics[width=1.0\textwidth, bb=0 0 600 260]{./fig2.pdf} \caption{ Time evolution of the shock radius (top panel) and the explosion energy (bottom panel). For the shock radius, the maximum, mean, and minimum are shown as separate lines from top to bottom. } \label{fig:mshell} \end{center} \end{figure} For electron and anti-electron neutrinos, we use the isotropic diffusion source approximation \citep[IDSA,][]{idsa}, taking 20 energy bins with an upper bound of 300 MeV. For heavy-lepton neutrinos, we employ a leakage scheme. In the high-density regime, we use the equation of state (EOS) of \citet{lattimer91} with a nuclear incompressibility of $K = 220$ MeV. At low densities, we employ an EOS accounting for photons, electrons, positrons, and the ideal-gas contribution from silicon. During our long-term CCSN simulation, we follow explosive nucleosynthesis by solving a simple nuclear network consisting of 13 alpha-nuclei. Feedback from the composition change to the EOS is neglected, whereas the energy feedback from the nuclear reactions to the hydrodynamic evolution is taken into account, as in \citet{nakamura14a}. Figure \ref{fig:mshell} shows the time evolution of the shock radius, $R_{\rm sh}$, and the explosion energy, $E_{\rm exp}$. Here the explosion energy is estimated by summing the kinetic, internal, and gravitational energies in all zones where the sum of these energies and the radial velocity are positive. This CCSN model successfully revives its shock at 296 ms after bounce, and the shock reaches the bottom of the helium layer at the end of our simulation at 6.77 s. At the termination of the simulation, the explosion energy is $1.23 \times 10^{51}$ erg and the mass of $^{56}{\rm Ni}$ in unbound material (summed over the zones in the same manner as $E_{\rm exp}$) is $0.028 \, M_{\odot}$. Figure \ref{fig:snap} visualizes the entropy distribution at four time snapshots.
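The explosion-energy diagnostic described above (summing the kinetic, internal, and gravitational energies over all zones where both this sum and the radial velocity are positive) can be sketched as follows, with toy per-zone values standing in for the 2D simulation grid:

```python
import numpy as np

def explosion_energy(e_kin, e_int, e_grav, v_r):
    """Sum the total energy of all zones whose energy sum is positive
    and whose radial velocity is positive, i.e., unbound, outward-moving
    material. Energy arrays are per-zone energies in erg; v_r in cm/s."""
    e_tot = e_kin + e_int + e_grav  # e_grav is negative for bound material
    unbound = (e_tot > 0.0) & (v_r > 0.0)
    return e_tot[unbound].sum()

# toy example with three zones: only the first qualifies
# (zone 2 is bound, zone 3 moves inward)
e_kin = np.array([2.0e50, 1.0e50, 3.0e50])
e_int = np.array([1.0e50, 1.0e50, 1.0e50])
e_grav = np.array([-1.0e50, -3.0e50, -2.0e50])
v_r = np.array([1.0e8, 1.0e8, -1.0e8])
E_exp = explosion_energy(e_kin, e_int, e_grav, v_r)  # 2.0e50 erg
```

Note that this diagnostic is time-dependent: as the shock sweeps up material and more zones become unbound, the summed energy grows toward its asymptotic value, which is the behavior seen in the bottom panel of Figure \ref{fig:mshell}.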
The formation of hot bubbles (the regions colored red in the top left panel of Figure \ref{fig:snap}) is clear evidence of neutrino-driven convection, which revives the stalled bounce shock via the (so-called) neutrino-driven mechanism \citep{bethe90}. In our model, the runaway expansion of the stalled shock initiates at about 200 ms postbounce. In Figure \ref{fig:snap}, the shock front (white contours) exhibits a dipolar deformation along the poles, as previously identified in two-dimensional (2D) simulations \citep{bruenn13,summa15}, with the average shock radius growing with time (from the top left to the bottom right panel). After the shock passes the carbon core, the shape of the bipolar explosion (stronger toward the north pole than the south pole, as seen in the bottom right panel of Figure \ref{fig:snap}) remains nearly unchanged, and the shock expands in a roughly self-similar fashion thereafter. \begin{figure} \begin{center} \includegraphics[width=0.4\textwidth, bb= 90 0 500 500]{./fig3.pdf} \caption{ Entropy distributions in units of $k_{\rm B}$ per baryon for the CCSN model at four selected postbounce times indicated in the bottom right corners. The white contour represents the shock front. Note the different scales in each panel. } \label{fig:snap} \end{center} \end{figure} \subsection{Limitations of our model} \label{sec:limit} Throughout this paper we use the CCSN model described in the above section. The properties of the CCSN model, however, depend on the structure of the progenitor star as well as on the input physics of the simulation. The neutrino luminosity, for example, changes for different progenitor masses (or ``compactness'') and different choices of EOS \citep{oconnor13,nakamura15}.
Our simulations of 101 solar-metallicity progenitor models, employing the same numerical scheme and EOS as the current study, show diverse neutrino luminosities: $1.0$--$3.5 \times 10^{52}$ erg s$^{-1}$ at 0.5 s for anti-electron neutrinos, with the $17 ~ M_{\odot}$ model lying in between ($1.7 \times 10^{52}$ erg s$^{-1}$). Note that the mass of the progenitor ($17 ~ M_{\odot}$) is not extreme and the EOS we use is widely accepted among CCSN modelers. Numerical results also depend on non-physical parameters, such as the spatial resolution of the simulation. We have tested the effects of the angular resolution of the spatial grid on the CCSN properties. Besides the model with the fiducial resolution of 128 angular zones, we computed three additional models with 192, 256, and 320 angular zones. The numerical setup of these models was otherwise identical to the description in section \ref{sec:models}. For these models, differences in the shock dynamics as well as in CCSN properties such as the explosion energy were observed, as expected. The explosion energy, gravitational mass of the PNS, and nickel mass in unbound material, estimated at 5.5 s after bounce, range over $E_{\rm exp} = 0.42$--$0.98 \times 10^{51}$ erg, $M_{\rm PNS} = 1.78$--$1.91 \, M_{\odot}$, and $M_{\rm Ni} = 0.013$--$0.025 \, M_{\odot}$, respectively. Our resolution study does not show convergence, and the resolution changes do not produce a uniform trend, consistent with the recent 2D results of \citet{summa15}. This implies a strong influence of stochastic turbulent motions behind the shock front and the difficulty of predicting definitive CCSN properties for a given progenitor structure. What we present in this paper is thus just one example of a simulated CCSN, but it is nevertheless important to discuss multi-messenger signals focusing on one concrete example modeled by a state-of-the-art calculation.
\subsection{Multi-messenger emissions} \label{sec:multimessengeremission} We study three kinds of multi-messenger signals from the CCSN: neutrinos, GWs, and electromagnetic waves. Below, we briefly summarize the modeling of each signal, deferring the details to Appendix \ref{sec:appsignal}. {\it Neutrinos}: Neutrino signals are a direct outcome of our CCSN simulation, which takes into account neutrino transport in a self-consistent way. After emission from the collapsed core, neutrinos undergo flavor conversions during propagation to the Earth. In general, neutrino oscillation varies during the core-collapse evolution, and also depends on the neutrino mass hierarchy. Additional flavor mixing is induced by the coherent neutrino-neutrino forward scattering potential \citep[for a review, see, e.g.,][]{Duan:2010bg,Mirizzi:2015eza}, and exact predictions of collective flavor transformation in the post-bounce environment, when it applies, and at what energies, are still under debate. Therefore, we consider as simple scenarios the Mikheyev-Smirnov-Wolfenstein (MSW) matter effects in the normal and inverted mass hierarchies, as well as a no-mixing and a full flavor swap scenario. While these scenarios are not equally realistic, they are intended to cover the extremes (see Appendix \ref{sec:appnu} for details). {\it Gravitational waves}: We extract the GW signal from our simulations using the standard quadrupole formula \citep[e.g.,][]{Misner:1974qy}. For numerical convenience, we employ the so-called first-moment-of-momentum-divergence (FMD) formalism proposed by \cite{finn90} (see also modifications in \citet{Murphy:2009dx,muellerb13}). The angle between the symmetry axis of our 2D simulation and the line of sight of the observer is taken to be $\pi/2$ to maximize the GW amplitude (see Appendix \ref{sec:appgw} for details).
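For illustration, the plain quadrupole formula amounts to $h \sim (2G/c^4 D)\,\ddot{Q}$; a minimal sketch with an invented oscillating quadrupole moment follows. Note that this naive double time differentiation is numerically noisy, which is precisely why the FMD reformulation is used in practice; all numbers below are illustrative only:

```python
import numpy as np

G, c, kpc = 6.674e-8, 2.998e10, 3.086e21       # cgs units

def strain_from_quadrupole(q, dt, distance):
    """Strain via the standard quadrupole formula,
    h ~ (2 G / c^4 D) d^2 Q / dt^2, with the second time derivative
    taken by finite differences (np.gradient applied twice)."""
    d2q = np.gradient(np.gradient(q, dt), dt)
    return 2.0 * G / c**4 / distance * d2q

# Invented quadrupole moment: 1e43 g cm^2 oscillating at 200 Hz.
dt = 1.0e-5                                     # s
t = np.arange(0.0, 0.05, dt)
q = 1.0e43 * np.sin(2.0 * np.pi * 200.0 * t)    # g cm^2

h = strain_from_quadrupole(q, dt, 8.5 * kpc)    # peak |h| of order 1e-22
```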
{\it Electromagnetic signals}: The first electromagnetic signal from the CCSN is the emission from SBO, followed by the characteristic plateau phase for Type IIP supernovae. Given the large difference between the time scale covered by our CCSN model ($\sim 7$ s) and that of the emergence of the SBO signal (a few minutes to a day), it is difficult for current core-collapse simulations to directly reproduce the electromagnetic signals. Therefore, we use the final physical parameters of our simulation and employ analytic expressions for the luminosity and duration of the SBO \citep{matzner99} and of the plateau phase \citep{popov93} (see Appendix \ref{sec:appem} for details). \subsection{Detector considerations} \label{sec:detector} Below, we summarize the detector setup assumed in this paper. We will consider CCSNe in three distance regimes: occurring at the Galactic Center (section \ref{sec:galactic}), at a very nearby location (section \ref{sec:near}), and at extragalactic distances (section \ref{sec:extra}). As we discuss in the subsequent sections, the necessary detectors depend strongly on the distance regime. {\it Neutrinos}: We consider two water \v{C}erenkov neutrino detectors, Super-K\footnote{http://www-sk.icrr.u-tokyo.ac.jp/sk/index-e.html} and its successor Hyper-K \citep{Abe:2011ts}. Super-K has a fiducial volume of 32.5 kton in neutrino burst mode with a low energy threshold of 3 MeV. For Hyper-K, we assume a burst-mode fiducial volume of 740 kton and an energy threshold of 7 MeV \citep{Abe:2011ts}. The main detection channel is inverse beta decay (IBD) of $\bar{\nu}_e$, which provides good kinematic but poor angular information on the incoming neutrinos \citep{Vogel:1999zy}. We thus also consider electron scattering, which is sensitive to all neutrino flavors and is forward peaked \citep{Vogel:1989iv}. Details are included in Appendix \ref{sec:appnu}.
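For reference, the number of free-proton IBD targets scales linearly with the fiducial mass, since each water molecule contributes two hydrogen nuclei; a quick estimate for the two fiducial volumes quoted above:

```python
N_A = 6.022e23      # Avogadro's number [1/mol]
M_H2O = 18.015      # molar mass of water [g/mol]

def free_protons(fiducial_kton):
    """IBD targets in a water Cherenkov detector: each H2O molecule
    contributes two free protons (its hydrogen nuclei)."""
    mass_g = fiducial_kton * 1.0e9          # 1 kton = 1e9 g
    return 2.0 * (mass_g / M_H2O) * N_A

n_sk = free_protons(32.5)    # Super-K burst-mode volume: ~2.2e33 targets
n_hk = free_protons(740.0)   # assumed Hyper-K volume:    ~5e34 targets
```

These target counts set the overall normalization of the event rates computed in section \ref{sec:galnu}.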
{\it Gravitational waves}: We consider a network of four interferometric GW detectors: aLIGO\footnote{https://www.advancedligo.mit.edu} Hanford and Livingston, adVirgo\footnote{http://wwwcascina.virgo.infn.it/advirgo}, and KAGRA\footnote{http://gwcenter.icrr.u-tokyo.ac.jp/en} at their geophysical locations (H-L-V-K). For each detector the noise spectrum is taken from \citet{ligodesign}, \citet{advv}, and \citet{aso13}, and we assume the sensitivities expected to be realized by the year 2018. The detection of the gravitational waveform from a CCSN with this global network of GW detectors is determined by coherent network analysis \citep{Klimenko:2005,2010NJPh...12e3034S, hayama07, hayama15}. Coherent network analysis was introduced by \citet{1989PhRvD..40.3884G}; its basis is the reconstruction of arbitrary gravitational waveforms from linear combinations of the data of the GW detectors. For signal detection, reconstruction, and source identification, we perform Monte Carlo simulations using the {\tt RIDGE} pipeline \cite[see][for more details]{hayama07}. {\it Electromagnetic signals}: We consider electromagnetic observations at optical and NIR wavelengths. For the detection of the electromagnetic signals, interstellar extinction proves to be critical \citep{adams13}. Since NIR wavelengths are only mildly affected by extinction, Galactic CCSNe can be detected with small NIR telescopes. On the other hand, the optical brightness is significantly attenuated by dust extinction, which depends on the location of the CCSN event. In addition, since the positional localizations by GW and neutrino detections are typically larger than a few degrees, wide-field capability is also crucial in order not to miss the very first electromagnetic signals.
Therefore, for optical telescopes, we consider various sizes of wide-field telescopes/facilities, such as the All-Sky Automated Survey for SuperNovae \citep[ASAS-SN,][]{shappee14}, Evryscope \citep{law15}, Palomar Transient Factory \citep[PTF,][]{law09,rau09}, Pan-STARRS1 \citep[PS1, e.g.,][]{kaiser10}, Subaru/Hyper Suprime-Cam \citep[HSC,][]{miyazaki06,miyazaki12}, and Large Synoptic Survey Telescope \citep[LSST,][]{ivezic08}. \section{Galactic Center Supernovae} \label{sec:galactic} \subsection{Neutrino signal} \label{sec:galnu} A Galactic CCSN, occurring at a distance of $8.5$ kpc, will provide a rich trove of neutrino data in present and future neutrino detectors \citep[for a recent review, see, e.g.,][]{Mirizzi:2015eza}. The observed event rate in the detector is, \begin{equation} \frac{dN_e}{dT_e} = N_t \int^{\infty}_{E_{\rm min}} d E_\nu \frac{dF_\nu}{dE_\nu}(E_\nu) \frac{d \sigma}{dT_e}(E_\nu,T_e), \end{equation} where $N_t$ is the number of appropriate targets, $d\sigma/dT_e(E_\nu,T_e)$ is the differential cross section for the appropriate channel, and $dF_\nu/dE_\nu (E_\nu)$ is the neutrino flux observed at the Earth for the appropriate flavor. The threshold neutrino energy $E_{\rm min}$ is determined by the threshold energy of a lepton that can be detected. The neutrino flux observed at the Earth is, \begin{equation} \frac{dF_\nu}{dE_\nu}(E_\nu) = \frac{L_\nu}{4 \pi D^2 \langle E_\nu \rangle } f(E_\nu), \end{equation} where $L_\nu$ is the neutrino luminosity, $\langle E_\nu \rangle$ is the mean neutrino energy, $D$ is the distance to the CCSN, and $f(E_\nu)$ is the neutrino spectral distribution. 
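A minimal numerical sketch of the two expressions above follows, assuming the pinched Fermi--Dirac spectral shape specified next and the leading-order IBD cross section $\sigma \simeq 9.52\times10^{-44}\,(E_e p_e/{\rm MeV}^2)\ {\rm cm}^2$ of \citet{Vogel:1999zy}; the luminosity, mean energy, and target count are illustrative only:

```python
import numpy as np
from math import gamma, pi

MEV_ERG = 1.602e-6           # erg per MeV
DELTA, M_E = 1.293, 0.511    # n-p mass difference, electron mass [MeV]

def f_pinched(E, Emean, alpha):
    """Normalized pinched Fermi-Dirac-like spectrum f(E) [1/MeV]."""
    a1 = 1.0 + alpha
    return (a1**a1 / gamma(a1)) * E**alpha / Emean**a1 * np.exp(-a1 * E / Emean)

def sigma_ibd(E):
    """Leading-order IBD cross section (Vogel & Beacom) [cm^2]."""
    Ee = E - DELTA
    pe = np.sqrt(np.clip(Ee * Ee - M_E * M_E, 0.0, None))
    return 9.52e-44 * Ee * pe * (E > DELTA + M_E)

def ibd_rate(L, Emean, alpha, D, n_targets, nbin=4000):
    """Energy-integrated IBD event rate [events/s], N_t * int dE F(E) sigma(E)."""
    E = np.linspace(0.1, 100.0, nbin)                     # MeV
    dE = E[1] - E[0]
    dFdE = L / (4.0 * pi * D * D) / (Emean * MEV_ERG) * f_pinched(E, Emean, alpha)
    return n_targets * np.sum(dFdE * sigma_ibd(E)) * dE

# Illustrative accretion-phase numbers for a CCSN at 8.5 kpc, with a
# Hyper-K-like target count of ~5e34 free protons:
rate = ibd_rate(L=5.0e52, Emean=15.0, alpha=2.3,
                D=8.5 * 3.086e21, n_targets=5.0e34)       # a few 1e5 events/s
```

The resulting rate, a few $10^2$ events per ms, is consistent with the Hyper-K light curve shown in Figure \ref{fig:neutrino}.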
For the latter we adopt the pinched Fermi-Dirac spectrum \citep{Keil:2002in}, \begin{equation} f(E_\nu) = \frac{(1+\alpha)^{(1+\alpha)} }{ \Gamma(1+\alpha) } \frac{E_\nu^\alpha}{\langle E_\nu \rangle^{\alpha+1}} e^{-(1+\alpha) \frac{E_\nu} { \langle E_\nu \rangle}}, \end{equation} where $\alpha$ describes the pinching given by $\alpha = (\varepsilon_2 - 2 \varepsilon_1^2) / (\varepsilon_1^2 - \varepsilon_2)$ using $\varepsilon_n = \int^\infty_0 \varepsilon^n f(\varepsilon) \, d\varepsilon$; $\alpha = 2.3$ corresponds to a Fermi-Dirac distribution. This spectral form is a good fit to the neutrino spectra of CCSN simulations \citep[e.g.,][]{Tamborra:2012ac}. \begin{figure} \begin{center} \includegraphics[width=0.2\textwidth, bb=200 50 500 570]{./fig4.pdf} \caption{Expected neutrino events at Hyper-K in 1ms time bins, for a Galactic CCSN at a distance of 8.5 kpc. Inverse-beta events are shown by thick blue lines, while the lower thin red lines denote electron scattering events. For each, the three lines show three neutrino mixing scenarios: (i) MSW mixing (central solid lines), (ii) no mixing (upper lines), and (iii) full flavor swap (lower lines). This is done to show the range of possible mixing, accounting for both mass hierarchies, MSW, and collective neutrino oscillations. The MSW mixing line represents normal (inverted) mass hierarchy for $\bar{\nu}_e$ ($\nu_e$). The no mixing assumes the observed electron neutrinos are entirely composed of the same flavor at source (i.e., $\bar{\nu}_e = \bar{\nu}_e^0$ and ${\nu}_e = {\nu}_e^0$), while the full swap assumes they are entirely exchanged with the heavy flavor states at source (i.e., $\bar{\nu}_e = {\nu}_x^0$ and ${\nu}_e = {\nu}_x^0$); see appendix \ref{sec:appnu} for details. The inset shows the first 40 ms of the IBD signal at Hyper-K, shown in 2 ms bins, for the MSW mixing case. The error bars are root-N Poisson errors. 
} \label{fig:neutrino} \end{center} \end{figure} We obtain the values of $(L_\nu,\langle E_\nu \rangle, \alpha)$ for $\nu = (\nu_e, \bar{\nu}_e, \nu_x)$ from our core-collapse simulation\footnote{Except $\alpha$ for $\nu_x$. The leakage scheme is adopted for $\nu_x$ and spectral information is not available. Here we fix $\alpha = 2.3$ for $\nu_x$.}, and show the event rate per 1 ms bin expected for an 8.5 kpc event at the Hyper-K detector in Figure \ref{fig:neutrino}. The thick blue and thin red lines denote inverse beta and $e^-$-scattering events, respectively. The central solid lines denote MSW mixing, while the upper and lower lines denote the no-mixing and full-swap cases, respectively. The time-integrated number of events is 328,000--329,000 inverse-beta events and 11,300--11,700 $e^-$-scattering events. \begin{figure*} \begin{center} \begin{tabular}{cc} \includegraphics[width=0.1\textwidth, bb=770 0 970 800]{./fig5a.pdf} & \includegraphics[width=0.1\textwidth, bb=0 0 200 800]{./fig5b.pdf} \end{tabular} \caption{ The GW characteristics in the first 60 ms postbounce. Left: the injected (solid red line) and reconstructed (dashed blue) gravitational waveform. Right: the spectrogram of the reconstructed waveform in the frequency window [50, 500] Hz. Both panels are for a CCSN at a distance of 8.5 kpc. } \label{fig:tfgc} \end{center} \end{figure*} \begin{figure*} \begin{center} \begin{tabular}{cc} \includegraphics[width=0.1\textwidth, bb=770 0 970 730]{./fig6a.pdf} & \includegraphics[width=0.1\textwidth, bb=0 0 200 730]{./fig6b.pdf} \end{tabular} \caption{ SNR of the GW from a distance of 8.5 kpc estimated in time-frequency pixels. Left: analysis based on a GW search over more than 1 second without a neutrino trigger. Right: SNR in the small time-frequency window with the aid of the neutrino timing information, corresponding to the right panel of Figure \ref{fig:tfgc}. Note the different scale between the left and right panels.
} \label{fig:gcsnr} \end{center} \end{figure*} The high-statistics light curve will enable various studies, such as probes of the standing accretion shock instability \citep[SASI; see, e.g.,][for a recent review]{Foglizzo:2015dma}, which affects the luminosities and energies of the emitted neutrinos. The detectability of SASI has been explored by various studies focusing on both Hyper-K and IceCube for Galactic CCSNe \citep{Lund:2010kh,Lund:2012vm,Tamborra:2013laa,Tamborra:2014hga}. Our particular progenitor explodes predominantly by neutrino-driven convection and hence the SASI-modulated signal is weak. The neutronization burst will also allow unique probes of the core collapse \citep{Kachelriess:2004ds}. During this phase, the collapse is spherical enough that the bulb model, i.e., neutrinos emitted from a sharp spherical surface, is a reasonable approximation for neutrino mixing calculations \citep[e.g.,][]{Cherry:2012zw}. The time of core bounce can be estimated from the detected neutrinos. The inset of Figure \ref{fig:neutrino} shows the initial 40 ms of the IBD signal, where the error bars show the root-$N$ statistical error. The statistics will be sufficiently high that the bounce time can be estimated with high accuracy. The IceCube detector will similarly achieve a high-significance detection of the $\bar{\nu}_e$ light curve through IBD on free protons in ice \citep{Dighe:2003be,Kowarik:2009qr}. While individual neutrino events cannot be reconstructed at IceCube, a statistically significant ``glow'' is predicted during the passage of the supernova neutrino burst. For a Galactic supernova at 8.5 kpc, the bounce time is estimated to be measurable to within $\pm 3.0$ ms at 95\% confidence level \citep{Halzen:2009sm}. Neutrinos also provide pointing information. Using forward $e^-$ scattering events, a Galactic core-collapse event can be pointed to within an error circle of some $6^\circ$ with the current Super-K detector \citep{Beacom:1998fj}.
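The statistical part of this pointing accuracy scales roughly as the inverse square root of the number of useful events, and hence of the fiducial mass; the sketch below ignores backgrounds and the neutron-tagging improvements discussed next, which is why tagging-enhanced configurations do better than pure statistics would suggest:

```python
def scaled_pointing(theta_ref_deg, stats_ratio):
    """Purely statistical pointing error: theta scales as N_events**-0.5,
    so a reference accuracy is divided by sqrt of the event-count ratio."""
    return theta_ref_deg * stats_ratio ** -0.5

# Scaling the ~6 deg Super-K figure to the ~23x larger fiducial mass
# of Hyper-K (statistics only):
theta = scaled_pointing(6.0, 740.0 / 32.5)   # ~1.3 deg
```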
The main background is the nearly isotropic IBD signal. Coincidence tagging is a powerful way of distinguishing IBD from other events and backgrounds. With the current setup using pure water, the neutrons from IBD capture on protons, yielding a 2.2 MeV gamma that falls below the lowest-energy trigger threshold at Super-K. Using a forced trigger, tagging efficiencies of $\sim 20$\% have been obtained for diffuse supernova neutrino searches \cite{Zhang:2013tua}. Upon completion of its Gadolinium upgrade, Super-K is expected to be able to neutron-tag the IBD with $\sim 90$\% efficiency \citep{Beacom:2003nk}. The increased tagging efficiency will improve the pointing accuracy to $\sim 3^\circ$ at Super-K, and if Hyper-K similarly has 90\% tagging efficiency, to some $0.6^\circ$ \citep{Tomas:2003xn}. The DUNE detector is expected to provide comparable sensitivity. Scaling the results of \cite{Bueno:2003ei} to the 34 kton detector volume of DUNE and to our adopted 8.5 kpc distance, the expected uncertainty in supernova pointing is $\sim 1.5^\circ (34 {\rm \, kton} / 1.2 {\rm \, kton})^{-1/2} \sim 0.3^\circ$. As will be discussed below, the timing and pointing accuracy directly impacts the scientific benefits for multi-messenger studies of CCSNe. \subsection{Gravitational waves} \label{sec:galgw} In this section we discuss the detection prospects of GW signals from a CCSN occurring at a position close to the Galactic Center. We assume a GW source position at right ascension 17.76 hours and declination -27.07 degrees, and, as in the previous section, a distance from the Earth of 8.5 kpc. The event time is set to be UTC 2013-02-22 12:12:02, though it is unimportant for the detection efficiency of the detectors under consideration \citep{hayama15}. The gravitational waveform we employ as an input in this work is consistent with that obtained in previous GW studies \citep{Emuller04,muellerb13,yakunin15} based on self-consistent 2D models.
The overall waveform (top panel of Figure \ref{fig:reconBetel1}) is characterized by a sharp rise shortly after bounce ($\lesssim$ 50 ms postbounce) due to prompt convection, which is followed by spikes with increasing GW amplitudes (maximally $6 \times 10^{-20}$ in Figure \ref{fig:reconBetel1}) in the non-linear phase due to neutrino-driven convection (and SASI) ($\lesssim$ 600 ms postbounce for this model). Later on, as the neutrino-driven wind phase sets in (typically $\gtrsim 1$ s postbounce), the amplitude shows a gradual decrease. \begin{figure*} \begin{center} \begin{tabular}{cc} \includegraphics[width=0.48\linewidth, bb=0 0 200 150]{./fig7a.pdf} & \includegraphics[width=0.48\linewidth, bb=0 0 200 150]{./fig7b.pdf} \end{tabular} \end{center} \caption{ Optical (left panel) and NIR (right panel) signals from a Galactic CCSN at a distance of 8.5 kpc. The solid (blue) lines show schematic light curves of the plateau and tail phases based on the bolometric luminosity models given by \citet{popov93} and \citet{Nadyozhin94}. The expected light curves taking dust extinction into account are shown by dashed (red) lines with hatched (pink) regions covering the range of half-maximum probability. The extinction correction significantly reduces the optical brightness, whereas the NIR brightness is less affected. The spatial distribution of the apparent magnitudes is shown in Figure \ref{fig:appmag}. Note that the optical and NIR magnitudes of the SBO emission are expected to be fainter than the plateau magnitudes by about 1 mag and 2 mag, respectively \citep{tominaga11}. } \label{fig:OptNIR} \end{figure*} Since we make coordinated observations of the CCSN with Super-K and the GW detectors, the time of the core bounce can be estimated from the neutrino observations.
This enables us to restrict the time window, e.g., to $[0, 60]$ ms from the estimated time of the core bounce, as well as the frequency window to $[50, 500]$ Hz, which roughly covers the expected peak frequency range of prompt convection \cite[e.g.,][]{muellerb13}. We apply the gravitational waveform data, superposed on the noise signals, within the time--frequency window of $[0, 60]$ ms--$[50, 500]$ Hz to the analysis pipeline and obtain a reconstructed gravitational waveform. The left panel of Figure \ref{fig:tfgc} compares the injected gravitational waveform (solid red line) with the reconstructed one (dashed blue line), and the right panel shows the spectrogram of the reconstructed gravitational waveform. In this time--frequency window the noise dominates the reconstructed waveform, and it is hard to see any time-dependent waveform structure. In the spectrogram, however, a highlighted region, originating from the prompt convection, appears at $t \sim 20$ ms. This feature is observable. Figure \ref{fig:gcsnr} shows the signal-to-noise ratio (SNR) of time--frequency pixels, defined by \begin{equation} {\rm SNR} = \int \frac{\tilde{x}(f,t)}{S_{\mathrm{n}}(f,t)} \, \mathrm{d}t \, \mathrm{d}f, \end{equation} where $\tilde{x}(f,t)$ denotes the time--frequency pixel and $S_{\mathrm{n}}(f,t)$ the one-sided spectrogram density at a given frequency and time. The left panel shows the result without a neutrino trigger, i.e., a GW search over more than 1 second, while the right panel shows the result using the timing information from the neutrino observations. The maximal SNR of the prompt convection GW signal pixel increases from $\sim 3.5$ to $\sim 7.5$. The latter almost meets the conventional detection threshold. \subsection{Electromagnetic waves} \label{sec:galem} The first electromagnetic signal from a CCSN is the emission from SBO \citep[e.g.,][]{falk78,klein78,matzner99}.
The effective temperature of the SBO emission is estimated to be $\sim 4 \times 10^5$ K. Thus, the emission peaks at UV wavelengths. However, as discussed below, CCSNe at the Galactic Center are likely to suffer from large interstellar extinction. Therefore, the observed spectral distribution of the SBO is unlikely to peak at UV wavelengths, and observations in the optical and NIR are more promising \citep{adams13}. For Type IIP supernovae, the SBO emission at optical and NIR wavelengths is expected to be fainter than the main plateau emission, which we discuss below, by about 1 mag and 2 mag, respectively \citep{tominaga11}. After the cooling-envelope emission that follows the shock breakout \citep[e.g.,][]{chevalier08,nakar10,rabinak11}, Type IIP supernovae enter the plateau phase, lasting about 100 days. The luminosity and duration of the plateau can be estimated from equations (\ref{eq:Lplt})--(\ref{eq:tplt}) using $M_{\rm ej}$, $E_k$, and $R_0$. The solid (blue) lines in Figure \ref{fig:OptNIR} show schematic light curves from the plateau phase onward for our s17.0 model placed at a distance of 8.5 kpc. The luminosity is then converted to optical ($V$-band, 0.55 $\mu$m) and NIR ($K$-band, 2.2 $\mu$m) magnitudes assuming a bolometric correction of $M_{\rm bol} - M_V \simeq 0$ and a typical color of $M_V - M_K \simeq 1$ \citep{bersten09}. After the plateau phase, supernovae enter the radioactive tail phase, which is powered by the decay of $^{56}$Co\ (the daughter of $^{56}$Ni\ synthesized in the explosion). For the bolometric luminosity in this phase, we assume the radioactive decay luminosity for an ejected $^{56}$Ni\ mass of 0.028 $M_{\odot}$. The luminosity is then translated to optical and NIR magnitudes assuming $M_{\rm bol} - M_V \simeq 0$ and $M_V - M_K \simeq 2$ \citep{bersten09}.
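The conversion from bolometric luminosity to apparent optical and NIR magnitudes used here (distance modulus plus the quoted bolometric correction and color, with $A_K \sim 0.1 A_V$) can be sketched as follows; the luminosity and extinction values are illustrative only:

```python
import math

L_SUN, M_BOL_SUN = 3.846e33, 4.74    # solar luminosity [erg/s]; solar M_bol

def apparent_mags(L_bol, D_pc, A_V, BC_V=0.0, V_minus_K=1.0):
    """Apparent V and K magnitudes from a bolometric luminosity, using
    M_bol - M_V ~ BC_V, M_V - M_K ~ V_minus_K, and A_K ~ 0.1 A_V."""
    M_bol = M_BOL_SUN - 2.5 * math.log10(L_bol / L_SUN)
    mu = 5.0 * math.log10(D_pc / 10.0)            # distance modulus
    m_V = (M_bol - BC_V) + mu + A_V
    m_K = (M_bol - BC_V - V_minus_K) + mu + 0.1 * A_V
    return m_V, m_K

# Illustrative plateau luminosity of 1e42 erg/s at 8.5 kpc, behind
# A_V = 10 mag of dust: the optical is pushed to ~8 mag while the
# NIR remains very bright (negative magnitude).
m_V, m_K = apparent_mags(1.0e42, 8.5e3, 10.0)
```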
\begin{figure*} \begin{center} \begin{tabular}{cc} \includegraphics[scale=1.1, bb=0 0 500 200]{./fig8a.pdf} & \includegraphics[scale=1.1, bb=270 0 800 200]{./fig8b.pdf} \end{tabular} \caption{Top-down view of the Milky Way galaxy, colored by the expected CCSN plateau optical magnitude (left panel) and NIR magnitude (right). The SBO emission is fainter by 1--2 mag than shown. The Earth is positioned at $(x,y)=(-8.5,0)$ kpc. The NIR magnitude is always bright enough ($\lesssim$ 0 mag) irrespective of the location of the CCSN. On the other hand, the optical magnitude has a wide range depending on the location. In particular, the optical brightness of a CCSN at the opposite end of the Galaxy is significantly affected by dust extinction, since the line of sight to the CCSN traverses more dusty regions (orange and red colors). The white dashed and grey dashed circles represent constant distances from the Earth and the Galactic Center, respectively. The circles are labeled by their radii, as well as the percentages of the Galactic CCSN rate that they contain. For circles centered on the Earth, the pointing accuracy that can be achieved by Super-K (SK) or Super-K with Gadolinium (SK-Gd) for a CCSN at the circumference is also labeled. } \label{fig:appmag} \end{center} \end{figure*} Despite these crude assumptions, Figure \ref{fig:OptNIR} still captures the important properties of the optical and NIR emission of Galactic CCSNe. Both optical and NIR emission are extremely bright, $< -2 $ mag, if we do not take interstellar extinction into account (thick blue lines). However, since CCSNe are more likely to occur near the center of the Galaxy, the observed brightness will be significantly affected by interstellar extinction. Here we estimate the typical range of extinction as follows, in a manner similar to \citet{adams13}.
First, we assume a simple three-dimensional Galactic model for both the CCSN rate and the dust distribution: $\rho (R,z) \propto e^{-R/R_d}e^{-z/H}$, where $R$ is the Galactocentric radius, $z$ is the height above the Galactic plane, and $R_d$ and $H$ are the scale length and scale height of the Galactic disk, respectively. We adopt $R_d = 2.9$ kpc and $H = 110$ pc as in \citet{adams13}. The Sun is placed at $R=8.5$ kpc and $z = 24$ pc. The normalization of the dust distribution is determined such that the extinction toward the Galactic Center is $A_V = 30$ mag. The expected brightness maps under these assumptions are shown in Figure \ref{fig:appmag}. The optical brightness varies widely with position (left panel): it is brighter than 5 mag for nearby events, while fainter than 25 mag behind the Galactic Center. On the other hand, the NIR brightness is brighter than 5 mag at almost all locations, as was also demonstrated by \citet{adams13}. Note that Figure \ref{fig:appmag} can also be used as the brightness distribution of the SBO emission if the optical and NIR magnitudes are shifted down by about 1 mag and 2 mag, respectively. The expected observed light curves for CCSN events at $R<R_d$, which corresponds to about 27 \%\ of Galactic CCSNe, are shown in Figure \ref{fig:OptNIR}. The dashed (red) line shows the brightness of the most probable case and the hatched region covers the range of half-maximum probability. As shown in the figure, the optical brightness is significantly affected by the extinction. In contrast, the NIR brightness is much less affected, because the extinction is much smaller in the NIR, $A_K \sim 0.1 A_V$. Even in the worst case, the brightness at the plateau phase is as bright as 0--1 mag. This can be observed provided NIR telescopes/instruments capable of handling very bright objects are prepared (see discussions by \citealt{adams13}).
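A minimal sketch of this extinction estimate follows, integrating the double-exponential dust density along the line of sight and normalizing so that the Galactic Center sightline gives $A_V = 30$ mag (simple mean-value quadrature; positions in kpc, results illustrative only):

```python
import numpy as np

R_D, H = 2.9, 0.110          # disk scale length and height [kpc]
R_SUN, Z_SUN = 8.5, 0.024    # position of the Sun [kpc]
A_V_GC = 30.0                # adopted extinction toward the GC [mag]

def dust_density(x, y, z):
    """Double-exponential disk, rho ~ exp(-R/R_d) exp(-|z|/H)."""
    return np.exp(-np.hypot(x, y) / R_D) * np.exp(-np.abs(z) / H)

def a_v(x, y, z, n=4000):
    """Extinction from the Sun at (-R_SUN, 0, Z_SUN) to (x, y, z),
    normalized so the integral toward the Galactic Center is A_V_GC."""
    s = np.linspace(0.0, 1.0, n)
    def los_integral(tx, ty, tz):
        px = -R_SUN + s * (tx + R_SUN)
        py = s * ty
        pz = Z_SUN + s * (tz - Z_SUN)
        length = np.sqrt((tx + R_SUN)**2 + ty**2 + (tz - Z_SUN)**2)
        return dust_density(px, py, pz).mean() * length   # mean-value quadrature
    return A_V_GC * los_integral(x, y, z) / los_integral(0.0, 0.0, 0.0)

# Extinction toward a CCSN in front of vs. behind the Galactic Center:
a_near = a_v(-4.0, 0.0, 0.0)   # a few mag: still observable in the optical
a_far  = a_v( 4.0, 0.0, 0.0)   # tens of mag: optically very faint
```

This asymmetry between the near and far sides of the Galaxy is what produces the wide optical magnitude range in Figure \ref{fig:appmag}.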
\subsection{Multi-messenger observing strategy} \label{sec:galmm} As emphasized in \cite{adams13}, neutrinos provide crucial triggers addressing the important questions of \textit{IF} astronomers should look for a Milky Way CCSN, \textit{WHEN} they should look, and \textit{WHERE} they should look. Using our CCSN model based on a long-term numerical simulation, we follow the time series of events following a core collapse, emphasizing the new insights gained when GWs are included in the discussion. To alert the community \textit{IF} one should look for a Galactic supernova, an alarm system called the SuperNova Early Warning System (SNEWS) is in place \citep{Antonioli:2004zb}. The system consists of a network of neutrino detectors that triggers an alert when multiple detectors in different geographical locations report a burst within a certain period. The system is rapid, as it does not require human intervention. The Super-K detector is large enough that a core collapse in the Milky Way will yield a high-statistics neutrino burst detection \citep{abe16}. EGADS (Evaluating Gadolinium's Action on Detector Systems) is situated next to Super-K and is planned to be developed into a neutrino burst monitor that reports a burst within one minute while minimizing both false positives and false negatives \citep{vagins12,adams13}. Acquiring accurate timing information is important for improving the sensitivity of GW detection. Since the coherent network analysis does not assume a priori information about the waveform, data segments that do not contain GWs degrade the detection. Accurate timing information on the core bounce thus enables one to avoid such data. The neutrinos provide timing information with $1\sigma$ uncertainties of $\sim 10$ ms, and therefore the GW search can be started $10$ ms before the time indicated by the neutrino observation (hereafter referred to as $t_n$).
The first strong amplitude of the gravitational waveform from a CCSN is known from numerous simulations to be driven by prompt convection, and peaks within some $60$ ms after core bounce \citep[e.g.,][]{muellerb13}. The GW analysis time window can therefore be set to $[-10,50]$ ms from $t_n$, corresponding to a bounce time range of 0--60 ms. This procedure improves the maximal SNR of the prompt convection GW signal pixel from $\sim 3.5$ to $\sim 7.5$ (Figure \ref{fig:gcsnr}). Hence, with the coincident observation of neutrinos and GWs, the detection of the GW from a CCSN at the Galactic Center can be claimed even for the case of our non-rotating progenitor. Similarly, acquiring accurate pointing information critically impacts the feasibility of rapid electromagnetic follow-up (Figure \ref{fig:opticaldetection}). As discussed above, detection at NIR wavelengths is relatively straightforward, i.e., the source is almost always brighter than 5 mag. Such bright emission can be detected with small-aperture telescopes, whose fields of view are generally sufficiently large. At optical wavelengths, however, the expected dust attenuation makes the early detection of the EM signals challenging. The upper panel of Figure \ref{fig:opticaldetection} shows the brightness distribution of the plateau emission in the optical. Approximately 25\% of Galactic CCSNe will have apparent magnitudes $> 25$ mag, which is difficult to observe even with the largest (8 m) optical telescopes. A further 9\% will be $\sim 20$--25 mag and require a 4--8 meter class telescope, while the dominant 40\% with $\sim 5$--20 mag will require 1--2 meter class telescopes. The SBO emission at optical wavelengths is similar in brightness to the plateau (the SBO is expected to be fainter by only about 1 mag), and thus the upper panel of Figure \ref{fig:opticaldetection} roughly captures the brightness distribution of the SBO emission.
Since the duration of the SBO emission is only about 1 hr (see Section \ref{sec:multimessengeremission} and Appendix \ref{sec:appem}), continuous monitoring within the error circle is critical in order not to miss the very first SBO signal. Therefore, the positional uncertainty from the neutrino signal should be small compared with the telescope fields of view, which depend on the apertures of the telescopes. The bottom panel of Figure \ref{fig:opticaldetection} shows the fields of view of telescopes as a function of their typical magnitude limit. When the optical magnitude is brighter than $\sim 15$ mag, early detection is feasible thanks to the wide fields of view of small-aperture telescopes. Fainter cases are more challenging, however, since there is no $>1$ m telescope with a field of view larger than $\sim 6$ degrees in diameter (green, blue, and red regions in Figure \ref{fig:opticaldetection}). Therefore, position determination to better than 6 degrees in diameter is critical. To this end, the improved capability of a Gd-doped Super-K (SK-Gd), which enables localization within 3 degrees in diameter for CCSNe at the Galactic Center, would be highly impactful. Furthermore, it must be emphasized that the optical transient may follow the neutrino burst in as little as a few minutes for the collapse of a compact Wolf-Rayet progenitor. We conclude that pre-arranged follow-up programs will have an important role to play in securing the rise of the optical transient. \begin{figure} \begin{center} \includegraphics[width=0.38\textwidth, bb= 50 0 550 730]{./fig9.pdf} \caption{Detection prospects and strategies for the plateau signal of Galactic CCSNe. The top histogram shows the dust-attenuated plateau magnitudes with their respective percentages of the total CCSNe; 1.2\% and 24.5\% fall beyond the magnitude range shown. The optical magnitudes of the SBO emission are also similar to the plateau magnitudes \citep[the SBO emission is likely to be fainter by about 1 mag,][]{tominaga11}.
The bottom panel shows the typical magnitude ranges and fields of view (FOV) of various optical telescopes: ASAS-SN, Blanco, CFHT, Evryscope, LSST, Pan-STARRS, Subaru, and ZTF (shaded rectangles), as well as the naked eye (left-pointing arrow). See text for details. The error circle in CCSN pointing from the CCSN neutrino burst, for Super-K with and without Gadolinium, is represented by the horizontal dashed lines and labeled. } \label{fig:opticaldetection} \end{center} \end{figure} \section{Extremely Nearby Supernovae} \label{sec:near} \subsection{Neutrino} \label{sec:nearnu} An extremely nearby CCSN opens new probes of the core-collapse phenomenon. An example is the detection of pre-CCSN neutrinos arising from silicon burning \citep{odrzywolek04}. Neutrinos are generated by nuclear processes, including beta decay and e$^\pm$ captures, as well as thermal processes, including plasmon decay and pair annihilation \citep[e.g.,][]{Misiaszek:2005ax,Patton:2015sqt}. The beta decay and pair annihilation yield \textit{O}(10) MeV neutrinos, which can be detected by Hyper-K out to several kpc \citep{odrzywolek04}, and out to $\approx 660$ pc with the 1 kton KamLAND \citep{Asakura:2015bga}. This will provide information about the pre-collapse progenitor \citep{Kato:2015faa} and act as an advance warning of an imminent core collapse. \subsection{Gravitational Waves} \label{sec:neargw} \begin{figure} \begin{center} \includegraphics[width=1.15\linewidth,height=5.0cm,bb=30 0 1020 700]{./fig10a.pdf} \\ \includegraphics[width=.9\linewidth,height=5.5cm,bb=50 0 1000 800]{./fig10b.pdf} \\ \includegraphics[width=.9\linewidth,height=5.5cm,bb=50 0 1000 800]{./fig10c.pdf} \caption{ Reconstruction of the GW from Betelgeuse with the GW detector network H-L-V-K. In the top panel the red line shows the input signal, while the blue line shows the time series of the reconstructed signal. The reconstruction works very well, and the blue line is almost hidden behind the red line.
The central and bottom panels show the spectrograms of the reconstructed signals for the first 8 seconds and 1 second, respectively. } \label{fig:reconBetel1} \end{center} \end{figure} The detection of a GW from an extremely nearby CCSN is well within reach of advanced GW detectors such as aLIGO, adVirgo, and KAGRA. As an example, we perform simulations of the reconstruction of the GW from Betelgeuse, which is known to be in a late stage of stellar evolution and expected to explode as a supernova within the next million years. The right ascension and declination of Betelgeuse are 5.9 hours and 7.4 degrees, and the distance from the Earth is $197$ pc. We adopt the GW signal of our long-term simulation as the signal from the ``Betelgeuse supernova''. In our 2D model, only the $+$ mode of the polarization of the signal is considered. The generation of simulated data and the analysis are almost the same as in Section \ref{sec:galgw}, except that the data segment is set to $14$ s so that the signal lies in one data segment for simplicity. In the upper plot of Figure~\ref{fig:reconBetel1}, the red and blue lines represent the injected gravitational waveform and the reconstructed time series signal, respectively. The reconstructed signal matches the injected waveform very well. The middle and bottom panels show the spectrogram of the reconstructed signal on different time scales. To generate the spectrogram we set the data length of the fast Fourier transform to $20$ ms. In remarkable contrast to the Galactic Center event (the right panel of Figure \ref{fig:tfgc}), such an extremely nearby supernova clearly presents time-evolving features. The main component in the early phase, spreading around $[100-700]$ Hz, corresponds to the prompt-convection signal and disappears by 100 ms after bounce. Then a monotonically increasing component appears, accompanied by a sub-signal around $[200-300]$ Hz.
The increasing feature is well fitted by the Brunt-V\"ais\"al\"a frequency at the PNS surface \citep{muellerb13}. The sub-signal likely originates from hydrodynamical instabilities such as the SASI \citep{cerda13}. We inspect the detectability of these features in the same manner as in Section \ref{sec:galgw}. The left and right panels of Figure~\ref{fig:betelsnr} present the SNR in time-frequency pixels for the first 1 s and 8 s, respectively. In the first 1 s, the signals from the prompt convection are clearly detectable with SNR $> 100$. The post-prompt-convection signals become prominent at $\sim 200$ ms. There are some spots with SNR 10--100 up to $\sim 800$ ms. Thereafter the GW energy is distributed over broad-band frequency regions and the energy of each time-frequency tile decreases. The reconstructed signals, however, still retain the time-frequency structure around 200--300 Hz up to $\sim 6.5$ s, with SNR of 2--5. \begin{figure*} \begin{center} \begin{tabular}{cc} \includegraphics[width=0.45\textwidth,height=6cm,bb=0 0 800 600]{./fig11a.pdf} & \includegraphics[width=0.45\textwidth,height=6cm,bb=0 0 1000 765]{./fig11b.pdf} \end{tabular} \caption{ SNR of the GW from Betelgeuse for the first 1 second (left panel) and 8 seconds (right). The color is in logarithmic scale. } \label{fig:betelsnr} \end{center} \end{figure*} \subsection{Electromagnetic waves} \label{sec:nearem} It is obvious that an extremely nearby CCSN within $\lesssim 1$ kpc will become extremely bright in both optical and NIR wavelengths. At a distance of 1 kpc (distance modulus of 10 mag), the brightness of the plateau is expected to be about $-6$ mag. This is as bright as the recorded brightness of SN 1054, the supernova that produced the Crab nebula \citep{stephenson02}. As was the case for SN 1054, such a CCSN explosion would be visible even in daytime. Thanks to the close distance, it is unlikely that the brightness is heavily affected by interstellar extinction.
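The apparent magnitudes quoted in this section follow from the standard distance modulus $\mu = 5\log_{10}(d/10\,{\rm pc})$. A minimal sketch (the plateau absolute magnitude of $-16$ mag is an illustrative value, chosen to reproduce the $-6$ mag apparent brightness at 1 kpc quoted above):

```python
import math

def distance_modulus(d_pc):
    """Distance modulus mu = 5 log10(d / 10 pc)."""
    return 5.0 * math.log10(d_pc / 10.0)

def apparent_mag(abs_mag, d_pc):
    """Apparent magnitude at distance d_pc, ignoring extinction."""
    return abs_mag + distance_modulus(d_pc)

# Plateau of absolute magnitude ~ -16 mag seen from 1 kpc:
print(distance_modulus(1.0e3))     # 10.0 mag
print(apparent_mag(-16.0, 1.0e3))  # -6.0 mag, comparable to SN 1054
```

The same arithmetic gives a distance modulus of $28.5$ mag at 5 Mpc, consistent with the extragalactic estimates later in the paper.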
\subsection{Multi-messenger observing strategy} \label{sec:nearmm} Extremely nearby CCSNe would provide unique information on CCSNe as well as on pre-CCSN stellar evolution, and therefore it is important not to miss any possible observables. Thanks to the pre-CCSN neutrino signal, we have a pre-CCSN alert for extremely nearby events. For CCSN neutrinos and GWs, such an alert is critical to prepare the detectors in case they are not running due to, for example, maintenance work. For electromagnetic wave signals, the pre-CCSN signal provides a unique opportunity to observe massive stars just before their collapse. Although positional localization with pre-CCSN neutrinos is difficult, there are not many massive stars close to the Sun, and thus we can keep monitoring these stars after the detection of pre-CCSN neutrinos. Note that the small number of massive stars also means a low CCSN rate. For example, the CCSN rate within 660 pc (the detection range of 1 kton KamLAND) and 3 kpc (future Hyper-K) is about 0.1\% and 4\% of the Galactic CCSN rate, respectively. In Table \ref{tab:RSG}, we compile a list of nearby RSGs from the literature. RSGs associated with OB associations are taken from \citet{levesque05}, \citet{humphreys78}, and \citet{garmany92}, and other RSGs are supplemented from \citet{humphreys70}, \citet[Table 6]{humphreys72}, \citet{white78}, \citet[Table 20]{elias85}, and \citet[Tables 1 and 3]{jura90}. Note that it is difficult to distinguish RSGs and asymptotic giant branch (AGB) stars completely, since they overlap in luminosity (see e.g., \citealt{levesque10}). For the purpose of listing possible CCSN progenitor candidates, we have chosen our selection to be rather inclusive. The list thus includes stars with luminosity class II (or sometimes III), when the stars have been treated as RSG candidates in the literature, and the list may also include intermediate-mass stars.
Lists of Wolf-Rayet stars, which are the progenitors of stripped-envelope (H-poor) CCSNe, can be found in, e.g., \citet{vanderHucht01} and \citet{rosslowe15}. Our final RSG list consists of 212 RSG candidates. The typical distance limit is $\sim3$ kpc, which is fortunately similar to the distance limit for pre-CCSN neutrino detection with future neutrino detectors. These progenitor stars and their subsequent electromagnetic CCSN signals are bright enough that monitoring observations soon after the pre-CCSN neutrino detection are feasible with small telescopes (Figure \ref{fig:opticaldetection}). \begin{table*} \caption{List of nearby RSG candidates} \begin{tabular}{lllllllll} \hline Name & RA & Dec & Distance & $V$ mag & Spec. type & Note & Type ref$^{a}$ & Dist. ref$^{b}$ \\ & (J2000.0) & (J2000.0) & (kpc) & & & & & \\ \hline BD+61 8 & 00:09:36.37 & $+$62:40:04.1 & 2.40 & 9.49 & M1ep Ib + B & KN Cas & 1 & 2 \\ BD+59 38 & 00:21:24.29 & $+$59:57:11.2 & 2.09 & 9.67 & M2 I & MZ Cas & 1 & 1 \\ HD 236446 & 00:31:25.47 & $+$60:15:19.6 & 2.40 & 8.71 & M0 Ib & & 1 & 3 \\ TY Cas & 00:36:59.42 & $+$63:08:01.7 & 2.40 & 11.5 ($B$) & M6 & & 1 & 3 \\ V634 Cas & 00:49:33.53 & $+$64:46:59.1 & 2.51 & 10.46 & M1 Iab & & 1 & 3 \\ HD 4817 & 00:51:16.38 & $+$61:48:19.8 & 1.05 & 6.18 & K5 Ib & HR 237 & 4 & 4 \\ HD 4842 & 00:51:26.00 & $+$62:55:14.9 & 2.51 & 9.62 & M6/7III & VY Cas & 1 & ** \\ BD+62 190 & 01:03:15.35 & $+$63:05:10.8 & 2.51 & 9.95 & M5? & & 1 & ** \\ BD+62 207 & 01:08:19.93 & $+$63:35:11.2 & 2.51 & 9.82 & M4 Iab & HS Cas & 1 & 2 \\ HD 236697 & 01:19:53.62 & $+$58:18:30.7 & 2.51 & 8.62 & M1.5 I & V466 Cas & 1 & 1 \\ \hline \multicolumn{9}{l}{Only the first ten rows are shown in this table. The full list is available in online material.}\\ \multicolumn{7}{l}{\textit{http://th.nao.ac.jp/MEMBER/nakamura/2016multi/}}\\ \multicolumn{9}{l}{$^a$ References for the spectral type.}\\ \multicolumn{9}{l}{$^b$ References for the distance.
}\\ \multicolumn{9}{l}{References:}\\ \multicolumn{9}{l}{RSGs in OB associations: 1 \citet{levesque05}, 2 \citet{humphreys78}, 3 \citet{garmany92},}\\ \multicolumn{9}{l}{Other RSGs: 4 \citet{humphreys70}, 5 \citet{humphreys72}, 6 \citet{white78}, 7 \citet{elias85},}\\ \multicolumn{9}{l}{8 \citet{jura90}}\\ \multicolumn{9}{l}{ Additional references for the spectral type: * \citet{keenan89}, ** classification in SIMBAD}\\ \multicolumn{9}{l}{Additional references for the distance: $\dagger$ \citet{harper08}, $\ddagger$ \citet{choi08} }\\ \label{tab:RSG} \end{tabular} \end{table*} \section{Extragalactic Supernovae} \label{sec:extra} \subsection{Neutrino} \label{sec:extranu} Due to its large volume, Hyper-K opens the search for neutrinos from CCSNe occurring in nearby galaxies beyond the Milky Way and Andromeda Local Group \citep{Ando:2005ka}. Since the signal is not a statistically overwhelming burst, the detectability of the signal depends crucially on backgrounds. The main backgrounds in the \textit{O}(10) MeV energy range for pure water detectors are those due to invisible (sub-\v{C}erenkov) muon decays, atmospheric neutrinos, and spallation daughter decays \citep{Bays:2011si}. \begin{figure} \begin{center} \includegraphics[width=0.3\textwidth, bb=150 0 630 600]{./fig12.pdf} \caption{The probability of detecting at least a neutrino singlet (dot-dashed red) and at least a doublet (dashed red) from a CCSN at a distance $D$ with Hyper-K (left axis). A positron energy bin of 18--30 MeV has been assumed. The equivalents with a Gadolinium-doped Hyper-K are shown by blue lines as labeled. On the same plot, the cumulative CCSN rate within distance $D$ is shown (right axis), estimated from observed CCSNe within the past 15 years (solid black) and the range of estimates based on observed galaxy $B$-band and star-formation rates (blue shaded band).
The shaded band includes uncertainties arising from different indicators ($B$-band, H$\alpha$, and UV) as well as calibration factors (non-rotating and rotating stellar tracks). All estimates have been corrected for sky coverage incompleteness (see text). } \label{fig:local} \end{center} \end{figure} Driven by these backgrounds, the optimal energy window to search for CCSN neutrinos over backgrounds in pure water lies in the positron energy range $18$--30 MeV. Gadolinium-doped water reduces backgrounds and opens a larger energy range of approximately 12--38 MeV for the signal search \citep{Ando:2005ka}. To estimate the predicted signal, we adopt a fiducial volume of 0.56 Mton. The time-integrated total number of inverse beta decay events in the 18--30 MeV energy bin for our s17.0 model is then \begin{equation} N_{e^+} \simeq 8.5 \left( \frac{D}{1 \, {\rm Mpc}} \right)^{-2} \left( \frac{M_{\rm det}}{0.56 \, {\rm Mton}} \right), \end{equation} for the fiducial MSW mixing. Considering full mixing or no mixing results in a range of 8.2--8.6 events. With the fiducial MSW mixing, the probability of detecting two (or more) coincident neutrinos from a CCSN 3 Mpc away is approximately 24\%. The probability of detecting one or more neutrinos is 61\%. These improve to 50\% and 81\%, respectively, for the wider energy window allowed by a Gd-doped detector. In Figure \ref{fig:local}, we show the detection probability for at least a neutrino singlet (red dot-dashed) and at least a doublet (red dashed) from a CCSN in a nearby galaxy at distance $D$. The expected event count increases with a Gd-doped {\it Hyper-K}, and this is represented by the larger probabilities shown by the blue lines. We estimate the background rate based on SK-II, which has a lower photomultiplier tube (PMT) coverage than the standard Super-K configuration, perhaps similar to the eventual Hyper-K \citep{Abe:2011ts}.
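The singlet and doublet probabilities quoted above follow from Poisson counting statistics applied to the expected event count; a minimal sketch:

```python
import math

def p_at_least(k, mu):
    """Poisson probability of observing at least k events for mean mu."""
    return 1.0 - sum(math.exp(-mu) * mu**n / math.factorial(n) for n in range(k))

# Expected IBD events in the 18--30 MeV bin for the s17.0 model
# (fiducial mixing), scaling as D^-2 from 8.5 events at 1 Mpc:
mu = 8.5 / 3.0**2                    # CCSN at 3 Mpc
print(round(p_at_least(1, mu), 2))   # 0.61: one or more neutrinos
print(round(p_at_least(2, mu), 2))   # 0.24: two or more (doublet)
```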
After cuts, the number of remaining background events was 25 in the signal region (here defined as 18--30 MeV energy deposited and 38--50$^\circ$ \v{C}erenkov angle), for a fiducial volume of 22.5 kton and 794 days of exposure \citep{Bays:2011si}. This scales to a background rate of $\sim 286$ events per 0.56 Mton year. While this is a simplified estimate that ignores differences in, e.g., signal efficiency and rock overburden, Hyper-K detector capabilities are not yet finalized, and it provides a useful starting point. The accidental coincidence rate within a 10 second window, in which the CCSN neutrino signal will occur, is thus $\sim 2 \times (286 \, {\rm yr}^{-1})^2 \times (10 \, {\rm s}) = 0.05 \, {\rm yr}^{-1} $. In other words, two (or more) coincident events within a 10 second window would provide a compelling detection of CCSN neutrinos. For singlet event detection, the background rate is a significant limiting factor. However, two endeavors will enhance the identification of singlet signals. The first is using the optical discovery of the CCSN as a trigger, narrowing down the time window of the neutrino search. The background rate is $\sim 0.03$ hr$^{-1}$, and by narrowing the time window to a few hours, the expected number of background events will be a small multiple of 0.03 (see Section \ref{sec:extraem} for more details). The second endeavor is background rejection by Gadolinium doping. In pure water, the neutron left behind in IBD is captured on free protons and not registered by the detector. By doping the water with gadolinium, the gadolinium readily captures the neutron and produces a cascade of gamma-rays upon de-excitation, which can be detected by the PMTs \citep{Beacom:2003nk}. In this way, IBD signals and backgrounds can be separated with high efficiency. Dominant backgrounds could be reduced by a factor of $\sim$5, opening up the wider energy range for neutrino searches.
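The accidental-coincidence estimate quoted in this section can be reproduced directly; a minimal sketch (the year length used here is our choice):

```python
SECONDS_PER_YEAR = 3.156e7

bg_rate = 286.0 / SECONDS_PER_YEAR   # background events per second (0.56 Mton yr scale)
window = 10.0                        # coincidence window [s]

# Rate (per year) of a second background event falling within the
# 10 s window around any first background event:
accidentals = 2.0 * bg_rate**2 * window * SECONDS_PER_YEAR
print(round(accidentals, 2))         # 0.05 per year
```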
The occurrence rate of such opportunities is overlaid in Figure \ref{fig:local}, where the cumulative CCSN rate estimated directly from CCSN discoveries is shown (black solid line). We adopt the most recent 15 years, from 2000--2014 inclusive. We start with the collection of CCSNe in the period 2000--2011 studied in \cite{Horiuchi:2013bc}, and update it with recent discoveries: SN~2012A in NGC~3239, SN~2012aw in NGC~3351, SN~2014ec in NGC~3351, and SN~2013ej in NGC~628. For all, we take distances from the 11 Mpc H$\alpha$ and Ultraviolet Galaxy Survey \citep[11HUGS;][]{Kennicutt:2008ce} catalog of galaxies. There has been on average one CCSN per year within $\sim 6$ Mpc. Put another way, pure-water Hyper-K will have a $\sim 20$\% chance of detecting neutrino doublets or greater every $\sim 3$ years. The CCSN rate can also be estimated from local galaxy catalogs. For this, we adopt the CCSN estimates based on galaxy $B$-band magnitudes and galaxy type as reported by \cite{Li:2010kd}, as well as four estimates based on galaxy star-formation rate measurements: two indicators, H$\alpha$ and UV, each with two stellar evolutionary tracks, non-rotating and rotating. For all estimates, values were first obtained for a well-surveyed patch of sky corresponding to the survey area of the Local Volume Legacy Survey \citep[LVL;][]{Dale:2009zm}, then corrected for the sky area that was not surveyed. The sky-coverage correction factor is $1.98$ \citep{Dale:2009zm}. Since the scatter between different estimates is larger than the formal uncertainties of each estimate, we opt to show a band which encapsulates the estimates from our five methods. Figure \ref{fig:local} shows that, as pointed out previously \citep{Ando:2005ka,Kistler:2008us,Horiuchi:2013bc}, the directly estimated CCSN rate is larger than the indirect estimates. \subsection{Electromagnetic waves} \label{sec:extraem} \begin{table*} \caption{List of local galaxies within 5 Mpc ordered by their expected CCSN rates.
} \begin{tabular}{lcccccc} \hline Name & RA [$^\circ$] & dec [$^\circ$] & Dist [Mpc] & log($L_{H\alpha}$) & Abs.~$B$-band & CCSN rate [yr$^{-1}$] \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) \\ \hline NGC5236 & 204.253 & -29.866 & 4.47 & 41.25 & -20.26 & 0.024 \\ NGC253 & 011.888 & -25.288 & 3.94 & 40.99 & -20.00 & 0.013 \\ NGC3034 & 148.968 & +69.680 & 3.53 & 41.07 & -18.84 & 0.012 \\ NGC5128 & 201.365 & -43.019 & 3.66 & 40.81 & -20.47 & 0.009 \\ NGC3031 & 148.888 & +69.065 & 3.63 & 40.77 & -20.15 & 0.008 \\ Maffei2 & 040.479 & +59.604 & 3.30 & 40.76 & -20.22 & 0.008 \\ UGC2847 & 056.704 & +68.096 & 3.03 & 40.65 & -20.58 & 0.007 \\ NGC4945 & 196.365 & -49.468 & 3.60 & 40.75 & -19.26 & 0.006 \\ NGC2403 & 114.214 & +65.603 & 3.22 & 40.78 & -18.78 & 0.006 \\ NGC4449 & 187.047 & +44.093 & 4.21 & 40.71 & -18.17 & 0.005\\ \hline \multicolumn{7}{l}{Only the first ten rows are shown in this table. The full list is available in online material.}\\ \multicolumn{7}{l}{\textit{http://th.nao.ac.jp/MEMBER/nakamura/2016multi/}}\\ \multicolumn{7}{l}{The CCSN rates are derived from their $B$-band magnitudes and the Lick SNuB rates. }\\ \multicolumn{7}{l}{The galaxy catalog is based on the compilation of \cite{Karachentsev:2013ipr}.}\\ \multicolumn{7}{l}{The columns show (1) the galaxy name, (2,3) RA and declination, (4) distance, }\\ \multicolumn{7}{l}{(5) log of H$\alpha$ luminosity, (6) absolute $B$-band magnitude, (7) derived CCSN rate.} \\ \label{tab:galaxies} \end{tabular} \end{table*} We do not discuss the electromagnetic wave signals from extragalactic events in detail as they are already well observed. From the multi-messenger point of view, the detection of neutrinos from extragalactic events is a significant step for CCSN studies. As discussed in Section \ref{sec:extranu}, for singlet neutrino events, we can reach several Mpc with Hyper-K.
In order to firmly associate singlet neutrino detections with CCSNe, we need very early detection of electromagnetic emission, within $\lesssim 1$ day after the explosion. For an exceptionally well-observed CCSN, the early light curve can be used to estimate the collapse time to within a few hours \citep[e.g.,][]{Cowen:2009ev}, reducing the number of single background events at neutrino detectors. In other words, electromagnetic surveys act as triggers and background reduction for neutrino searches. For this purpose, the entire sky area should be densely monitored so as not to miss any CCSNe within $\sim 5$ Mpc. Recent transient surveys make it possible to obtain early light curves on a more regular basis. Since the absolute magnitude of the very early phase ($< 1$ day) of a CCSN is about $-14$ to $-16$ mag in optical \citep{tominaga11}, it will be as bright as $14.5$--$12.5$ mag at 5 Mpc (distance modulus of $28.5$ mag). This can be detected with small-aperture telescopes of $<1$ m diameter. Ongoing projects, such as ASAS-SN \citep[][]{shappee14} and Evryscope \citep{law15}, are suitable to monitor a wide area and to detect the very early phase of very nearby CCSNe (Figure \ref{fig:opticaldetection}). These will provide a significantly better census of CCSNe in nearby galaxies, each with very early light curves. In the following section, we discuss the observing strategy in more detail. \subsection{Multi-messenger observing strategy} \label{sec:extramm} With only several neutrino events, the neutrino burst from an extragalactic CCSN will reveal \emph{when} to look, but it is challenging to determine \emph{where} to look. One method to narrow down the survey area is to use a precompiled list of galaxies. For this purpose, the nearby galaxy catalog of \cite{Karachentsev:2013ipr} is the most complete. Figure \ref{fig:skyplot} shows the distribution of nearby galaxies in right ascension (the declination has been compressed). We label a few of the prominent clusters of galaxies.
These structures explain the sharp rise in the CCSN rate between 3 and 4 Mpc in Figure \ref{fig:local}. Also labeled is NGC 253, a prominent nearby star-forming galaxy. \begin{figure} \begin{center} \includegraphics[width=0.4\textwidth, bb = 50 50 500 570]{./fig13.pdf} \caption{Sky plot showing the positions of nearby galaxies within 4 Mpc of the Milky Way based on the galaxy catalog of \citet{Karachentsev:2013ipr}. Galaxies in the M81 and Cen A clusters are labeled, as are M31, the nearest neighboring spiral galaxy, and NGC 253, a prominent star-forming galaxy. } \label{fig:skyplot} \end{center} \end{figure} The CCSN rates of the galaxies will serve as a probabilistic guide to which galaxy hosted the CCSN, which can further strategically narrow the area to survey. To estimate the CCSN rate of each galaxy, we adopt a subset of the Karachentsev catalog for which the star formation rate can be observationally estimated. About half of the nearby galaxies have multi-band observations, ranging from Spitzer IR and H$\alpha$ to GALEX UV, providing diagnostics of the star formation rate. For example, the Karachentsev catalog contains 236 galaxies within 5 Mpc, compared to the H$\alpha$ catalog's 131 \citep{Kennicutt:2008ce}, the UV catalog's 154 \citep{GildePaz:2006bw}, and the IR catalog's 112 \citep{Dale:2009zm}. Fortunately, the incompleteness is most pronounced for the least massive galaxies, and does not dramatically impact the completeness of the most important galaxies. We use the H$\alpha$ luminosity to estimate the CCSN rate. The H$\alpha$ flux is first corrected for [NII] line contamination, underlying stellar absorption, and Galactic foreground extinction following \cite{Kennicutt:2008ce}. Table \ref{tab:galaxies} shows the derived H$\alpha$ luminosities.
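The H$\alpha$-based rate estimate can be sketched end-to-end, using the internal-attenuation correction and the calibration factors detailed below; the function names are ours and the snippet is illustrative only:

```python
import math

def halpha_attenuation(m_b):
    """Internal attenuation A_Halpha [mag] from the host M_B scaling (Lee et al.)."""
    if m_b > -14.5:
        return 0.10
    return 1.971 + 0.323 * m_b + 0.0134 * m_b**2

def ccsn_rate(log_l_halpha, m_b):
    """CCSN rate [1/yr] from observed Halpha luminosity [erg/s] and host M_B."""
    l_corr = 10.0**log_l_halpha * 10.0**(0.4 * halpha_attenuation(m_b))
    sfr = 7.9e-42 * l_corr   # Msun/yr; Salpeter IMF, 0.1--100 Msun
    return 0.0074 * sfr      # assuming all 8--100 Msun stars collapse

# NGC 5236 from Table: log L_Halpha = 41.25, M_B = -20.26
print(round(ccsn_rate(41.25, -20.26), 3))  # 0.024 per year
```

The result reproduces the tabulated rate for NGC 5236 to rounding precision.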
Internal attenuation is corrected for using the empirical scaling with the host galaxy $B$-band magnitude derived in \cite{Lee:2009by}, \begin{equation} A_{H\alpha}=\left\{ \begin{array}{ll} 0.10 & M_B > -14.5\\ 1.971+0.323M_B+0.0134M_B^2 & M_B \leq-14.5 . \end{array} \right. \end{equation} We adopt an H$\alpha$ to star formation calibration of $7.9 \times 10^{-42} \, {\rm M_\odot \, yr^{-1} / (erg \, s^{-1})}$ \citep[e.g.,][]{Kennicutt:1998zb}, which assumes a Salpeter initial mass function and a mass range of $0.1$--$100 M_\odot$. This factor can vary by $\sim 20$\% depending on stellar evolution codes and stellar rotation \citep{Horiuchi:2013bc}. Assuming that all stars in the mass range $8$--$100 M_\odot$ undergo core collapse, the star formation to CCSN rate conversion is $0.0074 /M_\odot$ for the Salpeter initial mass function. Changing the lower mass limit to $9.5 M_\odot$, as estimated by progenitor studies \citep{Smartt:2015sfa}, reduces this by $\sim 20$\%. The conversion is not strongly sensitive to the upper mass cut, e.g., adopting $40 M_\odot$ instead of $100 M_\odot$ only reduces the conversion by $\sim 8$\%. The resulting CCSN rates are summarized in the final column of Table \ref{tab:galaxies}, and a full list is available in the online materials. We find that the 10 (20) nearby galaxies with the largest expected CCSN rates contain more than 60\% (87\%) of the total CCSN rate in the entire 5 Mpc volume, highlighting the strategic importance of the nearby large galaxies. \section{Summary and Discussion} \label{sec:summary} In this paper, we consider multi-messenger signals of CCSNe (in neutrinos, GWs, and electromagnetic waves) occurring in three distance regimes (the Galactic Center, a very nearby location, and extragalactic distances).
Our signal predictions are self-consistently calculated based on a long-term simulation of an axisymmetric neutrino-driven core-collapse explosion \citep{nakamura15} initiated from a non-rotating solar-metallicity progenitor with a ZAMS mass of $17 \, M_{\odot}$. We consider both current and upcoming multi-messenger detectors and investigate the physics reach of a future CCSN event. The detectability of the multi-messenger signals is summarized in Table \ref{tbl:summary}, and our main findings are summarized below. For a CCSN occurring at the Galactic Center 8.5 kpc away, the neutrino burst will be an effective trigger for subsequent multi-messenger observations. The neutrino signal determines the time of core bounce to within several milliseconds. This high-accuracy estimate of the bounce time allows the time window of the GW analysis to be reduced, which greatly reduces the noise and makes it possible to obtain a signal-to-noise ratio larger than would be possible without the neutrino trigger (Figure \ref{fig:gcsnr}). Secondly, the neutrino signal provides pointing information. A pure-water or gadolinium-doped Super-K will reveal the CCSN direction within an error circle of $\sim 6^\circ$ or $\sim 3^\circ$, respectively. The latter is particularly important as it corresponds to the field of view of large ($>1$ m) modern optical telescopes (Figure \ref{fig:opticaldetection}), which are necessary for observing the SBO emission of CCSNe occurring at the Galactic Center and on the other side of the Milky Way (Figure \ref{fig:appmag}). In other words, the pointing accuracy will considerably facilitate optical followup. An extremely nearby CCSN, while rare, would provide unique information on the CCSN as well as on pre-CCSN stellar evolution. The pre-CCSN neutrino signals enable us to diagnose the core structure of the CCSN progenitor and to prepare detectors for observing the upcoming signals.
For such an extremely nearby event, high-precision waveform reconstruction is possible (Figure \ref{fig:reconBetel1}, top panel) using the coherent network analysis of aLIGO, adVirgo, and KAGRA. Continuous monitoring of nearby massive stars is essential so as not to miss such a rare event. To aid early followup, we compiled a list of nearby RSG candidates (Table \ref{tab:RSG}). At the other extreme, next-generation neutrino detectors will have sensitivity to the neutrino burst from CCSNe in nearby galaxies within a few Mpc. To assist early followup, we compiled a list of nearby galaxies (Table \ref{tab:galaxies}), including their CCSN rates estimated from the H$\alpha$-derived star formation rates. Within the horizon of next-generation detectors, the top 10 galaxies host more than 60\% of the total CCSN rate, so the number of galaxies which need followup is rather limited. For GW detection, the horizon does not extend beyond $\sim 100$ kpc for our 2D (non-rotating) model even using ET (Table \ref{tbl:summary}); however, it may extend to the $\sim$Mpc distance scale if non-axisymmetric instabilities with more efficient GW emission are captured in 3D models \citep{Kuroda14}. Our CCSN model is based on a 2D simulation using an approximation for neutrino transport. These numerical aspects, as well as the multi-messenger signal predictions, ultimately need to be investigated using self-consistent three-dimensional (3D) models with long-term evolution of at least $\sim 10$ s postbounce. Considering the rapid increase in supercomputing power \citep{Kotake12_ptep}, we speculate that we could have access to such systematic 3D models in the not too distant future. In the near future, detectors such as KAGRA and LSST will go online, and over the next decades, advanced detectors such as DUNE, ET, and Hyper-K will follow.
Collectively, these detectors promise to deliver rich data from a future Galactic CCSN, which will be indispensable for unraveling the nature of the CCSN mechanism. Much theoretical investigation is necessary to prepare for this golden event, and our long-term 2D simulation with self-consistent multi-messenger signal predictions is a first step in a string of necessary studies. \section*{Acknowledgements} We thank John Beacom and Shoichi Yamada for valuable discussions and helpful comments. This study was supported in part by the Grants-in-Aid for Scientific Research of the Japan Society for the Promotion of Science (JSPS, Nos. 24244036, 24740117, 26707013, 26870823, and 15H02075) and the Ministry of Education, Science and Culture of Japan (MEXT, Nos. 24103005, 24103006, 25103515, 26000005, 26104001, 15H00788, 15H00789, 15H01039). \bibliographystyle{mn}
\section{Introduction} \label{sec_intro} Mobile networks aim to support emerging \emph{high-rate} applications (e.g., virtual/augmented reality) and also to be \emph{adaptive} to spatio-temporal variability in wireless traffic demands~\cite{Kibilda_spatio-temporal_traffic}. \emph{Dynamic} positioning of mobile relays, such as vehicular road side units (RSUs) mounted on trucks, can achieve the expected \emph{high data rate} along with \emph{dynamic adaptation} to varying demands. Toward designing such \emph{relay-assisted} wireless networks, two \emph{challenges} are considered in this paper. The first challenge focuses on maximizing the \emph{network} flow rate through the optimal \emph{positioning} of relays, which can enable \emph{simultaneous} relay-assisted parallel routes. The second challenge focuses on maximizing the \emph{link} data rate by designing multi-antenna \emph{beamforming codebooks} that depend on relay positions and spatially-correlated wireless channels. In this paper, we propose \emph{brain-inspired geometric} approaches to tackle these two challenges. \subsection{Relay Positioning and Maximum Flow} Maximizing the algebraic connectivity of network graphs has been utilized in finding positions of 2-dimensional (2-d) relays~\cite{Ibrahim_TWC_Connectivity_2009} or 3-d unmanned aerial vehicles (UAVs)~\cite{Mai_2019_Elsevier_UAV_IAB, rahmati2019dynamic}. While maximizing the algebraic connectivity will naturally increase the network flow rate~\cite{rahmati2019dynamic}, it does not achieve the maximum flow rate, as we will show later in this paper. Therefore, we aim to find an alternative optimization metric that can be utilized in positioning relays toward achieving a higher network flow rate. To do so, we turn our attention to brain networks and \emph{Riemannian geometry}~\cite{2019_Intro_RiemGeometry}.
Riemannian geometry has been considered in classifying functional connectivity patterns associated with unique brain tasks (e.g., memory or subtraction)~\cite{2016_Riem_Brain_Decoding}. Such brain classification serves as the main \emph{inspiration} for this paper as follows. Having two functional connectivity patterns that are distinguishable from each other over Riemannian manifolds~\cite{2018_Lee_Book_RiemManifolds} resembles having two \emph{parallel} data flows, which in turn leads to a higher network flow rate. We note that covariance matrices of connectivity paths are represented over Riemannian manifolds given their \emph{symmetric positive definite} (SPD) characteristics. Consequently, Riemannian metrics, such as the Log-Euclidean metric (LEM)~\cite{2006_LEM_Arsigny}, have been utilized for task classification. In this paper, we geometrically represent regularized \emph{Laplacian} matrices of relay-dependent network graphs, which are SPD, over Riemannian manifolds. Consequently, and inspired by the LEM-based brain-task classification, we identify the optimum relay positions as the ones achieving maximum LEM compared to a baseline network with no relays. We show that the proposed LEM-based relay positioning scheme almost achieves the \emph{maximum flow rate} and can serve as a low-complexity solution for the maximum flow problem~\cite{Edmonds_MFP}. Moreover, we identify parallel (independent) multi-hop routes as the ones with maximum LEM among each other. \subsection{Beamforming Codebook Design} As the maximum \emph{network} flow rate is achieved through the LEM-based relay positioning, we turn our attention to maximizing the relay-user \emph{link} rate. Each optimally-positioned multi-antenna relay will communicate with each of its adjacent users by first estimating its channel vector and then assigning a suitable beamforming codeword. Generally, the relative position of each relay to its users will vary from one relay to the other.
Therefore, taking into consideration the practical scenario of \emph{spatially-correlated fading channels}~\cite{2016_Debbah_MassiveMIMO_Covariance}, we aim to design a \emph{unique} beamforming codebook for each relay. In designing relay-dependent beamforming codebooks, we represent the channel covariance matrices of relay-user spatially-correlated fading channels, which are SPD, over Riemannian manifolds. Each user's channel follows an exponentially correlated fading model~\cite{Clerckx_2008}, which depends on the user's relative location to the relay. We note that Riemannian geometry has recently been considered in designing beamforming vectors~\cite{2019_Fan_Riem_MassiveMIMO, 2017_Chen_Riem_MUI, 2016_Letaief_Riem_mmWave_Precoding}. While these works present novel geometric perspectives on beamforming design, they have not utilized the \emph{SPD characteristics} of correlated channels. In this paper, we propose to employ a \emph{LEM-based geometric support vector machine} (SVM) model to learn the channel covariance matrices of different users over the Riemannian manifold. Once distinct groups of these matrices are identified, beamforming codewords are selected to nearly match these channel groups. Any newly estimated channel is classified into one of the groups, based on the LEM distance, and assigned a matched beamforming codeword accordingly. We show that the proposed machine learning model requires only a small number of training samples to approach the link capacity. \section{System Model}\label{sec_mod} In this section, we present brief preliminaries on Riemannian geometry and then introduce the system model. Topological manifolds are spaces that locally resemble the $N$-d real coordinate space $\mathbb{R}^N$, i.e., they can be locally parameterized by $N$ coordinates. Differential manifolds~\cite{2016_DiffGeom_Carmo} are topological ones with smooth changes of coordinates (maps from $\mathbb{R}^N$ to $\mathbb{R}^N$). 
The tangent space of a differential manifold at a point is the vector space of all vectors tangent to the manifold at that point. \emph{Riemannian} geometry is the study of Riemannian manifolds~\cite{2019_Intro_RiemGeometry}, which are differential manifolds equipped with a metric. A Riemannian metric determines an \emph{inner product} on each tangent space, and it measures the \emph{distances} and angles of curves on the Riemannian manifold. Finally, SPD matrices lie on a Riemannian manifold, and the LEM is a valid Riemannian metric for SPD matrices~\cite{2006_LEM_Arsigny}. A given network can be represented as an undirected finite graph $G(V,E)$, where $V=\{v_1, v_2, \cdots, v_n\}$ is the set of all $n$ nodes and $E$ is the set of all $m$ edges. Considering the standard \emph{disk model}, two nodes are connected if their inter-distance is less than a specific threshold $R$. For an edge $l$, $1 \leq l \leq m$, connecting nodes $\{v_i,v_j\}\in V$, define the edge vector ${\mathbf{a_{l}}} \in \mathbb{R}^n$, whose $i$-th and $j$-th entries are given by $a_{l,i} = 1$ and $a_{l,j} = -1$, respectively, with all other entries zero. The incidence matrix $\mathbf{A} \in {\mathbb{R}^{n \times m}}$ of the graph $G$ is the matrix whose $l$-th column is $\mathbf{a_{l}}$. The \emph{Laplacian} matrix $\mathbf{L} \in \mathbb{R}^{n \times n}$ is defined as $\mathbf{L} = \mathbf{A}\,\mathbf{A}^T$, where $T$ denotes matrix transposition. Laplacian matrices are \emph{positive semi-definite}, and their second smallest eigenvalue, $\lambda_2(\mathbf{L})$, is the graph \emph{algebraic connectivity}~\cite{2006_Ghosh_Fiedler}. 
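As a concrete check of these definitions, the short sketch below builds the Laplacian $\mathbf{L} = \mathbf{A}\,\mathbf{A}^T$ from an edge list and verifies its positive semi-definiteness and algebraic connectivity. The toy graph and helper names are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def laplacian_from_edges(n, edges):
    """Build L = A A^T from the edge list of an undirected graph on n nodes."""
    A = np.zeros((n, len(edges)))
    for l, (i, j) in enumerate(edges):
        A[i, l], A[j, l] = 1.0, -1.0   # edge vector a_l
    return A @ A.T

# Path 0-1-2-3 with an extra chord (0, 2)
L = laplacian_from_edges(4, [(0, 1), (1, 2), (2, 3), (0, 2)])
eigvals = np.sort(np.linalg.eigvalsh(L))
print(abs(eigvals[0]) < 1e-9)   # smallest eigenvalue is 0: positive semi-definite
print(eigvals[1] > 0)           # lambda_2 > 0: the graph is connected
```

Both checks print `True` for any connected graph, matching the stated properties of $\mathbf{L}$.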
Given that Laplacian matrices are positive semi-definite, a simple regularization step~\cite{2015_Riem_Brain_Class} is implemented to produce a \emph{regularized} SPD Laplacian matrix as \begin{equation} \mathbf{S} = \mathbf{L} + \gamma \, \mathbf{I} = \mathbf{A}\,\mathbf{A}^T+ \gamma \, \mathbf{I} \,, \label{S_matrix} \end{equation} where $\mathbf{I}$ is the $n \times n$ identity matrix and $\gamma$ is a small regularization scalar (e.g., $\gamma=0.5$). The regularized SPD Laplacian matrix $\mathbf{S}$ lies on a Riemannian manifold, and the LEM between two SPD matrices, $\mathbf{S}_1$ and $\mathbf{S}_2$, can be calculated as~\cite{2006_LEM_Arsigny} \begin{equation} \mathcal{D}(\mathbf{S}_1,\mathbf{S}_2)= ||\log(\mathbf{S}_1) - \log(\mathbf{S}_2)||_F^2 \;, \label{eqn_LEM} \end{equation} where $||\,.\,||_F$ denotes the matrix Frobenius norm. \section{Relay-based Maximum Flow Problem Formulation} \label{prob_form} In this section, we formulate the problem of relay positioning as a maximum flow problem. We first assume that there exist only $Z$ candidate locations for the deployment of the available $K$ relays, where $K < Z$. Let $p_k$ be the $(x,y)$ position of the $k$-th relay and $\mathbf{P} = [p_1, p_2, \cdots, p_K]^T$ be the $K \times 2$ matrix containing the positions of all $K$ relays. Deploying a relay in a potential location creates edges between the relay and the network nodes that are within distance $R$ of the relay location (disk model). Consequently, new edges are added to the original network, leading to a new set of edges, denoted as $E(\mathbf{P})$. Furthermore, the capacity of link $(i,j) \in E(\mathbf{P})$ between nodes $\{v_i, v_j\} \in V$, denoted as $f_{i,j}$, is either $1$ if their inter-distance is less than the disk radius $R$, or $0$ otherwise. 
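The regularization in (\ref{S_matrix}) and the metric in (\ref{eqn_LEM}) can be sketched in a few lines; the eigendecomposition-based matrix logarithm (valid for SPD matrices) and the two example graphs are illustrative assumptions.

```python
import numpy as np

def spd_log(S):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def lem(S1, S2):
    """Log-Euclidean metric: ||log(S1) - log(S2)||_F^2."""
    return np.linalg.norm(spd_log(S1) - spd_log(S2), 'fro') ** 2

gamma = 0.5
L_path = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])   # path 0-1-2
L_tri  = np.array([[2., -1., -1.], [-1., 2., -1.], [-1., -1., 2.]]) # triangle
S1, S2 = L_path + gamma * np.eye(3), L_tri + gamma * np.eye(3)
print(lem(S1, S1) < 1e-12)   # a matrix has zero distance to itself
print(lem(S1, S2) > 0)       # distinct graphs map to distinct manifold points
```

Both checks print `True`; the metric is also symmetric in its two arguments, as a valid distance should be.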
For a given source node, $s \in V$, and relay positions, $\mathbf{P}$, the maximum flow problem is formulated as \begin{align} \max & \; \; f(s, \mathbf{P})\;= \sum_{j:(s,j)\in E(\mathbf{P})} f_{s,j} \;,\nonumber \\ \text{s.t.} &\; \sum_{i:(i,j)\in E(\mathbf{P})} f_{i,j} - \sum_{u:(j,u)\in E(\mathbf{P})} f_{j,u} = 0 \;, \; \forall j \in V \backslash \{s,d\} \,, \nonumber \\ & f_{i,j} \in \{0,1\}\;, \; \forall (i,j) \in E(\mathbf{P}), \label{opt_MFP} \end{align} in which we aim to maximize the amount of flow generated from the source, $s$, towards its destination, $d \in V$, subject to both the conservation and capacity conditions. By considering every node in the graph as a potential source node, the average maximum flow rate of the network is computed as $f(\mathbf{P}) = \frac{1}{n} \, \sum_{s \in V} f(s,\mathbf{P})$. The optimum $K \times 2$ position matrix, $\mathbf{P}^*$, is the one achieving the maximum value of $f(\mathbf{P})$, i.e., \begin{equation} \mathbf{P}^* = \argmax_{\mathbf{P}} \frac{1}{n} \; \sum_{s \in V} \sum_{j:(s,j)\in E(\mathbf{P})} f_{s,j} \,. \label{opt_L} \end{equation} Calculating the maximum flow for a given source, $s$, and relay positions, $\mathbf{P}$, has complexity $\mathcal{O} (|V|\, |E(\mathbf{P})|^2)$ using the Edmonds–Karp algorithm~\cite{Edmonds_MFP}. In the next section, we show how such a high-complexity problem can be mapped to a lower-complexity one. \section{LEM-based Relay Positioning Scheme} \label{solution} In this section, we introduce our proposed brain-inspired problem transformation, to be addressed through Riemannian geometry, and then describe the proposed solution. 
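For reference, the unit-capacity flow in (\ref{opt_MFP}) can be solved by a compact Edmonds–Karp routine (breadth-first augmenting paths); modeling each undirected link as a unit-capacity edge in both directions, and the toy graph below, are assumptions of this sketch.

```python
from collections import deque

def edmonds_karp(n, edges, s, d):
    """Max s-d flow on an n-node graph with unit-capacity undirected edges."""
    cap = [[0] * n for _ in range(n)]
    for i, j in edges:
        cap[i][j] = cap[j][i] = 1      # unit-capacity link in both directions
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[d] == -1:   # BFS for the shortest augmenting path
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[d] == -1:            # no augmenting path left
            return flow
        v = d
        while v != s:                  # push one unit along the path
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            v = u
        flow += 1

# Two vertex-disjoint routes 0-1-3 and 0-2-3 yield a max flow of 2
mf = edmonds_karp(4, [(0, 1), (1, 3), (0, 2), (2, 3)], 0, 3)
print(mf)  # → 2
```

The BFS makes each augmenting path a shortest one, which is what yields the $\mathcal{O}(|V|\,|E|^2)$ bound quoted above.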
\subsection{Relay Positioning through a Brain-inspired Geometric Lens} \begin{figure}[htbp] \centerline{ \includegraphics[width=0.4\textwidth]{Figs/Brain_inspiration.pdf}} \caption{\small Brain-inspired problem transformation.} \label{fig_brain_inspired} \vspace{-0.2in} \end{figure} Fig.~\ref{fig_brain_inspired} shows the transformation from a maximum flow problem to a geometric-based one, which stems from functional connectivity analysis in brain networks. Fig.~\ref{fig_brain_inspired}~(a) depicts a simplified version of the results presented in~\cite{2016_Riem_Brain_Decoding}, which indicate that different brain tasks such as memory and subtraction have \emph{distinguishable} data flows among brain regions. On the one hand, having independent data paths, which can be seen as a multiple-source multiple-sink maximum flow problem~\cite{1989_Miller_MSMS_Graph}, increases the network flow rate. On the other hand, as shown in Fig.~\ref{fig_brain_inspired}~(b), these two paths are represented as two \emph{separable points} over Riemannian manifolds. Such task classification is possible by considering a distance-based Riemannian metric such as the LEM in~(\ref{eqn_LEM}). Therefore, increasing the network flow rate, via enabling multiple independent paths, can be made equivalent to increasing the LEM among the paths' geometric representations over the Riemannian manifold. Consequently, as shown in Fig.~\ref{fig_brain_inspired}~(c), the optimum positions of relays can be defined as the ones achieving maximum LEM compared to the baseline (no-relay) network scenario. In other words, potential relays that result in maximum LEM will lead to independent (i.e., vertex-disjoint or edge-disjoint) paths and hence a higher network rate. Since any potential relay-location matrix $\mathbf{P}$ results in a new edge set $E(\mathbf{P})$, the equivalent regularized Laplacian matrix, $\mathbf{S}_\mathbf{P}$, can be computed as in (\ref{S_matrix}) using the updated edge set $E(\mathbf{P})$. 
We note that $\mathbf{S}_\mathbf{P}$ is an SPD matrix, represented as a point on the Riemannian manifold. Therefore, the impact of adding $K$ relays at locations $\mathbf{P}$ can be measured by computing the LEM as $\mathcal{D}(\mathbf{S}_\mathbf{P},\mathbf{S}_b)$, where $\mathbf{S}_b$ represents the regularized Laplacian matrix of the baseline network. The optimum relay-location matrix, $\mathbf{P}^*$, is the one achieving the maximum LEM value. In other words, the optimization problem for the relay deployment in (\ref{opt_L}) is transformed to a geometric-based equivalent one as \begin{equation} \mathbf{P}^* = \argmax_{\mathbf{P}} \mathcal{D}(\mathbf{S}_\mathbf{P},\mathbf{S}_b) \,. \label{opt_L_LEM} \end{equation} \subsection{LEM-based Relay Positioning and Parallel Routing} \label{relay-to-relay} The optimum relay positions can be found by solving (\ref{opt_L_LEM}), which can be efficiently computed using optimization techniques over manifolds~\cite{2009_Absil_RiemGeometry} as well as geodesically convex optimization~\cite{2016_MIT_Geodesic_Convex} approaches. However, in this paper, as a preliminary proof of concept, we use an iterative exhaustive-search approach. More specifically, the optimum position of the first relay is determined by choosing the location that maximizes the LEM compared to the baseline network with no relays. In other words, an exhaustive search is conducted over all $Z$ potential locations, and the optimum position is the one satisfying (\ref{opt_L_LEM}). Once the first relay is chosen, it is added to the baseline network and the same exhaustive search is repeated to find the best position for the second relay. The algorithm continues by adding one relay at a time, until all $K$ relays have been positioned. Once all relays are positioned, clusters are formed around each relay, which acts as a cluster head. Each network node is then associated with a cluster, based on its shortest distance to the cluster head. 
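The iterative exhaustive search just described can be sketched as follows. To keep all Laplacians the same size, undeployed candidate sites are kept as isolated graph nodes; this dimension-handling choice, the toy topology, and all helper names are assumptions of the sketch rather than details taken from the paper's implementation.

```python
import numpy as np

def spd_log(S):
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def lem(S1, S2):
    return np.linalg.norm(spd_log(S1) - spd_log(S2), 'fro') ** 2

def reg_laplacian(pts, active, R=2.0, gamma=0.5):
    """Disk-model regularized Laplacian; inactive sites remain isolated nodes."""
    n = len(pts)
    L = np.zeros((n, n))
    for i in active:
        for j in active:
            if i < j and np.linalg.norm(pts[i] - pts[j]) < R:
                L[i, i] += 1; L[j, j] += 1
                L[i, j] -= 1; L[j, i] -= 1
    return L + gamma * np.eye(n)

def greedy_lem_placement(nodes, sites, K, R=2.0):
    """Add one relay at a time, maximizing the LEM to the no-relay baseline."""
    pts = np.vstack([nodes, sites])
    base = set(range(len(nodes)))
    S_b = reg_laplacian(pts, base, R)
    deployed, free = [], set(range(len(nodes), len(pts)))
    for _ in range(K):
        scores = {c: lem(reg_laplacian(pts, base | set(deployed) | {c}, R), S_b)
                  for c in free}
        best = max(scores, key=scores.get)
        deployed.append(best)
        free.remove(best)
    return pts[deployed]

# Two clusters that only the mid-point candidate site can bridge
nodes = np.array([[0., 0.], [0., 1.], [4., 0.], [4., 1.]])
sites = np.array([[2., 0.5], [0., 5.]])
best_sites = greedy_lem_placement(nodes, sites, K=1, R=2.1)
print(best_sites)  # selects the bridging mid-point site (2, 0.5)
```

The isolated far-away site leaves the graph (and hence the LEM) unchanged, so the search correctly picks the site that connects the two clusters.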
Multi-hop communication between any two non-adjacent nodes, $\{v_i, v_j\} \in V$, occurs through relays. Let $W$ denote the set of all combinations of relay-to-relay routes. Furthermore, each possible route, $R_a$, where $a \in W$, can be represented as a point on the Riemannian manifold with regularized Laplacian matrix $\mathbf{S}_a$. The LEM between any two routes, $\{R_a, R_b\}$, is computed as $\mathcal{D}(\mathbf{S}_a,\mathbf{S}_b)$. Toward increasing the network flow rate, we aim to establish multiple parallel paths (routes) among different relays, enabling parallel cluster-to-cluster communication. Parallel routes can be practically defined as the ones with the minimum number of overlapping nodes or edges. Consequently, given the brain inspiration discussed in Fig.~\ref{fig_brain_inspired}, we propose to identify parallel routes as the ones having \emph{maximum LEM} over the Riemannian manifold. Data packets can then \emph{simultaneously} traverse two optimal relay-to-relay routes $\{R_a^*, R_b^*\}$ given that \begin{equation} \{R_a^*, R_b^*\} = \argmax_{ \{a,b\} \in W} \mathcal{D}(\mathbf{S}_a,\mathbf{S}_b) \;. \label{eqn_LEM_routing} \end{equation} As a preliminary proof of concept, an exhaustive search can be conducted to calculate the LEM among all relay-to-relay routes, and the two routes satisfying (\ref{eqn_LEM_routing}) are chosen. In the future, we will consider alternative approaches to solving (\ref{eqn_LEM_routing}). \section{Geometric Machine Learning for Beamforming Codebook Design} \label{sol_beamforming} While Section~\ref{solution} focused on optimal relay positioning and inter-cluster (relay-to-relay) multi-hop communication, this section completes the remaining link by focusing on intra-cluster (relay-to-user) communication. \subsection{Spatially-correlated Channel Modeling} We consider $M$ antennas at each relay and a single antenna at each user (network node). 
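Returning to the route selection in (\ref{eqn_LEM_routing}), the exhaustive pairwise search can be sketched as below: each candidate route is mapped to its regularized Laplacian over the full node set, and the pair with maximum LEM is returned. The routes and helper names are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def spd_log(S):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def route_spd(route, n, gamma=0.5):
    """Regularized Laplacian of a single multi-hop route over n network nodes."""
    S = gamma * np.eye(n)
    for i, j in zip(route, route[1:]):
        S[i, i] += 1; S[j, j] += 1
        S[i, j] -= 1; S[j, i] -= 1
    return S

def most_parallel_pair(routes, n):
    """Return the index pair of routes with maximum pairwise LEM."""
    logs = [spd_log(route_spd(r, n)) for r in routes]
    return max(combinations(range(len(routes)), 2),
               key=lambda p: np.linalg.norm(logs[p[0]] - logs[p[1]], 'fro') ** 2)

# Route 1 shares node 2 with route 0; route 2 is disjoint from both
routes = [[0, 1, 2], [2, 3], [4, 5, 6]]
pair = most_parallel_pair(routes, n=7)
print(pair)  # → (0, 2), the longer node-disjoint pair
```

On this toy example the maximum-LEM pair is the node-disjoint one, consistent with the intent of (\ref{eqn_LEM_routing}) to favor non-overlapping routes.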
It is often assumed that multiple-antenna channels are independent and hence that their covariance matrices are scaled versions of the identity matrix. However, such an assumption is not practical, as multiple-antenna channels are generally spatially-correlated~\cite{2016_Debbah_MassiveMIMO_Covariance}. A multiple-input single-output (MISO) channel between a given relay and its user $u$, denoted as $\mathbf{h}_{u}$, can be modeled as a correlated Rayleigh fading channel vector with covariance matrix $\mathbf{Q}_{u} \in \mathbb{C}^{M \times M}$, i.e., $\mathbf{h}_{u} \sim \mathcal{CN}(\mathbf{0}, \mathbf{Q}_{u})$. Covariance matrices can be generated according to the exponential correlation model of~\cite{Clerckx_2008}, which depends on the inter-antenna spacing as well as on a \emph{phase} component that is uniformly distributed over $[0, 2\, \pi]$ to reflect the user's location. \begin{table}[htbp] \vspace{0.1in} \caption{\small Network simulation parameters.} \vspace{-0.1in} \begin{center} \begin{tabular}{|l|l|} \hline \textbf{Parameter}&\textbf{Value} \\ \hline Deployment area & $6 \times 6$ \\ \hline Disk model radius ($R$) & $2$ \\ \hline Number of network nodes ($n$) & $20$ \\ \hline Number of potential relay positions ($Z$) & $16$ \\ \hline \end{tabular} \label{table_sim_parameters} \end{center} \vspace{-0.2in} \end{table} \subsection{Geometric Machine Learning} Generally, if the covariance matrices of the spatially-correlated channels are known a priori, the beamforming codebook can be designed accordingly. For example, one codeword can be matched to the angular phase of a given covariance matrix. However, the covariance matrices of relay-user correlated channels are not known a priori, as they depend on the user locations with respect to the optimally-positioned relay~\cite{Clerckx_2008}. Consequently, the beamforming codebook for each relay cannot be designed beforehand, and it needs to be learned from the user channels within each relay's cluster. 
Therefore, our goal is to learn the correlation characteristics of each user's channel. For simplicity of explanation, we assume two users, $u=\{1,2\}$, each following a different exponential correlation model, as it depends on the user's unique location. As our goal is to learn the covariance matrix of each user's channel, we turn our attention to learning the $(\mathbf{h}_{u} \, \mathbf{h}_{u}^H)$ matrix for each user, where $H$ denotes the matrix Hermitian, as opposed to learning the channel vector $\mathbf{h}_{u}$ itself. As the $(\mathbf{h}_{u} \, \mathbf{h}_{u}^H)$ matrix is an $M \times M$ SPD one, it can be represented as a point over the Riemannian manifold. Consequently, learning the two covariance matrices can be conducted over Riemannian manifolds, as opposed to conventional Euclidean spaces. In classifying between the two users, the LEM Riemannian metric will be utilized. In this paper, we propose a \emph{geometric machine learning} approach to learn the covariance matrix of each user by applying the standard SVM model over the Riemannian manifold. The proposed geometric SVM classifies the $M \times M$ SPD channel matrices $\mathbf{h}_{u} \, \mathbf{h}_{u}^H$, for $u=\{1,2\}$, into two groups using the LEM Riemannian distance. Once each of these two groups is constructed, two codewords matching the angles of the two learned covariance matrices are identified. In the testing phase, any new estimated relay-user channel is classified into one of the two groups and then assigned the corresponding group-specific beamforming codeword. For example, let $\mathbf{h}_{t}$ be a newly estimated channel that is classified to the $u=1$ group with codeword $\mathbf{c}_1$. The achievable link rate for this channel is $R_t= \log_2(1+ \text{SNR} \, |\mathbf{h}_{t}^H \, \mathbf{c}_1|^2)$, where SNR is the ratio of the signal power to the noise variance. 
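A minimal sketch of this pipeline follows, with several stated assumptions: scikit-learn provides the SVM, a $\gamma\mathbf{I}$ term regularizes the rank-one outer product $\mathbf{h}\mathbf{h}^H$ so its matrix logarithm exists, and the flattened real/imaginary parts of the log-mapped matrix serve as (Log-Euclidean) features. These choices approximate, but are not taken verbatim from, the geomstats-based implementation used in the paper.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
M, gamma, snr = 4, 0.5, 10.0

def exp_corr(t, M):
    """Exponential correlation matrix: entry (a, b) = t^(b-a) for b >= a."""
    d = np.arange(M)[None, :] - np.arange(M)[:, None]
    return np.where(d >= 0, t ** d, np.conj(t) ** (-d))

def herm_log(S):
    """Matrix logarithm of a Hermitian positive-definite matrix."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.conj().T

def features(h):
    """Log-map of the regularized outer product h h^H (rank one, hence gamma*I)."""
    G = herm_log(np.outer(h, h.conj()) + gamma * np.eye(M))
    return np.concatenate([G.real.ravel(), G.imag.ravel()])

def draw(Q, n):
    """n samples of h ~ CN(0, Q) via the Hermitian square root of Q."""
    w, V = np.linalg.eigh(Q)
    root = (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T
    z = (rng.standard_normal((n, M)) + 1j * rng.standard_normal((n, M))) / np.sqrt(2)
    return z @ root.T

Q1, Q2 = exp_corr(0.5 * np.exp(1j * np.pi), M), exp_corr(0.5, M)  # phases pi and 0
H_tr = np.vstack([draw(Q1, 50), draw(Q2, 50)])                    # S = 100 training samples
y_tr = np.array([0] * 50 + [1] * 50)
clf = SVC(kernel='linear').fit([features(h) for h in H_tr], y_tr)

H_te = np.vstack([draw(Q1, 20), draw(Q2, 20)])                    # held-out test samples
acc = clf.score([features(h) for h in H_te], [0] * 20 + [1] * 20)
print(acc)   # held-out single-snapshot classification accuracy

# Rate with a codeword matched to group 1's dominant covariance direction
c1 = np.linalg.eigh(Q1)[1][:, -1]
rate = np.log2(1 + snr * abs(np.vdot(c1, draw(Q1, 1)[0])) ** 2)
print(rate > 0)
```

Because the feature map is a linear function of the log-mapped matrices, a linear SVM in this space is a Log-Euclidean classifier, which is the spirit of the proposed geometric SVM.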
Unlike recent works on using deep learning for channel estimation (e.g., \cite{gao2019deep}), in which channel learning happens over Euclidean spaces, our proposed \emph{geometric} machine learning approach is tailored to the practical spatially-correlated multiple-antenna channels by learning such SPD matrices over the Riemannian manifold. Finally, we point out that we have utilized basic machine learning schemes, such as SVM, as a proof of concept in this paper; advanced geometric deep learning algorithms will be utilized in the future. \begin{figure}[htbp] \vspace{-0.0in} \centerline{ \includegraphics[width=0.42\textwidth]{Figs/Motivation1.pdf}} \vspace{-0.05in} \caption{\small Average network flow rate achieved by different relay-positioning optimization metrics.} \label{fig_MF_conn} \vspace{-0.1in} \end{figure} \section{Simulation Results} \label{sim_results} In this section, we present simulation results for the proposed relay-positioning and geometric channel learning schemes. \subsection{LEM-based Relay Positioning and Maximum Flow} Let the \emph{$\lambda_2$-based} scheme be the one that finds relay positions by maximizing the algebraic connectivity of the graph (e.g., as in~\cite{rahmati2019dynamic}), while the \emph{MF-based} scheme positions relays to achieve the average maximum network flow rate~\cite{Edmonds_MFP}. The relay positions in both cases, along with the proposed LEM-based scheme, were found through an exhaustive search over all possible relay locations, selecting the location vector that maximizes the metric of interest in each case. The main network simulation parameters are listed in Table~\ref{table_sim_parameters}. Fig.~\ref{fig_MF_conn} shows the achievable average network flow rate of all relay-positioning schemes for $K = 1$ to $5$ relays. As shown, the {$\lambda_2$-based} scheme suffers a loss of $9\%$ at $K=4$ relays compared to the MF-based one, which motivates our search for an alternative optimization metric. 
Indeed, we find that the proposed LEM-based relay-positioning scheme achieves a smaller gap of less than $1\%$ at $K=4$ relays compared to the \emph{high-complexity} MF-based one. Equally important, our proposed LEM-based scheme improves the network flow rate over the $\lambda_2$-based one by $9\%$ at $K=4$ relays, while requiring the \emph{same low-complexity} computation. Such gain is simply due to utilizing the regularized Laplacian matrix for calculating the LEM distances over the Riemannian manifold, as opposed to computing its second smallest eigenvalue $\lambda_2$. \begin{figure}[htbp] \centerline{ \includegraphics[width=0.43\textwidth]{Figs/MaxFlow_vsFiedler_K5_iter400_1.pdf}} \caption{\small Average network flow rate versus algebraic connectivity. Markers denote performance at $K=0$ to $5$ relays for different relay-positioning optimization metrics.} \label{fig_MF_vs_Lambda} \end{figure} While a higher network flow rate is of great importance, the \emph{robustness} of the network, measured in terms of its connectivity degree, is of equal importance. Fig.~\ref{fig_MF_vs_Lambda} depicts the achievable network flow and algebraic connectivity for the three relay-positioning metrics. For a given number of relays ($1 \leq K \leq 5$) and as expected, the maximum flow is achieved by the MF-based positioning algorithm, while the maximum algebraic connectivity is achieved by the $\lambda_2$-based one. Interestingly, Fig.~\ref{fig_MF_vs_Lambda} shows that the performance of the proposed LEM-based scheme lies in between these two benchmark schemes. In other words, the proposed LEM-based scheme achieves a unique and balanced \emph{tradeoff} between the network flow rate and algebraic connectivity, which is not achievable by either benchmark scheme. Such unique performance is due to the novel consideration of brain-inspired Riemannian geometry in addressing the relay positioning problem. 
We point out that the line segment between any two markers in~Fig.~\ref{fig_MF_vs_Lambda} is achieved by a standard time-sharing strategy across two different numbers of deployed relays. \begin{table}[!t] \captionof{table}{\small Average number of overlapping nodes and edges among parallel routes, for different number of $K$ relays.}\label{table_cong} \renewcommand{\arraystretch}{2} \setlength{\tabcolsep}{7.5pt} \centering \begin{tabular}{|c|c|c|c|c|} \cline{2-5} \multicolumn{1}{c|}{} &\multicolumn{2}{|c|}{Overlapping Nodes} & \multicolumn{2}{|c|}{Overlapping Edges}\\ \cline{2-5} \hline \multicolumn{1}{|c|}{$K$} & LEM & MF & LEM & MF \\ \cline{2-5}\hline 3 & 1.01 & 1.31 & 0.01 & 0.33 \\ \hline 4 & 0.35 & 0.31 & 0.04 & 0.06 \\ \hline 5 & 0.25 & 0.31 & 0 & 0.03 \\ \hline \end{tabular} \vspace{-0.15in} \end{table} Upon optimally positioning the relays, we consider multi-hop relay-to-relay communication. As our goal is to have independent routes that can be used simultaneously, Table~\ref{table_cong} presents the average number of overlapping nodes and edges among inter-cluster (inter-relay) routes for $K=3$ to $5$ relays. We note that the minimum value of $K=3$ is chosen to allow two parallel routes among all relays. Otherwise, there would be only one route among the relay(s). Furthermore, the routing path between each pair of relays was determined using the standard Dijkstra shortest-path algorithm~\cite{dijkstra1959note}. As proposed in (\ref{eqn_LEM_routing}), the two chosen parallel routes are the ones with maximum LEM among all potential relay-to-relay routing paths. Table~\ref{table_cong} shows that the LEM-based routing scheme achieves congestion results comparable to those achieved by the MF-based one. In other words, the proposed LEM-based relay-positioning and routing schemes enable parallel routing with minimal congestion levels at nodes and edges, similar to the high-complexity MF-based one. 
\begin{figure}[htbp] \centerline{ \includegraphics[width=0.43\textwidth]{Figs/Distributed_LEM_1.pdf}} \caption{\small Maximum flow versus algebraic connectivity for distributed LEM implementation considering $K=0$ to $5$ relays.} \label{fig_distributed_LEM} \vspace{-0.1in} \end{figure} The previous results assumed a centralized control unit that is aware of all network node locations and deploys one relay at a time through exhaustive search. An alternative localized approach is the \emph{distributed} one, which partitions the area of interest into a number of non-overlapping regions. Each region has its own local control unit that is aware of the node locations within its smaller region. Furthermore, each local unit positions one relay within its region. For example, with $K=4$ relays, the area of interest is divided into $4$ equal quarters, and one relay is deployed in each quarter following the LEM-based scheme presented in Section~\ref{solution}. Fig.~\ref{fig_distributed_LEM} depicts the distributed implementation of LEM-based relay positioning and its performance with respect to the centralized one, presented earlier in Fig.~\ref{fig_MF_vs_Lambda}. As shown, the performance loss due to the distributed implementation at $K=4$ is $4.6\%$ in network flow rate and $8.7\%$ in algebraic connectivity. \subsection{Beamforming Codebook Design for Correlated Fading} We assume two users with spatially-correlated channels, each following the exponential correlation model of~\cite{Clerckx_2008} with unique phase values of $\pi$ and $0$, respectively. 
Assuming $M \in \{2,4\}$ antennas, the correlation covariance matrix of user $u \in \{1,2\}$, denoted as $Q_u^M$, can be written as \begin{equation} Q_u^2 = \begin{bmatrix} 1 & t_u \\ t_u^* & 1 \\ \end{bmatrix} \; \; \;, Q_u^4 = \begin{bmatrix} 1 & t_u & t_u^2 & t_u^3 \\ t_u^* & 1 & t_u & t_u^2 \\ {t_u^*}^2 & t_u^* & 1 & t_u \\ {t_u^*}^3 & {t_u^*}^2 & t_u^* & 1 \\ \end{bmatrix} \;, \label{corr_matrix} \end{equation} where $t_u$ is the transmit correlation coefficient for user $u$. We assume the two users have the same correlation magnitude~\cite{Clerckx_2008}, for example, $|t_1|=|t_2|=0.5$. In contrast, the phases of the transmit correlation coefficients differ: $\angle t_1 = \pi$ and $\angle t_2 = 0$. A uniform planar array (UPA) of $M \times 1$ antennas is deployed at the relay. The geometric LEM-based SVM was applied over the Riemannian manifold using the \emph{``geomstats''} Python package along with its \emph{brain connectome} classification package~\cite{2018_geomstats}. First, we generate a total of $S$ \emph{training} channel samples, drawn equally from $Q_1^M$ and $Q_2^M$. Second, the geometric SVM learns the channel covariance matrices of each user, utilizing the LEM. Geometric SVM learning results in two distinguishable groups of channels. Third, we construct an $M \times 2$ codebook, with one codeword matched to the covariance matrix of each identified group. This is done by choosing the directional cosine angles of the UPA that result in maximum capacity for each group of channels. For example, given the channel covariance matrices defined in (\ref{corr_matrix}), the chosen codebooks are $1/\sqrt{2}\begin{bmatrix} 1 & 1 \\ -1 & 1\\ \end{bmatrix}$ for $M=2$ and $1/2\begin{bmatrix} 1 & -1 & 1 & -1 \\ 1 & 1 & 1 & 1 \\ \end{bmatrix}^T$ for $M=4$ antennas. 
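The structure of (\ref{corr_matrix}) can be generated programmatically and sanity-checked as below; the helper name and the closed-form construction (entry $(a,b)$ equal to $t_u^{\,b-a}$ for $b \geq a$, conjugated below the diagonal) are a restatement of the displayed matrices, not a separate model.

```python
import numpy as np

def exp_corr(t, M):
    """Correlation matrix with entry (a, b) = t^(b-a) for b >= a, conjugated below."""
    d = np.arange(M)[None, :] - np.arange(M)[:, None]
    return np.where(d >= 0, t ** d, np.conj(t) ** (-d))

t1 = 0.5 * np.exp(1j * np.pi)   # user 1: |t| = 0.5, phase pi
t2 = 0.5                        # user 2: |t| = 0.5, phase 0
Q1, Q2 = exp_corr(t1, 4), exp_corr(t2, 4)
print(np.allclose(Q1, Q1.conj().T))               # Hermitian, as a covariance must be
print(np.all(np.linalg.eigvalsh(Q1) > 0))         # positive definite for |t| < 1
print(np.allclose(Q2[0], [1, 0.5, 0.25, 0.125]))  # first row of Q_2^4
```

All three checks print `True`, confirming the generated matrices match $Q_u^4$ as defined above.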
\vspace{0.1in} Fig.~\ref{fig_ML_4ant} depicts the achievable rate of the proposed geometric SVM algorithm for $M=2$ and $4$ antennas, as a function of the training size, $S$. For each $S$ training samples, we generate a \emph{unique} set of $0.4 \, S$ channel samples for testing. We emphasize that the shown rate values are calculated solely on the testing samples, which have not been used in the training phase. As the training data size increases, the achievable rate approaches the genie-aided maximum capacity, which is calculated by assigning the best codeword to each channel. Fig.~\ref{fig_ML_4ant} shows that more than $90\%$ of the maximum capacity can be achieved with a training size of $S=100$. Finally, Fig.~\ref{fig_ML_4ant} shows that $M=4$ antennas achieve higher capacity than $M=2$ antennas, as more antennas result in additional power gain. \section{Conclusion} \label{conc} In this paper, we have introduced a new perspective on designing wireless networks through a geometric lens. First, we have utilized the Log-Euclidean metric (LEM) for relay positioning over Riemannian manifolds. The proposed LEM-based scheme approaches the maximum flow rate, and it also achieves a unique tradeoff between the maximum flow rate and robustness (algebraic connectivity). Second, we have shown that LEM-based inter-relay parallel routes occur with minimal overlapping of nodes or edges. Third, we have shown that a distributed implementation of LEM-based relay positioning loses only $4.6\%$ of the network rate compared to the centralized one. Finally, we have proposed a geometric support vector machine learning model to classify users' spatially-correlated fading channels and choose a beamforming codeword accordingly. We have shown that more than $90\%$ of the optimal capacity can be achieved with a training size of $100$ channels. 
\begin{figure}[htbp] \centerline{ \includegraphics[width=0.42\textwidth]{Figs/ML_Rate_2_4ant_1.pdf}} \caption{\small Achievable data rate by the geometric machine learning (GML) scheme versus the optimal one for $2$ and $4$ antennas.} \label{fig_ML_4ant} \vspace{-0.2in} \end{figure} \vspace{0.0in} \footnotesize \setlength{\baselineskip}{15pt} \bibliographystyle{IEEEbib}
\section{Abstract} Audio commands are a preferred communication medium to keep inspectors in the loop of civil infrastructure inspection performed by a semi-autonomous drone. To understand job-specific commands from a group of heterogeneous and dynamic inspectors, a model needs to be developed cost-effectively for the group and easily adapted when the group changes. This paper builds a multi-tasking deep learning model that possesses a Share-Split-Collaborate architecture. This architecture allows the two classification tasks to share the feature extractor and then split the subject-specific and keyword-specific features intertwined in the extracted features through feature projection and collaborative training. A base model for a group of five authorized subjects is trained and tested on the inspection keyword dataset collected by this study. The model achieved a 95.3\% or higher mean accuracy in classifying the keywords of any authorized inspector. Its mean accuracy in speaker classification is 99.2\%. Due to the richer keyword representations that the model learns from the pooled training data, adapting the base model to a new inspector requires only a small amount of training data from that inspector, such as five utterances per keyword. Using the speaker classification scores for inspector verification can achieve a success rate of at least 93.9\% in verifying authorized inspectors and 76.1\% in detecting unauthorized ones. Further, the paper demonstrates the applicability of the proposed model to larger groups on a public dataset. This paper provides a solution to the challenges facing AI-assisted human-robot interaction, including worker heterogeneity, worker dynamics, and job heterogeneity. 
\hfill\break% \noindent\textit{Keywords}: human-in-the-loop, human robot interaction, infrastructure inspection, keyword classification, speaker recognition \section{Introduction} Safe, reliable civil infrastructure is a foundation for the nation’s socio-economic vitality. For example, the National Bridge Inventory has 619,588 bridges \cite{2022BridgeReport}, spatially distributed over the 4,192,479 miles of public roads \cite{HighwayStat2020}. The average daily traffic passing over the bridges is 4.627 billion \cite{StatusReport24}. However, 42\% of the bridges are over 50 years old, and over 55.1\% are rated as fair or poor \cite{2021ReportCard}, meaning they have shown deterioration. For traffic safety, most bridges are inspected every two years to monitor their health condition closely. In response to the vast demand for bridge inspection, and because of the complexity of this mission, aerial robots such as drones are being introduced to improve the time efficiency, worker safety, and cost-effectiveness of inspection. An inspector and a drone form a human-robot system for a bridge inspection. Their collaboration method directly impacts the system's job efficiency and task performance. For example, the requirement on the inspector's psychomotor, cognitive, and sensory abilities is high if the inspector has to operate the drone entirely in the remote-control or teleoperation mode \cite{li2022virtual}. The drone is preferred to be at least semi-autonomous, with the inspector's assistance or guidance in the loop. Specifically, the drone can automatically perform inspection tasks under predefined conditions, and the inspector will guide the drone or take control of it only when a need is identified. For example, the drone detects an area of concern nearby but off the pre-planned inspection path for a task. The drone hovers there and sends a message to the inspector. 
The inspector will judge and then tell the robot to either continue its current task or guide it to add an incremental task. Human-robot interaction is essential when the robot is semi-autonomous with humans in the loop. Some types of guidance that inspectors give to a semi-autonomous drone, such as triggering, terminating, and slightly modifying a task that the drone performs automatically, can be provided conveniently using a set of commands. There are different media for communicating with the drone, such as speech commands, non-speech commands, remote controllers, and hand gestures. Speech commands have advantages over the others because humans use them naturally in daily communication. Therefore, the mapping between speech commands and the drone's actions is intuitive to inspectors. A model is required to analyze the inspector's acoustic signals and classify the command keywords so that the drone can understand the inspector's guidance. Compared to the literature on speech command recognition or keyword spotting, the application to the collaborative human-robot inspection of bridges has unique characteristics and specifications. First, although only a small set of keywords is required, they are job-specific and not necessarily covered by existing large speech command datasets. Therefore, training and refining the model must be efficient, for instance, using a small sample of data collected for any inspection job. Second, a stakeholder such as a State department of transportation usually has a group of inspectors who differ in their backgrounds and speaking habits. The model must reliably recognize the speech commands of a group of heterogeneous inspectors, ranging from a few to tens. Third, for cybersecurity, the drone should follow the instructions of authorized inspectors only, which requires that the model recognize and verify the inspector. Last, the model must adapt to workforce dynamics due to promotion, retirement, recruitment, and turnover. 
Those requirements are commonly present in various industrial applications of human-robot interaction. While the existing literature on speech command and speaker recognition addresses one need or another from certain perspectives, no model has met all the specifications of this particular application setting. Motivated by these specifications on speech command and speaker recognition, this paper proposes a multi-tasking deep learning model to keep inspectors in the loop of semi-autonomous drone-assisted bridge inspection. The contributions of this paper are reflected in the model development method and the resulting model capabilities: \begin{itemize} \item A Share-Split-Collaborate learning architecture that can learn rich keyword representations and extract features for differentiating speakers from their pooled speech command data. \item A unified multi-tasking model that can classify spoken keywords and determine who the speaker is if verified as an authorized inspector. \item The model can be developed for any group of inspectors in any specific inspection job in a cost-effective manner. It can also be refined conveniently to adapt to changes in the inspector group. \end{itemize} The rest of the paper is organized as follows. The next section summarizes the related work. The architecture of the proposed multi-tasking model is introduced in Section \ref{sec:Model}, and details of the implementation are discussed in Section \ref{sec:ImplementationDetails}. Section \ref{sec:Results and Analysis} further discusses experiments that demonstrate the model performance and the requirements for achieving it. Finally, Section \ref{sec:Conclusions} concludes the study by summarizing the insights gained and important future work.
\section{The Literature} \label{Sec:Literature} The literature related to this paper includes keyword spotting or speech command recognition, acoustic signal-based speaker recognition, and multi-tasking models that integrate the two tasks into a unified model. \subsection{Keyword Spotting and Speech Command Recognition} Speech command is one of the media for delivering human instructions to robots \cite{goodrich2008human}. Compared to other media such as hardware controllers, gestures, and natural language, speech commands are easier to implement, and a reliable recognition system for simple commands can be developed quickly. Therefore, they are favored by various applications such as smart homes \cite{arriany2016applying} and air traffic control \cite{holone2015possibilities}. Speech command recognition for keeping inspectors in the loop of civil infrastructure inspection performed by a semi-autonomous drone has not yet been widely developed. Along with the growing need for human-machine interaction, the development of lightweight models for recognizing simple commands is attracting growing interest. Keyword spotting is a small-scale speech recognition task that identifies keywords in audio streams. In recent years, deep neural networks have outperformed standard Hidden Markov Models (HMMs) and become a new stream of speech command recognition methods \cite{chen2014small}. For example, convolutional neural networks designed for keyword spotting showed an accuracy of more than 95\% on the Google speech command dataset \cite{tang2018deep,peter2022end}. Two different inputs are mainly used in speech recognition studies: the spectrogram (frequency domain) and the waveform (time domain). Mel-spectrograms are widely used as a standard pre-processing method for various audio-related deep learning models (e.g., \cite{sainath2015convolutional, gong2021ast}). Speakers have unique voices and speaking habits.
Therefore, speech recognition models are classified into speaker-dependent and speaker-independent models \cite{gaikwad2010review}. A speaker-dependent model is created for one particular speaker, whereas a speaker-independent model serves various speakers. Although a speaker-dependent model is easier to develop, it is a nontransferable point solution. Maintaining many point solutions is difficult in real-world applications that have multiple model users or whose users change quickly. Therefore, speaker-independent models are the mainstream. \subsection{Speaker Recognition} Speaker recognition predicts a speaker's identity from the speaker's acoustic signals. Past research observed that performance decreased as the number of speakers increased. For example, the accuracy decreased from 96\% with five speakers to 65.3\% with 30 speakers \cite{chauhan2017speaker}. Recently, deep neural networks trained on large-scale datasets have achieved high accuracy. For example, each of 630 speakers provided six phonetically rich sentences, and the data were used to train a model that achieved 97.0\% accuracy \cite{lukic2016speaker}. \cite{ye2021deep} trained a model on 127,551 utterances collected from 400 speakers, achieving an accuracy of 98.96\%. \subsection{Multi-tasking Models Attained by Joint Training} Recently, keyword spotting and speaker recognition have been considered two related tasks, not only because acoustic signals contain both phonetic and speaker information, but also because of the needs for and benefits of integrating them into a unified model. The association of the two tasks arises from various real-world applications. The study by \cite{sigtia2020multi} was motivated by the need to detect voice trigger phrases and verify whether the speaker is a registered user. \cite{el2019joint} aimed to recognize who says what and when in a conversation.
Personalized devices, such as hearing assistive devices, require the ability to detect external speakers and prevent them from triggering the device. \cite{lopez2020improved} developed a multi-tasking keyword spotting model with the ability to detect non-users. Joint training of speech and speaker recognition as a unified multi-tasking model usually has one or both of the following benefits over training two independent models. First, the two tasks can share the data processing or feature learning pipeline to some extent \cite{sigtia2020multi,lopez2020improved,tang2016collaborative,jung2020multi, hussain2022multi}. Second, each can benefit from the improved performance of the other \cite{tang2016collaborative, jung2020multi}. Voice trigger detection and speaker verification in \cite{sigtia2020multi} are performed by two stacks of four LSTM layers, which share the first two layers without sacrificing accuracy compared to two independent models. The two downstream tasks in \cite{hussain2022multi}, keyword spotting and speaker verification, share the same wav2vec v2 backbone. The keyword spotting and own-voice/external-speaker detection tasks in \cite{lopez2020improved} share the same residual deep learning network for feature extraction. \cite{tang2016collaborative} developed a multi-tasking model for speech and speaker recognition through collaborative training. The two tasks share only a common front end and have their respective recurrent neural networks, which are connected at the task level to inform each other of the desired and undesired information. The keyword spotting and speaker verification tasks in \cite{jung2020multi} share an enhancement network for noise removal. The two tasks have their respective feature extractors, but the acoustic feature extractor provides a phonetic conditional vector to augment the speaker feature extractor's ability.
A pooling network further integrates outputs from the two feature extractors to generate the keyword and speaker embeddings. Deep neural networks suffer from the catastrophic forgetting problem in class-incremental learning. This problem also challenges speech recognition, for example, with incremental classes of new accents, new words, or new acoustic environments \cite{fu2021incremental}. Few-shot learning-based speaker identification networks have been proposed to handle new speakers \cite{li2020automatic,anand2019few}. The effectiveness of few-shot learning for handling new speakers in a multi-tasking speaker-keyword classification model has not been verified yet. \section{The Model} \label{sec:Model} A group of $M$ inspectors, indexed by their identification (ID) number $i$, will use one unified model to communicate with their respective drones using the same set of $N$ command keywords, indexed by their class ID $j$. As Figure \ref{Fig:S3 Architecture} shows, an input to the model is an utterance that lasts for a fixed time period, $\pmb{x}\in\mathbb{R}^d$, which may come from one of the $M$ inspectors, indicated by a one-hot coded vector, $\pmb{y}_s\in\mathbb{R}^M$. $\pmb{x}$ may pertain to one of the $N$ keywords, represented by a one-hot coded vector, $\pmb{y}_w\in\mathbb{R}^N$. Given an input $\pmb{x}$, the model predicts the speaker ID, $\hat{\pmb{y}}_s$, and the keyword class, $\hat{\pmb{y}}_w$, in parallel. \begin{figure*}[htb] \centering \includegraphics[width=\columnwidth]{Figure1} \caption{The proposed Share-Split-Collaborate (S$^2$C) multi-tasking framework for speaker-keyword classification} \label{Fig:S3 Architecture} \end{figure*} \subsection{Architecture of the Multi-tasking Model} \label{subsec:The Multi-tasking Model Architecture} A Share-Split-Collaborate (S$^2$C) deep learning architecture, shown in Figure \ref{Fig:S3 Architecture}, is proposed to build the desired model functions.
First, each input utterance $\pmb{x}$ is transformed into a Mel-spectrogram of size $224\times 224 \times 3$. The feature extractor ResNet50 \cite{he2016deep}, pre-trained on ImageNet and transferred to the speaker-keyword classification tasks, extracts a feature map, $\pmb{F}\in \mathbb{R}^{7\times 7\times 2048}$, from the Mel-spectrogram. Two projection networks split the subject-specific feature vector, $\pmb{f}_s$, and the keyword-specific feature vector, $\pmb{f}_w$, respectively, from $\pmb{F}$: \begin{equation} \begin{array}{ll} \pmb{f}_s=L(P(\pmb{F}; \pmb{W}_{s,0}, \pmb{b}_{s,0})),\\ \pmb{f}_w=L(P(\pmb{F}; \pmb{W}_{w,0}, \pmb{b}_{w,0})),\\ \end{array} \end{equation} where $P(\pmb{F};\pmb{W},\pmb{b})$ represents a network that projects an input feature map $\pmb{F}$ onto a new space of the same dimensions using the projection matrix $\pmb{W}$ and the bias vector $\pmb{b}$. Here, $\pmb{W}_{s,0}$ and $\pmb{W}_{w,0}$ ($\in\mathbb{R}^{2048 \times 2048}$) are the projection weights, and $\pmb{b}_{s,0}$ and $\pmb{b}_{w,0}$ ($\in\mathbb{R}^{2048}$) are the projection biases of the two projection networks. $L$ reshapes the output feature map into a vector; therefore, the feature vectors $\pmb{f}_s$ and $\pmb{f}_w$ $\in \mathbb{R}^{100352}$. The speaker and keyword classification tasks are respectively performed by two separate networks, each consisting of two fully connected layers, $\phi$, followed by a softmax function, $\gamma$. After passing through the speaker classification network, the subject-specific feature vector, $\pmb{f}_s$, becomes a probability mass function, $\hat{\pmb{y}}_s$ ($\in\mathbb{R}^M$), which captures the likelihood that the input utterance belongs to each of the $M$ inspectors. Similarly, the keyword-specific feature vector, $\pmb{f}_w$, is turned into the probability mass function, $\hat{\pmb{y}}_w$ ($\in\mathbb{R}^N$).
Mathematically, the two down-stream classification tasks are: \begin{equation} \hat{\pmb{y}}_n =\gamma(\phi( \phi(\pmb{f}_n;\pmb{W}_{n,1},\pmb{b}_{n,1}, h_{n,1}); \pmb{W}_{n,2},\pmb{b}_{n,2},h_{n,2})),\\ \end{equation} where $n\in\{s,w\}$, and $\phi(\pmb{f}; \pmb{W}, \pmb{b}, h)$ represents a fully connected layer with the input vector $\pmb{f}$, the weight matrix $\pmb{W}$, the bias vector $\pmb{b}$, and the activation function $h$. Here, $\pmb{W}_{s,1}\in \mathbb{R}^{100352\times128}$, $\pmb{W}_{s,2}\in \mathbb{R}^{128\times256}$, $\pmb{W}_{w,1}\in \mathbb{R}^{100352\times512}$, $\pmb{W}_{w,2}\in \mathbb{R}^{512\times512}$, $\pmb{b}_{s,1}\in\mathbb{R}^{128}$, $\pmb{b}_{s,2}\in\mathbb{R}^{256}$, $\pmb{b}_{w,1}$ and $\pmb{b}_{w,2}\in\mathbb{R}^{512}$, $h_{s,1}$ is a ReLU function, and $h_{s,2}$, $h_{w,1}$, and $h_{w,2}$ are sigmoid functions. The architectures of the two rear-end classification networks were determined by numerical experiments seeking a stably high performance on the validation dataset. The subject-specific feature vector, $\pmb{f}_s$, should be keyword-agnostic. For that purpose, it is also entered into the keyword classification network to predict the keyword class, $\hat{\pmb{y}}_{sw}\in\mathbb{R}^{N}$. Similarly, the keyword-specific feature vector, $\pmb{f}_w$, is expected to be subject-agnostic. After entering it into the speaker classification network, the speaker ID prediction, $\hat{\pmb{y}}_{ws}\in\mathbb{R}^{M}$, is obtained. These two regularizations, which support disentangling the two types of features intertwined in the feature map, are expressed as: \begin{equation} \hat{\pmb{y}}_{ln} =\gamma(\phi( \phi(\pmb{f}_l;\pmb{W}_{n,1},\pmb{b}_{n,1}, h_{n,1}); \pmb{W}_{n,2},\pmb{b}_{n,2},h_{n,2})),\\ \end{equation} where $l$ and $n\in\{s,w\}$, and $l\neq n$. The data flows for predicting $\hat{\pmb{y}}_{sw}$ and $\hat{\pmb{y}}_{ws}$ in Figure \ref{Fig:S3 Architecture} are drawn as dashed arrows, meaning that they are calculated only for model training.
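As an illustration, the S$^2$C forward pass described above (projection split, reshape, two-layer heads, and the cross predictions) can be sketched in NumPy. All dimensions below are shrunk for brevity, the weights are random, and the heads map directly to the $M$ and $N$ classes; the sketch shows the data flow, not the trained configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

relu = lambda z: np.maximum(z, 0.0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def project(F, W, b):
    # P(F; W, b): linear projection of the feature map along its channel axis
    return F @ W + b

def fc(f, W, b, h):
    # phi(f; W, b, h): one fully connected layer
    return h(f @ W + b)

# Reduced, illustrative dimensions (the paper uses a 7x7x2048 feature map,
# so f_s and f_w are 100352-dimensional).
H_ = W_ = 2; C = 8; D = H_ * W_ * C
M, N = 5, 10                                   # speakers, keywords

F = rng.normal(size=(H_, W_, C))               # backbone feature map (ResNet50 in the paper)
Ws0, bs0 = rng.normal(size=(C, C)), np.zeros(C)
Ww0, bw0 = rng.normal(size=(C, C)), np.zeros(C)

f_s = project(F, Ws0, bs0).reshape(-1)         # L(.): reshape to a vector
f_w = project(F, Ww0, bw0).reshape(-1)

# Two-layer classification heads; the second layer maps directly to the
# M or N classes here for brevity.
Ws1, bs1 = 0.1 * rng.normal(size=(D, 4)), np.zeros(4)
Ws2, bs2 = 0.1 * rng.normal(size=(4, M)), np.zeros(M)
Ww1, bw1 = 0.1 * rng.normal(size=(D, 6)), np.zeros(6)
Ww2, bw2 = 0.1 * rng.normal(size=(6, N)), np.zeros(N)

def speaker_head(f):
    return softmax(fc(fc(f, Ws1, bs1, relu), Ws2, bs2, sigmoid))

def keyword_head(f):
    return softmax(fc(fc(f, Ww1, bw1, sigmoid), Ww2, bw2, sigmoid))

y_s, y_w = speaker_head(f_s), keyword_head(f_w)    # main predictions
y_ws, y_sw = speaker_head(f_w), keyword_head(f_s)  # cross predictions (training only)
```

Each head ends in a softmax, so `y_s` and `y_w` are probability mass functions over the $M$ speakers and $N$ keywords, as required by the architecture.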
\subsection{The Loss Function for Collaborative Training} The training dataset, $\Omega_{\text{T}}=\{\pmb{x}(k), \pmb{y}_{s}(k), \pmb{y}_{w}(k)|k=1,\dots,K\}$, contains $K$ observations, where $\pmb{x}(k)$ is the input utterance indexed by $k$, and $\pmb{y}_s(k)$ and $\pmb{y}_w(k)$ are respectively the ground-truth speaker ID and keyword class pertaining to $\pmb{x}(k)$. The proposed model predicts the speaker ID $\hat{\pmb{y}}_s(k)$ and the keyword class $\hat{\pmb{y}}_w(k)$. The goal of model training is to fit the feature extractor, the two projection networks, and the two classification networks, achieved by minimizing the loss function, $\mathcal{L}$: \begin{equation} \mathcal{L}=\mathcal{L}_s+\mathcal{L}_w+\mathcal{L}_{sw}+\mathcal{L}_{ws}, \end{equation} which is composed of four components: \begin{equation} \begin{aligned} &\mathcal{L}_s=-\sum_{k=1}^K<\pmb{y}_{s}(k),\log \hat{\pmb{y}}_{s}(k)>,\\ &\mathcal{L}_w=-\sum_{k=1}^K<\pmb{y}_{w}(k), \log \hat{\pmb{y}}_{w}(k)>,\\ &\mathcal{L}_{sw}=\sum_{k=1}^K\|\hat{\pmb{y}}_{sw}(k)-1/N\|_2^2,\\ &\mathcal{L}_{ws}=\sum_{k=1}^K\|\hat{\pmb{y}}_{ws}(k)-1/M\|_2^2. \end{aligned}\\ \end{equation} Here, $<\cdot,\cdot>$ denotes the inner product of two vectors. $\mathcal{L}_s$ is a cross-entropy loss penalizing the inaccuracy in classifying speakers by subject-specific features, and $\mathcal{L}_w$ is a cross-entropy loss penalizing the inaccuracy in classifying keywords by keyword-specific features. $\mathcal{L}_{sw}$ regulates the subject-specific features to be keyword-agnostic, meaning that the ideal prediction scores on keyword classes follow a uniform distribution. Similarly, $\mathcal{L}_{ws}$ regulates the keyword-specific features to be subject-agnostic. \subsection{Inspector Verification} \label{subsec:Inspector Verification} Given an input utterance $\pmb{x}$ ($\notin \Omega_{\text{T}}$), the model renders the speaker classification scores $\hat{\pmb{y}}_s$ that measure the probability of being each of the $M$ speakers.
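The four loss components above can be sketched for a single observation $k$ (the total training loss sums this quantity over the $K$ observations); this is a minimal NumPy rendering of the stated formulas, not the training code itself.

```python
import numpy as np

def s2c_loss(y_s, y_w, p_s, p_w, p_sw, p_ws):
    """Per-observation collaborative-training loss.

    y_s, y_w : one-hot ground truth (speaker, keyword)
    p_s, p_w : main predictions; p_sw, p_ws : cross predictions.
    """
    M, N = len(y_s), len(y_w)
    L_s = -np.dot(y_s, np.log(p_s))        # cross-entropy on speaker prediction
    L_w = -np.dot(y_w, np.log(p_w))        # cross-entropy on keyword prediction
    L_sw = np.sum((p_sw - 1.0 / N) ** 2)   # keep subject features keyword-agnostic
    L_ws = np.sum((p_ws - 1.0 / M) ** 2)   # keep keyword features subject-agnostic
    return L_s + L_w + L_sw + L_ws
```

With confident, correct main predictions and uniform cross predictions, the loss approaches zero, which is exactly the behavior the two regularization terms are designed to encourage.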
A verification module can be further developed, which uses the speaker classification result to verify whether the speaker is in the pool of authorized inspectors. The pool of unauthorized speakers is infinite. Therefore, a method that detects unauthorized speakers without collecting any data from them is ideal. In predicting the class of a speaker, the model is likely to render prediction scores $\hat{\pmb{y}}_s$ close to a discrete uniform distribution if the speaker is unauthorized. That is, the speaker is not more likely to be one of the $M$ speakers than another. However, the speaker classification scores of some hard-to-analyze authorized inspectors may have a similar pattern. This paper develops a method to differentiate unauthorized speakers from authorized inspectors. A measure, $\lambda_v$, is defined as the ratio of the highest score $\hat{y}_s^{(1)}$ to the second highest score $\hat{y}_s^{(2)}$ of $\hat{\pmb{y}}_s$: \begin{equation} \lambda_v=\hat{y}_s^{(1)}/\hat{y}_s^{(2)}, \label{eq:lambdav} \end{equation} which quantifies the minimum relative strength of the top-ranked prediction score. $\lambda_v$ takes values within the range $[1, \infty)$. The larger the value, the stronger the belief in the top-scoring prediction. A threshold needs to be defined appropriately to differentiate unauthorized speakers from authorized inspectors according to $\lambda_v$. A threshold $\lambda$ is defined based on the training dataset: \begin{equation} \lambda=\frac{1}{K}\sum_{k=1}^K\frac{1}{\text{var}[\hat{\pmb{y}}_{s}(k)]} \label{eq:lambda} \end{equation} where $\text{var}[\hat{\pmb{y}}_s(k)]$ designates the variance of $\hat{\pmb{y}}_s(k)$, the classification scores for the person who spoke the input utterance $\pmb{x}(k)\in\Omega_{\text{T}}$. A small variance indicates difficulty in trusting the top-scoring prediction. Therefore, Eq. \ref{eq:lambda} indicates that the easier the model classifies speakers, the smaller the value that $\lambda$ takes.
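A minimal sketch of the verification measure in Eq. \ref{eq:lambdav} and the threshold in Eq. \ref{eq:lambda}, assuming the speaker classification scores are available as NumPy vectors:

```python
import numpy as np

def ratio_measure(y_hat):
    """lambda_v: the highest classification score divided by the runner-up score."""
    top, second = np.sort(y_hat)[-2:][::-1]
    return top / second

def threshold(train_scores):
    """lambda: mean of the inverse variances of the training-set score vectors."""
    return float(np.mean([1.0 / np.var(y) for y in train_scores]))

def is_authorized(y_hat, lam):
    # The speaker is predicted as an authorized inspector when the ratio
    # measure exceeds the threshold.
    return ratio_measure(y_hat) > lam
```

For $M=5$, the variance of a one-hot score vector is $(M-1)/M^2=0.16$, so its inverse is $6.25=M+1+\frac{1}{M-1}$, consistent with the lower bound of $\lambda$ derived below.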
A derivation further shows that the threshold $\lambda$ takes a value within the range $[M+1+\frac{1}{M-1}, \infty)$, and the lower boundary of the threshold, $M+1+\frac{1}{M-1}$, approaches $M+1$ as $M$ increases. That is, the lower boundary increases with the number of subjects that the training dataset contains. \subsection{Model Adaptation to Worker Changes} \label{subsec:Model Adaption to Worker Changes} If any inspectors leave the group (e.g., due to retirement, turnover, or leave), they become inactive users of the model. Regarding these changes, the speaker-keyword classification model does not have to be updated. A speaker recognized as an inactive inspector is treated as ``others not on duty''. However, if new inspectors join the group (e.g., new hires or contractors), the model must be calibrated for two reasons. First, the speaker classification network will have one or multiple new classes. Second, adding the data from the new inspectors to the training dataset may further improve the model's ability to classify keywords, particularly when the original training dataset contains only a few subjects. To calibrate the model for newly added inspectors, a small amount of data will be collected from the new inspectors and added to the training dataset. For example, the additional training data can be collected by letting each new inspector say every keyword five times. In the calibration, the keyword-specific feature projection network, the keyword classification network, and the feature extractor will use their current weights as the initial values. The underlying rationale is that those networks are at least near optimal before the calibration, and the updated training dataset may help refine them to become optimal. However, the subject-specific feature projection network and the speaker classification network will use randomly assigned weights as the initial values.
Re-training these two networks from scratch avoids the issue of forgetting existing inspectors while learning to recognize new ones. \section{Implementation Details} \label{sec:ImplementationDetails} \subsection{The Data} The study collected inspection command data spoken by eight subjects, who were trained to guide a semi-autonomous drone to perform a bridge inspection job consisting of four tasks, using a virtual reality-based training system \cite{li2022virtual}. In this study, the drone can automatically perform the four tasks by flying along their pre-planned routes with GPS-based navigation and basic obstacle avoidance functions. The start and termination of a task, as well as certain deviations from the pre-planned path of the task, must be guided by an inspector. Ten keywords, summarized in Table \ref{tab:keywords}, were collected; they fall into three categories. ``BIRDS'' is the name of the drone, and it is the wake-up command with which an inspector triggers the communication with the drone. The communication remains on until a silence of over two seconds is detected. The inspector uses the command ``Task $i$'' ($i=$ One, Two, Three, Four) to let the drone start a specific task. Therefore, five keywords fall into the category of assignment commands. Four additional single-word commands allow the inspector to modify the automatic inspection mode. The inspector can use the command ``Backward'' to ask the drone to move backward along the pre-defined path. The drone will stay still if it receives the command ``Hover''. The drone will continue performing the uncompleted task automatically if the inspector says ``Continue''. The command ``Stop'' will terminate the current task and let the inspector take control of the drone. This list of keywords is an example developed for one inspection job. Commands can be developed for any inspection job with unique specifications.
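The command protocol above (wake-up word, assignment commands, and adjustment commands) can be sketched as a keyword dispatcher. The session logic follows the protocol described here, but the action names and the dispatcher itself are illustrative assumptions, not the system's actual control API.

```python
# Hypothetical dispatcher mapping recognized keywords to drone actions.
WAKE_WORD = "BIRDS"
TASK_NUMBERS = {"One": 1, "Two": 2, "Three": 3, "Four": 4}
ADJUSTMENTS = {"Backward", "Continue", "Hover", "Stop"}

def dispatch(keywords):
    """Turn a recognized keyword sequence into (action, argument) pairs."""
    actions, awake, pending_task = [], False, False
    for kw in keywords:
        if kw == WAKE_WORD:
            awake = True                 # wake-up command opens the session
        elif not awake:
            continue                     # ignore commands before wake-up
        elif kw == "Task":
            pending_task = True          # an assignment expects a task number next
        elif pending_task and kw in TASK_NUMBERS:
            actions.append(("start_task", TASK_NUMBERS[kw]))
            pending_task = False
        elif kw in ADJUSTMENTS:
            actions.append((kw.lower(), None))
    return actions
```

For example, the sequence ``BIRDS'', ``Task'', ``Two'', ``Hover'' starts task 2 and then holds position, while commands issued before the wake-up word are ignored.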
\begin{table}[htbp] \centering \caption{List of keywords in an inspection job} \begin{tabular}{l|l} \hline Category& Keywords\\ \hline Wakeup & ``BIRDS"\\ Assignment & ``Task'', ``One'', ``Two'', ``Three'', ``Four''\\ Adjustment & ``Backward'', ``Continue'', ``Hover'', ``Stop''\\ \hline \end{tabular} \label{tab:keywords} \end{table} Half of the eight subjects in this study are female, and the other half are male. Each of them repeated each of the ten keywords about 50 times. Utterances, each containing a keyword and lasting 1.5 seconds, were extracted from the recorded audio signals. The utterances were all transformed into Mel-spectrogram images. The dataset in the Mel-spectrogram format can be downloaded from the project webpage \cite{SpeechData_Github}. Figure \ref{fig:MEL} illustrates one image of each keyword from every subject. The similarity of the images in the same row and the dissimilarity among those in the same column are both observed, indicating that keyword-specific features are present and can be extracted from the images for classifying keywords. Meanwhile, inter-subject variation is present in each row, which means subject-specific features are intertwined with keyword-specific features. \begin{figure}[htbp] \centering \includegraphics[width=0.6\columnwidth]{Melspectrogram_examples.png} \caption{Mel-spectrogram examples by subject and by keyword} \label{fig:MEL} \end{figure} \subsection{Model Training} A speaker-keyword classification model is trained and refined based on the following study scenario. The initial group has five authorized inspectors. The model proposed in Section \ref{subsec:The Multi-tasking Model Architecture} is trained, validated, and tested for this group using their data. The data collected from each subject and on each keyword are split into three subsets: training (60\%, $\sim$30 utterances), validation (20\%, $\sim$10 utterances), and testing (20\%, $\sim$10 utterances).
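The stratified 60/20/20 split described above can be sketched as follows. The conversion of each 1.5-second utterance into a $224\times224\times3$ Mel-spectrogram is assumed to happen beforehand (e.g., with a standard audio library) and is omitted here; the sketch covers only the per-subject, per-keyword split.

```python
import random

def split_utterances(utterances_by_subject_keyword, seed=0):
    """60/20/20 train/validation/test split, stratified by subject and
    keyword as in this study (about 50 repetitions per keyword per subject)."""
    rng = random.Random(seed)
    train, val, test = [], [], []
    for (subject, keyword), utts in utterances_by_subject_keyword.items():
        utts = list(utts)
        rng.shuffle(utts)
        n_tr, n_va = round(0.6 * len(utts)), round(0.2 * len(utts))
        train += [(subject, keyword, u) for u in utts[:n_tr]]
        val += [(subject, keyword, u) for u in utts[n_tr:n_tr + n_va]]
        test += [(subject, keyword, u) for u in utts[n_tr + n_va:]]
    return train, val, test
```

Splitting within each (subject, keyword) cell keeps the testing data evenly distributed over subjects and keywords, which the later experiments rely on.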
Developing a satisfactory model does not necessarily use up all the data reserved for model training. The requirement on the training data size will be discussed in the next section. The initial model is trained in two stages. In the first stage, the feature extractor, already pre-trained on ImageNet, is frozen, and the other four networks are trained from scratch for up to ten epochs. In the second stage, the feature extractor is unfrozen, and all five networks are refined for up to ten epochs. Either training stage may be terminated early if the validation accuracy does not improve for at least five epochs. The initial model is the one that achieves the lowest loss in the second stage. Later on, a remaining subject joins the group as a new inspector, and the initial model is calibrated accordingly by following the method delineated in Section \ref{subsec:Model Adaption to Worker Changes}. A hypothesis of this study is that model calibration for worker dynamics requires less data than training the initial model, which is usually built for a small group. Therefore, only five utterances of each keyword from the new inspector are added to the training data for calibrating the model. Experiments in the next section validate that this small amount is sufficient. Still, about ten utterances per keyword from each prospective new inspector are used for testing to maintain the even distribution of testing data over subjects and keywords. The model calibration for the incremental inspector class runs for ten epochs. \section{Results and Discussion} \label{sec:Results and Analysis} \subsection{Model Development Efficiency} To determine the requirements on training data size and training time for achieving a satisfactory classification accuracy, the speaker-keyword classification model is trained and tested on the keyword data collected from five subjects (sub1$\sim$sub5).
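The two-stage schedule used to train the initial model (backbone frozen, then end-to-end fine-tuning, each stage with early stopping) can be sketched framework-agnostically; `train_epoch` and `val_accuracy` are placeholders for the framework-specific training and evaluation steps, so this is an outline of the schedule rather than the actual training code.

```python
def two_stage_training(train_epoch, val_accuracy, max_epochs=10, patience=5):
    """Run stage 1 with the backbone frozen, then stage 2 end to end; each
    stage runs up to `max_epochs` epochs and stops early once the validation
    accuracy has not improved for `patience` consecutive epochs."""
    history = []
    for freeze_backbone in (True, False):     # stage 1, then stage 2
        best, stalled = -1.0, 0
        for _ in range(max_epochs):
            train_epoch(freeze_backbone)
            acc = val_accuracy()
            history.append((freeze_backbone, acc))
            if acc > best:
                best, stalled = acc, 0
            else:
                stalled += 1
                if stalled >= patience:       # early stopping
                    break
    return history
```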
Table \ref{tab:trainingcost} summarizes four models developed on training data of different sizes. Each model is trained and tested 10 times to obtain the average accuracy. The dataset for training the first model contains 10 utterances per keyword (/kw) from each of the five subjects, indicated by the vector (10, 10, 10, 10, 10). The overall keyword classification accuracy is 0.951, and the subject-level accuracy ranges from 0.878 to 0.991. The speaker classification accuracy is 0.969, and the accuracy of recognizing a specific subject ranges from 0.917 to 0.997. The model's ability to analyze subject 2 is clearly lower than for the other subjects. Subject 2 indeed looks quite different from the others in Figure \ref{fig:MEL}. To improve the accuracy for subject 2, the second model is developed by doubling the training data. This time, the average accuracy of classifying the keywords of subject 2 is effectively increased to 0.927, and the average accuracy of recognizing subject 2 is increased to 0.987. If 0.95 is the desired accuracy level, the second model has not achieved a satisfactory accuracy in classifying the keywords of subject 2, which motivates the third model, which adds 10 additional utterances of each keyword from subject 2 to the training data. This time, the keyword classification accuracy is at least 0.953 for every subject, and the speaker recognition accuracy is at least 0.984. Compared to the fourth model, which uses 30 utterances per keyword from all subjects, the third model is developed with less training data but comparable performance. Results in Table \ref{tab:trainingcost} indicate that collecting about 20 utterances per keyword from each subject is required to develop a model for a small group of inspectors and that, in some circumstances, a little more data from a difficult-to-analyze inspector is helpful.
In the remainder of the paper, the third model, trained using 30 utterances per keyword from subject 2 and 20 utterances per keyword from each of the other four subjects, is used as the base model for further discussion. Training the base model is efficient, taking only 63 seconds. Transferring the pre-trained ResNet50 into this study as the feature extractor is an important reason for this time efficiency. The inference speed of the model is reasonable, about 0.11 seconds per Mel-spectrogram. Adding the time required to convert an utterance into a Mel-spectrogram, about 0.03$\sim$0.04 seconds, the speed of analyzing the acoustic signal is about seven utterances per second. \begin{table*}[htbp] \centering \caption{Impact of training data size on the accuracy of speaker-command classification} \resizebox{\linewidth}{!}{% \begin{tabular}{c|c|rrrrr|r|rrrrr|r} \hline Trn. Data&Avg. Trn.&\multicolumn{6}{c|}{Avg. Accuracy of Keyword Classification}&\multicolumn{6}{c}{Avg. Accuracy of Speaker Classification}\\ \cline{3-14} \multicolumn{1}{c|}{Size (/kw)}&Time (sec)&sub1&sub2&sub3&sub4&sub5&group&sub1&sub2&sub3&sub4&sub5&overall\\ \hline (10,10,10,10,10)&45&0.946&0.878&0.966&0.977&0.991&0.951&0.994&0.917&0.977&0.959&0.997&0.969\\ (20,20,20,20,20)&57&0.970&0.927&0.988&0.988&0.999&0.974&0.991&0.987&0.992&0.987&1.000&0.991\\ (20,30,20,20,20)&63&0.973&0.953&0.989&0.988&1.000&0.980&0.984&0.999&0.993&0.986&1.000&0.992\\ (30,30,30,30,30)&74&0.985&0.958&0.989&0.988&1.000&0.984&0.982&0.994&0.997&0.990&1.000&0.993\\ \hline \end{tabular} } \label{tab:trainingcost} \end{table*} \subsection{Benefits of Pooled Training Data} The between-subject variation of a keyword's feature vector is always present. Therefore, a model trained on a large volume of data collected from just one subject would not be effective for classifying the same set of keywords spoken by other subjects. Pooling the data from a group of subjects has two advantages.
On the one hand, it allows for learning richer keyword representations. On the other hand, the between-subject variation can be utilized to differentiate subjects. As a result, a unified model can be developed from the pooled data to substitute for a set of point solutions, each dedicated to one subject. To illustrate the benefits of pooled training data, 18 models in 6 groups are developed. Table \ref{tab:datapooling} summarizes their average accuracy in classifying the keywords of each of the five subjects. The average is based on ten repetitions of model training and testing. The first group consists of three models trained on subject 1's data in different sizes. When the training data size is 10 utterances per keyword, the average accuracy in classifying the keywords of subject 1 is 0.937, but the average accuracy in classifying the same keywords spoken by other subjects ranges from 0.341 to 0.630. Adding more data collected from subject 1 to the training dataset does not effectively improve the classification accuracy for other subjects. For example, when the training data size is tripled, the average accuracy of classifying the keywords of other subjects is improved by 0.09 or less, far below a satisfactory level. The non-transferability is consistently observed among the other models trained on data collected from one subject. The last group of models is trained by pooling data from all five subjects. For instance, the training dataset that contains only 6 utterances per keyword from each of the 5 subjects is of the same size as the one consisting of 30 utterances per keyword from only one subject. With the pooled training dataset, the average accuracy in classifying the keywords of subject 2 is 0.821, and 0.935 or higher for the other subjects, as shown in Table \ref{tab:datapooling}, and the average accuracy in classifying speakers is 0.953.
In the scenario of collecting 10 utterances per keyword from the 5 subjects for model development, the unified model developed on the pooled dataset outperforms 4 out of the 5 models developed respectively using the unpooled data. The comparison in Table \ref{tab:datapooling} verifies the effectiveness of pooling data for learning richer keyword representations. \begin{table}[htbp] \centering \caption{Effectiveness of pooled training data} \begin{tabular}{l|rrrrr} \hline \multicolumn{1}{l|}{Training Data}&\multicolumn{5}{c}{Avg. Accuracy of Keyword Classification}\\ \cline{2-6} Size (/kw)&sub1&sub2&sub3&sub4&sub5\\ \hline (10,\;0,\;0,\;0,\;0)&0.937&0.341&0.515&0.604&0.630\\ (20,\;0,\;0,\;0,\;0)&0.983&0.378&0.565&0.649&0.633\\ (30,\;0,\;0,\;0,\;0)&0.998&0.428&0.570&0.694&0.663\\ \hline (0,\,10,\;0,\;0,\;0)&0.219&0.848&0.319&0.335&0.272\\ (0,\,20,\;0,\;0,\;0)&0.246&0.911&0.311&0.319&0.286\\ (0,\,30,\;0,\;0,\;0)&0.286&0.938&0.346&0.392&0.316\\ \hline (0,\;0,\,10,\;0,\;0)&0.516&0.396&0.993&0.635&0.622\\ (0,\;0,\,20,\;0,\;0)&0.481&0.380&0.993&0.668&0.665\\ (0,\;0,\,30,\;0,\;0)&0.492&0.358&0.992&0.691&0.647\\ \hline (0,\;0,\;0,\,10,\;0)&0.567&0.414&0.609&0.975&0.643\\ (0,\;0,\;0,\,20,\;0)&0.562&0.397&0.574&0.984&0.691\\ (0,\;0,\;0,\,30,\;0)&0.598&0.471&0.627&0.990&0.711\\ \hline (0,\;0,\;0,\;0,\,10)&0.488&0.328&0.583&0.688&0.963\\ (0,\;0,\;0,\;0,\,20)&0.502&0.289&0.588&0.686&0.989\\ (0,\;0,\;0,\;0,\,30)&0.532&0.252&0.616&0.733&0.986\\ \hline (6,\;6,\;6,\;6,\;6)&0.935&0.821&0.940&0.941&0.966\\ (10,10,10,10,10)&0.946&0.878&0.966&0.977&0.991\\ (20,30,20,20,20)&0.973&0.953&0.989&0.988&1.000\\ \hline \end{tabular} \label{tab:datapooling} \end{table} \subsection{Model Adaptability to New Inspectors} When a new inspector joins the inspector group, collecting a small amount of additional training data from the new inspector is sufficient to adapt the model to the changed group.
To determine how much additional training data a new inspector must provide, the base model is calibrated separately for each of the three remaining subjects (i.e., sub6$\sim$sub8) using two sizes of additional training data: 5 and 10 utterances per keyword from the new subject. Results are summarized in Table \ref{tab:owndata_adaptability}. The accuracy of the base model in classifying the keywords of subject 6 is 0.700. When subject 6 becomes a new inspector, the model is calibrated by adding just 5 utterances per keyword collected from the new subject to the training dataset. The accuracy of the calibrated model in classifying the keywords of subject 6 becomes 0.970, and the accuracy for the existing five subjects shows no significant change. If subject 7 is the new subject, collecting 10 utterances per keyword from the subject to calibrate the model effectively increases the accuracy from 0.464 to 0.968. Collecting 5 utterances per keyword from subject 8 effectively adapts the base model to this subject, as evidenced by the increase in accuracy from 0.267 to 0.990. The additional 5 to 10 utterances per keyword from the new subject are also sufficient for the model to learn to recognize the new subject with an accuracy near 1.000 without forgetting the existing subjects. The study verifies the efficiency and effectiveness of adapting an existing model to new inspectors. Table \ref{tab:owndata_adaptability} also indicates that a model's ability to classify keywords of unauthorized subjects improves as more subjects are included in the training data. For example, the accuracy of the base model in classifying the keywords of subject 8 is 0.267. After the base model is adapted to the bigger group that consists of subjects 1$\sim$6, the accuracy of the updated model in classifying the keywords of subject 8 increases to 0.336. Therefore, the model adaptability on a public dataset is further examined in a later section.
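As a rough illustration of why a handful of utterances can recalibrate a model to a new subject, the sketch below substitutes a toy nearest-centroid classifier for the actual S$^2$C network; the feature dimensions, the subject-specific offset, and the blending rule are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a trained model: one feature centroid per keyword.
n_keywords, dim = 10, 13
keyword_means = rng.normal(size=(n_keywords, dim))

def classify(x, centroids):
    """Nearest-centroid keyword prediction."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

# A hypothetical new subject shifts every keyword by a subject-specific offset.
offset = rng.normal(scale=2.0, size=dim)
new_subject_samples = {
    k: keyword_means[k] + offset + rng.normal(scale=0.1, size=(5, dim))
    for k in range(n_keywords)
}

# Calibration: blend in 5 utterances per keyword from the new subject.
adapted = np.array([
    0.5 * keyword_means[k] + 0.5 * new_subject_samples[k].mean(axis=0)
    for k in range(n_keywords)
])

acc_before = np.mean([classify(new_subject_samples[k][0], keyword_means) == k
                      for k in range(n_keywords)])
acc_after = np.mean([classify(new_subject_samples[k][0], adapted) == k
                     for k in range(n_keywords)])
print(acc_before, acc_after)
```

In the paper's setting, adaptation retrains the deep model on the augmented dataset; the centroid blend above only mimics why a few utterances that capture the between-subject shift are enough.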
\begin{table*}[htbp] \centering \caption{Requirement on the training data size for model adaptation} \begin{tabular}{c|r|rrrrr|rrr} \hline \multicolumn{2}{c|}{Additional Trn. Data}&\multicolumn{5}{c|}{Authorized Subjects}&\multicolumn{3}{c}{New Subjects}\\ \hline Source&Size (/kw)&sub1&sub2&sub3&sub4&sub5&sub6&sub7&sub8\\ \hline \multirow{2}{*}{sub6}&5&0.973&0.981&0.991&1.000&1.000&0.970&0.632&0.336\\ &10&0.955&0.972&0.991&1.000&1.000&0.980&0.648&0.316\\ \hline \multirow{2}{*}{sub7}&5&0.973&0.981&0.991&1.000&1.000&0.760&0.936&0.287\\ &10&0.973&0.981&0.982&1.000&1.000&0.760&0.968&0.297\\ \hline \multirow{2}{*}{sub8}&5&0.964&0.981&0.991&1.000&1.000&0.580&0.648&0.990\\ &10&0.973&0.981&0.991&1.000&1.000&0.650&0.664&1.000\\ \hline \multicolumn{2}{c|}{Base model}&0.973&0.963&0.982&0.990&1.000&0.700&0.464&0.267\\ \hline \end{tabular} \label{tab:owndata_adaptability} \end{table*} \subsection{Effectiveness of Inspector Verification} The effectiveness of the inspector verification method delineated in Section \ref{subsec:Inspector Verification} is illustrated using the base model. In this setting, subjects 1$\sim$5 are authorized inspectors, and subjects 6$\sim$8 are unauthorized ones. The threshold value $\lambda_v$ in Eq. (\ref{eq:lambdav}) for the base model is 7.048, close to its lower boundary of 6.25. If the ratio value $\lambda$ calculated from a speaker classification result is greater than this threshold value, the inspector is predicted as an authorized inspector. The test data of the eight subjects are used to test the performance of the inspector verification method. Table \ref{tab:speaker verification} summarizes the distribution of the $\lambda$ values by subject. In total, 869 utterances are tested, with 543 from the authorized inspectors (sub1$\sim$sub5) and 326 from the unauthorized speakers (sub6$\sim$sub8).
The summary statistics of $\lambda$ in Table \ref{tab:speaker verification} clearly differentiate the two groups of subjects, supporting the use of the ratio $\lambda$ against the threshold $\lambda_v$ defined in Eq. (\ref{eq:lambdav}) for inspector verification. \begin{table}[htbp] \centering \caption{Statistics of Inspector Verification} \begin{tabular}{c|R{1.5cm}|R{0.9cm}R{1cm}R{1cm}R{1.1cm}R{1.1cm}R{1cm}|R{1cm}R{1.3cm}} \hline &&\multicolumn{6}{c|}{$\lambda$}&\multicolumn{2}{c}{$\{\lambda>\lambda_{v}\}$}\\ \cline{3-10} Speaker&Test Sample Size&Min&Q1 & Q2 & Q3 &Max &Mean& Count&Percent \\ \hline sub1&111 & 1.095 & 40.398 & 71.524 & 103.043 & 131.426 & 70.622 & 106 & 95.5\%\\ sub2&108 & 1.233 & 47.204 & 85.793 & 123.482 & 160.728 & 82.809 & 104 & 96.3\%\\ sub3&115 & 1.988 & 14.839 & 39.180 & 54.645 & 72.526 & 36.808 & 102 & 88.7\%\\ sub4&109 & 1.801 & 24.304 & 37.871 & 48.019 & 64.243 & 35.426 & 101 & 92.7\%\\ sub5&100 & 3.641 & 37.738 & 49.482 & 57.686 & 81.912 & 46.321 & 97 & 97.0\%\\ sub6&100 & 1.033 & 1.846 & 3.726 & 11.555 & 49.508 & 8.560 & 36 & 36.0\%\\ sub7&125 & 1.003 & 1.390 & 2.984 & 5.802 & 33.536 & 5.554 & 28 & 22.4\%\\ sub8&101 & 1.007 & 1.439 & 2.258 & 4.408 & 18.673 & 3.581 & 14 & 13.9\%\\ \hline \end{tabular} \label{tab:speaker verification} \end{table} Table \ref{tab:Inspector Verification} presents the confusion matrix of inspector verification. 510 of the 543 utterances from the authorized inspectors are predicted correctly, indicating the chance that the base model correctly verifies authorized inspectors is 93.9\%. 248 of 326 utterances from the unauthorized inspectors are predicted correctly, which means the chance of successfully detecting an unauthorized speaker is 76.1\%. As a result, the precision of inspector verification is 86.7\% (=510/588), and the precision of unauthorized-speaker detection is 88.3\% (=248/281). The result in Table \ref{tab:Inspector Verification} indicates the proposed inspector verification method is effective.
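The verification metrics quoted above follow directly from the confusion-matrix counts in Table \ref{tab:Inspector Verification}; the short computation below reproduces them:

```python
# Counts taken from the inspector-verification confusion matrix.
tp = 510   # authorized utterances predicted authorized
fn = 33    # authorized utterances predicted unauthorized
fp = 78    # unauthorized utterances predicted authorized
tn = 248   # unauthorized utterances predicted unauthorized

recall_authorized = tp / (tp + fn)        # chance of verifying an authorized inspector
recall_unauthorized = tn / (tn + fp)      # chance of detecting an unauthorized speaker
precision_authorized = tp / (tp + fp)     # precision of inspector verification
precision_unauthorized = tn / (tn + fn)   # precision of unauthorized-speaker detection

print(f"{recall_authorized:.1%} {recall_unauthorized:.1%} "
      f"{precision_authorized:.1%} {precision_unauthorized:.1%}")
# → 93.9% 76.1% 86.7% 88.3%
```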
A model owner can adjust the threshold value according to the specific situation of implementation. For example, slightly increasing the threshold value $\lambda_v$ increases the sensitivity of detecting unauthorized speakers, but lowers the accuracy in verifying authorized inspectors. The lowered verification accuracy can be recovered by adopting a temporal coherence analysis that verifies a speaker based on a sequence of acoustic inputs from the speaker rather than a single input. \begin{table}[htbp] \centering \caption{The confusion matrix of the base model's inspector verification result} \begin{tabular}{l|r|rr|r} \hline \multicolumn{2}{c}{}&\multicolumn{2}{c|}{Prediction}&\\ \cline{3-4} \multicolumn{2}{c}{}&Authorized& Unauthorized &Total\\ \hline \multirow{2}{*}{Ground Truth}&Authorized&510&33&543\\ &\multicolumn{1}{c|}{Unauthorized}&78&248&326\\ \hline \multicolumn{2}{r|}{Total}&588&281&869\\ \hline \end{tabular} \label{tab:Inspector Verification} \end{table} \subsection{Impacts of Larger Group Sizes} To demonstrate the applicability of the proposed speaker-keyword classification model to groups of larger sizes, an online audio dataset \cite{becker2018interpreting} is analyzed. This dataset includes 30,000 utterances of digits 0$\sim$9 collected from 60 different speakers who spoke every digit 50 times. First, an initial model is developed for a group of five subjects. For consistency with the inspection command example in this paper, the same training data size is used for this classification model. That is, the initial training dataset contains 20 utterances per keyword from each of the five subjects. The validation dataset and the test dataset each have 10 utterances per keyword from the subjects.
The initial group of authorized subjects is expanded by adding one subject at a time until reaching a size of 30 authorized subjects. When adapting the model to a new subject, 5 utterances per keyword are collected from the subject and added to the training dataset to calibrate the model. In total, 26 models are developed, from a 5-subject group to a 30-subject group. Subjects whose data are included in the training dataset form the authorized group, and those whose data are not included form the unauthorized group. In this experiment, subjects 51$\sim$60 form a group of 10 unauthorized subjects. To verify that keyword classification is not negatively affected by the increasing number of authorized subjects, the 95\% confidence intervals of each model's keyword classification accuracy for authorized subjects and for unauthorized subjects are shown in Figure \ref{fig:digit_Data_performance}. The mean accuracy of classifying the keywords spoken by the authorized subjects is near 1, and adding more subjects to the group does not change it. The initial model's mean accuracy in classifying the keywords spoken by the unauthorized speakers is 0.927. This accuracy grows as the number of authorized subjects in the training dataset increases, reaching 0.975 when the size reaches 30 subjects. Subjects 52, 57, and 60 are the three unauthorized subjects whose spoken keywords are classified by the initial model with a 0.86$\sim$0.88 accuracy. When the number of authorized subjects is increased to 30, the mean accuracy in classifying the keywords of subjects 57 and 60 is 0.89, but it is 1 for subject 52. The difference between the two groups' mean accuracy is anticipated to diminish as the training dataset contains data from more subjects. However, the interval estimate of the classification accuracy for the unauthorized group is clearly wider than that of the authorized group. Figure
\ref{fig:digit_Data_performance} confirms that pooling data from more subjects reduces the gap in mean keyword-classification accuracy between the two groups, but not the difference in their accuracy variances. To achieve a reliable classification result, it is therefore recommended to include a small sample of training data from any new inspector. \begin{figure}[htb] \centering \includegraphics[width=0.6\columnwidth]{digit_Data_performance.png} \caption{Mean value of subject-level keyword classification accuracy and the 95\% interval estimate: authorized subjects vs. unauthorized subjects} \label{fig:digit_Data_performance} \end{figure} It is anticipated that classifying speakers becomes more challenging as the number of authorized inspectors keeps increasing, but this difficulty can be addressed by collecting more training data from the incrementally added subjects. To verify this hypothesis, Figure \ref{fig:exsiting_performance} shows the mean value of subject-level speaker classification accuracy and the 95\% interval estimate under two scenarios: the training data contain 5 utterances per keyword from each incrementally added subject vs. 20 utterances per keyword. When only 5 utterances per keyword are collected from each sequentially added subject, the mean accuracy clearly demonstrates a decreasing trend as the number of authorized subjects increases, and the interval estimate becomes wider. If more training data are collected from newly added subjects, the decreasing trend of the mean accuracy slows down and the interval estimate becomes narrower. Figure \ref{fig:exsiting_performance} implies that adapting the speaker-keyword classification model to a larger group of subjects may require more training data due to the challenge of classifying speakers in a larger group.
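The interval estimates reported in the figures can be computed from the subject-level accuracies; the sketch below uses the common normal-approximation interval (the exact interval procedure is not spelled out in this section), and the accuracy values are hypothetical:

```python
import math
import statistics

def mean_ci95(values):
    """Mean and normal-approximation 95% confidence interval
    for a sample of subject-level accuracies."""
    m = statistics.mean(values)
    s = statistics.stdev(values)
    half = 1.96 * s / math.sqrt(len(values))
    return m, (m - half, m + half)

# Hypothetical subject-level keyword accuracies for an unauthorized group.
accs = [0.86, 0.88, 0.92, 0.95, 0.97, 0.99, 1.00, 0.93, 0.90, 0.97]
m, (lo, hi) = mean_ci95(accs)
print(f"mean={m:.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
```

A wider spread of per-subject accuracies, as observed for the unauthorized group, directly widens this interval even when the mean stays high.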
\begin{figure}[htbp] \centering \includegraphics[width=0.6\columnwidth]{exsiting_performance.png} \caption{Mean value of subject-level speaker classification accuracy and the 95\% interval estimate: 5 vs. 20 utterances per keyword from each newly added subject} \label{fig:exsiting_performance} \end{figure} \section{Conclusions} \label{sec:Conclusions} This paper aimed to develop a multi-task deep learning model that classifies the keywords of spoken commands for guiding a semi-autonomous drone in inspection and, simultaneously, recognizes the inspectors who spoke the commands. To achieve this goal, a Share-Split-Collaborate (S$^2$C) learning architecture was designed and found to be effective in this study. With the S$^2$C architecture, the two classification tasks can share the feature extractor, and the subject-specific and keyword-specific features intertwined in the extracted features can be split through feature projection and collaborative training. The model is trained on pooled data collected from the group of authorized inspectors who will use the model to stay in the loop of the drone-performed inspection. From the pooled training data, the model learns richer keyword representations and the features needed to differentiate inspectors. This study collected an inspection keyword dataset from 8 subjects to illustrate the proposed model. The dataset contains 10 keywords, each repeated about 50 times by every subject. A base model for a group of 5 authorized subjects was developed on a training dataset composed of 20 utterances per keyword from each of the 5 subjects, plus a little more from a hard-to-analyze subject. The model achieved 95.3\% or higher average accuracy in classifying the keywords of every subject in the group and a 99.2\% average accuracy in classifying the subjects.
The proposed model effectively addresses the non-transferability of point solutions, each trained on the data from one subject and usable only by that subject due to the between-subject variation of keyword features. Consequently, adapting the base model to a new subject only requires collecting a small amount of training data from the new subject, e.g., 5 utterances per keyword. The speaker prediction scores were also used to verify whether a command is given by an authorized inspector. The base model's success rate is $93.9\%$ in verifying authorized subjects and $76.1\%$ in detecting unauthorized subjects. The proposed model was further trained and tested on a public audio dataset collected from 60 subjects. The proposed model is applicable to large groups, as manifested by the consistently high and reliable keyword classification accuracy. Speaker classification becomes more challenging when the group size is large, indicated by the decreased mean value and the enlarged variance of subject-level speaker recognition accuracy. Nevertheless, collecting sufficient training data from the newly added subjects can address this challenge effectively. Implementing the classification model still requires additional efforts that go beyond the scope of this paper. For example, an additional module needs to be created to locate command-related segments in streaming data and extract keyword utterances from those segments. Every working environment is unique and every inspector is special. How to further improve the model's adaptability to noise in open working environments and to the widely varied speaking habits of inspectors remains an open research question. Effective data augmentation and model adaptation methods could be possible solutions. The current study assumes a bijective relationship between the audio commands and the actions of the drone.
A more user-friendly approach to communication requires a surjective mapping that associates the variants of each command spoken by inspectors with the corresponding action of the drone. If new inspectors to be included as authorized model users propose adding new keywords, class-incremental learning will be required for both classification tasks. This paper provides an opportunity for exploring these new research questions. \section*{Acknowledgment} Qin and Li are supported by the National Science Foundation through the award ECCS-\#2026357. Yao is supported by the SBU-BNL seed grant (1168726-9-63845) and the National Science Foundation through the award ECCS-\#2129673. \vspace{0.1in} \bibliographystyle{plain}
1708.08187
\section{Introduction} \label{sec:introduction} Metasurfaces have demonstrated in the past few years an exceptional ability to implement a myriad of electromagnetic functionalities, forming highly-efficient ultrathin devices for engineered beam refraction \cite{Pfeiffer2013,Monticone2013,Selvanayagam2013,Epstein2016_3}, reflection \cite{Cui2014,Asadchy2015,Asadchy2016,Estakhri2016_1,Epstein2016_4}, focusing \cite{Lin2014,Aieta2015}, polarization manipulation \cite{Zhao2012,Pfeiffer2014_1,Pfeiffer2014_3,Achouri2015_1,Yin2015}, controlled absorption \cite{Wakatsuchi2013,Asadchy2015_1,Radi2015}, cloaking \cite{Monti2012, Sounas2015, Vellucci2017}, and advanced radiation pattern molding \cite{Pfeiffer2015_1,Epstein2016,Epstein2017,Raeker2016,Raeker2017,Minatti2016_1}, to name a few. These devices are typically designed by prescribing suitable continuous metasurface constituents (\emph{macroscopic} design), implementing a desirable field transformation via the corresponding generalized sheet transition conditions (GSTCs) \cite{Kuester2003, Tretyakov2003, Epstein2016_2,Estakhri2016}. Subsequently, the continuous design specifications are discretized into subwavelength unit cell sizes, and realized using appropriate polarizable particles (\emph{microscopic} design). While numerous efficient semianalytical \emph{macroscopic} design methods were developed in recent years (e.g., \cite{Epstein2014, Pfeiffer2014_3, Epstein2016_3, Ranjbar2017, Asadchy2016,Estakhri2016_1}), allowing conceptual implementation of advanced field transformations via metasurfaces, translating the latter into physical structures remains a significant challenge. Most of the \emph{microscopic} design schemes rely on full-wave numerical simulations to associate a given subwavelength structure with its equivalent meta-atom constituents, yielding a lookup table that is utilized for general metasurface realization. 
However, whether in microwave or optical frequencies, bianisotropic metasurfaces, typically necessary for complex beam manipulation, require simultaneous tuning of multiple degrees of freedom at the meta-atom level \cite{Zhao2012, Pfeiffer2014_3, Achouri2015_1, Epstein2016_3, Alaee2015, Alaee2015_1, Odit2016, Kim2016, Asadchy2016_2}; relying on full-wave optimization to engineer each and every meta-atom quickly becomes unreasonable, especially for generally-inhomogeneous metasurfaces (e.g., \cite{Asadchy2016,Epstein2016_4,Epstein2017}). Very recently, several authors have revisited the problem of perfect reflection, aiming at fully-coupling a plane wave incoming from a given angle to a reflected plane wave propagating towards a desirable (non-specular) direction, based on diffraction grating principles \cite{Sounas2016, Wong2017, Memarian2017, PaniaguaDominguez2017, Radi2017, Wong2017_1}. This problem, which was recently shown to be quite challenging to solve using metasurfaces \cite{Asadchy2016,Estakhri2016_1,Epstein2016_4,DiazRubio2017,Asadchy2017}, turned out to be fully solvable with periodic structures, having only a single or a few subwavelength meta-atoms in each macro-period (whose dimensions are comparable to the wavelength). In contrast to metasurfaces that implement the same functionality, which are comprised of numerous different meta-atoms in a macro-period, these so-called metagratings only require the design of a \emph{single} polarizable particle to achieve an optimal $100\%$ conversion from incident to reflected waves; thus, they substantially overcome the aforementioned microscopic design challenge associated with metasurfaces. 
This complexity reduction is facilitated by the fact that metagratings aim at cancelling a finite number of spurious \emph{propagating} diffraction modes, whereas metasurfaces implement a prescribed field transformation, which does not allow \emph{any} undesirable diffraction mode (neither propagating nor evanescent) to be excited \cite{Epstein2014_2}. Although this destructive interference mechanism by which efficient diffraction engineering can be achieved has been known for many years in the field of dielectric gratings (e.g., \cite{Perry1995,Destouches2005,Ito2013}), a rigorous scheme to determine the optimal grating geometry was absent, and designs were mainly based on physical intuition and numerical optimization. In a recent paper, Ra'di \textit{et al.} \cite{Radi2017} developed a rigorous analytical methodology to design metagratings for perfect engineered reflection, based on a periodic array of identical subwavelength particles situated in free space, backed by a perfect electric conductor (PEC). Formulating the fields as a superposition of the fields scattered in the absence of the particle array and the fields generated by the array itself, they found conditions on the required array-PEC separation distance and the effective grid impedance that guarantee that (1) the specular reflection destructively interferes with the corresponding Floquet-Bloch (FB) harmonics radiated by the particle array; and (2) all of the incident power is coupled to a different (prescribed) FB mode. This facilitated perfect reflection via a single-element periodic structure; once the distance between the particle grid and the PEC was determined for given angles of incidence and reflection, the physical structure of the meta-atom was obtained via a simple parametric sweep. Furthermore, it was demonstrated therein that using meta-atoms with more degrees of freedom (e.g., bianisotropic ones) extends the applicability of such metagratings to additional scenarios.
\begin{figure*}[htb] \includegraphics[width=16cm]{physicalConfiguration.pdf}% \caption{Physical configuration of the PEC-backed electrically-polarizable beam-splitting metagratings. (a) Side view; $\Lambda$-periodic metagrating separated by $h$ from the PEC, designed to eliminate specular reflection. (b) Top view; distributed impedance per-unit-length $\tilde{Z}$ is formed by finite loads repeating every $L$ along the $x$ axis. (c) Trimetric view of a single electrically-polarizable loaded element [marked by a dashed rectangle in (b)]. Trace width, separation, and thickness are given by $w$, $s$, and $t$, respectively; the load impedance is controlled by the capacitor width $W$ (denoted in red).} \label{fig:physical_configuration} \end{figure*} Recognizing the potential of these novel devices for advanced beam manipulation, we present in this paper a thorough investigation of their fundamental properties. In contrast to \cite{Radi2017}, which utilized magnetically-polarizable particles excited by transverse magnetic (TM) fields, we treat herein electrically-polarizable metagratings, excited by transverse electric (TE) fields (Fig. \ref{fig:physical_configuration}). Focusing on electrically-polarizable particles in the form of loaded conductive wires has two merits. First, such structures are more practical from a realization point of view, as they can be naturally integrated into planar devices, as was vastly demonstrated for microwave, terahertz, and optical metasurfaces (e.g., \cite{Zhao2012, Pfeiffer2014, Kuznetsov2015, Chang2017, Achouri2015_1, Epstein2016}). Second, it allows harnessing of well-established analytical models \cite{Wait1954,Tretyakov2003,Liberal2012} for formulation of efficient and insightful synthesis and analysis schemes. Indeed, we utilize these models to derive a detailed semianalytical design methodology for reflective metagratings; for simplicity, we focus on perfect wide-angle beam-splitting [Fig.
\ref{fig:physical_configuration}(a)], a functionality that was found to be challenging for metasurfaces \cite{Estakhri2016,Estakhri2016_1}, and was mentioned in passing in \cite{Radi2017}. Our derivation goes one step beyond \cite{Radi2017}, deriving analytical expressions for the required individual-wire load impedances. For the capacitive loads suitable for the beam-splitting functionality, we show that this detailed formulation enables analytical determination of the physical dimensions of the required printed-capacitor copper traces, requiring only a single numerical simulation at the frequency of operation. In addition, we use the detailed analytical model to examine the metagrating performance as a function of load impedance and operating frequency; the model can readily accommodate realistic copper traces with finite conductivity, allowing us to shed light on the role of losses. Our analysis reveals that the metagrating features preferable working points, where the sensitivity to load reactance deviations is low, losses are less pronounced, and the bandwidth is relatively large. These operating conditions are directly linked to fundamental interference processes taking place in the device, as pointed out by the analytical formulation. These results yield physical insight as well as efficient and intuitive engineering tools for synthesis and analysis of future metagratings, laying the groundwork for practical realization of these devices, and extension of their range of applications. \section{Theory} \label{sec:theory} \subsection{Formulation} \label{subsec:formulation} We consider a 2D configuration ($\partial/\partial x=0$) excited by TE-polarized fields ($E_z=E_y=H_x=0$), in which a $\Lambda$-periodic array of loaded conducting wires is situated at $z=-h$ below a PEC, occupying the plane $z=0$ [Fig. \ref{fig:physical_configuration}(a)]. 
The half-plane $z<0$ is filled with a (passive lossless) homogeneous medium with permittivity $\epsilon$ and permeability $\mu$, defining the wavenumber $k=\omega\sqrt{\mu\epsilon}$ and the wave impedance $\eta=\sqrt{\mu/\epsilon}$ for time-harmonic fields $e^{j\omega t}$. The wires are of width $w\ll\lambda,\Lambda$ and thickness $t\ll w$, where $\lambda=2\pi/k$ is the wavelength at the operating frequency $f=\omega/\left(2\pi\right)$, and are assumed to be uniformly loaded by a distributed impedance per-unit-length of $\tilde{Z}$ [Fig. \ref{fig:physical_configuration}(b)-(c)]. In practice, this distributed impedance is implemented by lumped loads, repeating in a periodic fashion along the $x$-axis with a deep-subwavelength period $L$. As stated, our goal is to find the array-PEC distance $h$ and the load impedance $\tilde{Z}$ that yield full and equal coupling of a normally-incident plane wave into two plane waves, reflected towards $\pm\theta_\mathrm{out}$. We start by formulating the total fields in the problem, which can be written as a superposition of the fields in the absence of the wire array, and the fields generated by the (yet to be determined) current $I$ induced on the wires by these "external" fields. Each of these sets of fields should comply with the boundary conditions at the PEC, namely, $\left.E_x\left(y,z\right)\right|_{z\rightarrow0^-}=0$. Consequently, the external fields are composed of normally-incident and normally-reflected plane waves \begin{equation} E_x^{\mathrm{ext}}\left(y,z\right) = E_\mathrm{in}\left(e^{-jkz}-e^{jkz}\right), \label{equ:external_fields} \end{equation} where $E_\mathrm{in}$ is the given excitation amplitude.
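As a quick numerical check of Eq. \eqref{equ:external_fields}, the sketch below verifies that the external field reduces to the standing wave $-2jE_\mathrm{in}\sin(kz)$ and hence its tangential component vanishes on the PEC plane $z=0$; the wavelength and probe point are arbitrary illustrative choices:

```python
import cmath
import math

# Wavenumber for a unit wavelength (illustrative units).
k = 2 * math.pi

def E_ext(z, E_in=1.0):
    """Incident plane wave plus its PEC reflection, Eq. (external_fields)."""
    return E_in * (cmath.exp(-1j * k * z) - cmath.exp(1j * k * z))

# Identity: e^{-jkz} - e^{jkz} = -2j sin(kz), so E_ext(0) = 0,
# satisfying the tangential-field boundary condition on the PEC.
z = -0.37
print(abs(E_ext(0)))                            # 0.0
print(abs(E_ext(z) - (-2j * math.sin(k * z))))  # ~0 (floating-point level)
```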
The fields produced by the metagrating are a sum of an infinite array of electric line sources at positions $\left(y,z\right)=\left(n\Lambda,-h\right)$, $n\in\mathbb{Z}$, and their image sources, symmetrically positioned at $\left(y,z\right)=\left(n\Lambda,h\right)$, carrying the same currents with a $\pi$ phase difference. Due to the periodic configuration and the symmetric excitation, the induced currents $I$ are identical for all the wires \cite{Tretyakov2003}, and the corresponding fields are given by \begin{equation} \begin{array}{l} E_x^{\mathrm{wire}}\left(y,z\right) = \\ \,\,\, -\dfrac{k\eta}{4}I\!\!\!\displaystyle\sum\limits_{n=-\infty}^{\infty}\!\!\left\{\!\! \begin{array}{l} H_0^{(2)} \left[k\sqrt{\left(y-n\Lambda\right)^2+\left(z+h\right)^2}\right] \\ -H_0^{(2)} \left[k\sqrt{\left(y-n\Lambda\right)^2+\left(z-h\right)^2}\right] \end{array} \!\!\right\}, \end{array} \label{equ:wire_fields} \end{equation} where $H_0^{(2)}\left(\Omega\right)$ is the zeroth-order Hankel function of the second kind. To evaluate the fields generated by the wires at ${z\neq-h}$, we utilize the Poisson formula \cite{Tretyakov2003}, stating that for a given function $f\left(l\right)$ \begin{equation} \sum\limits_{n=-\infty}^{\infty}f\left(n\Lambda\right) =\sum\limits_{m=-\infty}^{\infty}\int\limits_{-\infty}^{\infty}\frac{dl}{\Lambda}f\left(l\right)e^{-j\frac{2\pi m}{\Lambda}l}. \label{equ:Poisson_formula} \end{equation} Using Eq. \eqref{equ:Poisson_formula} with $f\left(l\right)=H_0^{(2)}\left[k\sqrt{\left(y-l\right)^2+\left(z\pm h\right)^2}\right]$, and considering the Fourier transform of the Hankel function is given by \cite[Eqs. (5.4.33)-(5.4.35)]{FelsenMarcuvitz1973} \begin{equation} \int\limits_{-\infty}^{\infty}\!\!\!dl H_0^{(2)}\!\! 
\left[\!k\sqrt{\left(y-l\right)^2+\left(z\pm h\right)^2}\right]\!\!e^{-jk_tl}\!=\!2\frac{e^{-jk_ty}e^{-j\beta\left|z\pm h\right|}}{\beta}, \label{equ:Fourier_transform_Hankel} \end{equation} where $\beta=\sqrt{k^2-k_t^2}$, $\Im\left\{\beta\right\}\leq0$, Eq. \eqref{equ:wire_fields} can be written as \cite{Tretyakov2003} \begin{equation} \begin{array}{l} \vspace{0.5mm} E_x^{\mathrm{wire}}\left(y,z\right) = \\ \,\,\, -\dfrac{k\eta}{2\Lambda}I\!\!\!\displaystyle\sum\limits_{m=-\infty}^{\infty}\!\!\! e^{-j\frac{2\pi m}{\Lambda}y}\frac{e^{-j\beta_m\left|z+h\right|}-e^{j\beta_m\left(z-h\right)}}{\beta_m}, \end{array} \label{equ:wire_fields_Poisson} \end{equation} where $\beta_m=\sqrt{k^2-\left(2\pi m/\Lambda\right)^2}$, $\Im\left\{\beta_m\right\}\leq0$. We can now observe that the interaction of the external fields with the periodic wire array gives rise to a series of scattered FB harmonics, where the $m$th term of the summation in Eq. \eqref{equ:wire_fields_Poisson} corresponds to the $m$th FB mode. The total electric fields are thus given by $E_x^\mathrm{tot}\left(y,z\right)=E_x^\mathrm{ext}\left(y,z\right)+E_x^\mathrm{wire}\left(y,z\right)$, and the tangential magnetic fields can be readily derived from them via Maxwell's equations for this TE case, reading ${H_y\left(y,z\right)=-\frac{1}{jk\eta}\frac{\partial}{\partial z}E_x\left(y,z\right)}$. In the framework of our detailed analysis, we strive to tie the physical structure of the meta-atom (loaded wire) to the design requirements. To this end, we recall that the relation between the total fields at the wire position and the induced currents is given by the distributed impedance $\tilde{Z}$ via Ohm's law, ${\left.E_x^{\mathrm{tot}}\left(y,z\right)\right|_{\left(y,z\right)\rightarrow\left(0,-h\right)}=\tilde{Z}I}$ \cite{Tretyakov2003}. 
In order to write this expression explicitly, due to the divergence of the Hankel function at $\left(y,z\right)\rightarrow\left(0,-h\right)$, we have to refine our approximation of the current-carrying wire as a line source of infinitesimal radius, and take into account the actual wire dimensions [Fig. \ref{fig:physical_configuration}(c)]. As $t\ll w\ll\lambda$, we can use the flat wire model in \cite{Tretyakov2003}, treating the wire as a conducting cylinder of effective radius $r_\mathrm{eff}=w/4$. Consequently, using Eqs. \eqref{equ:external_fields} and \eqref{equ:wire_fields} we can write Ohm's law as \begin{equation} \begin{array}{l} \vspace{1.5mm} \tilde{Z}I = 2jE_\mathrm{in}\sin\left(kh\right) \\ \,\,\, -\dfrac{k\eta}{4}I H_0^{(2)}\left(kr_\mathrm{eff}\right)-\dfrac{k\eta}{4}I\!\!\!\!\! \displaystyle\sum\limits_{\scriptsize\begin{array}{c} n\!=\!-\infty\\ n\!\neq\! 0 \end{array}}^{\infty} \!\!\!\!\!\!H_0^{(2)}\left(k\left|n\Lambda\right|\right) \\ \,\,\, +\dfrac{k\eta}{4}I\!\!\!\displaystyle\sum\limits_{n=-\infty}^{\infty}\!\!\!\!H_0^{(2)} \left[k\sqrt{\left(n\Lambda\right)^2+\left(2h\right)^2}\right], \end{array} \label{equ:Ohms_law} \end{equation} from which the current induced by the applied fields can be evaluated, for a given $\tilde{Z}$. Alternatively, Eq. \eqref{equ:Ohms_law} can be used to assess the required $\tilde{Z}$ to obtain a certain induced current. Subsequently, we follow \cite{Tretyakov2003} to develop Eq. \eqref{equ:Ohms_law} into a more useful format, expressing the required $\tilde{Z}$ to yield a prescribed $E_\mathrm{in}/I$ ratio (to be derived in Subsections \ref{subsec:specular_reflection} and \ref{subsec:beam_splitting}). In particular, as $w\ll\lambda$, the second term in the right-hand side (RHS) can be approximated by the asymptotic expression of the Hankel function for small arguments \cite[Eq. (9.1.8)]{AbramowitzStegun1970}; the third term can be expanded using \cite[Eq. 
(8.522)]{GradshteinRyzhik2015}; and for the fourth term, we can apply again the Poisson formula [Eqs. \eqref{equ:Poisson_formula} and \eqref{equ:Fourier_transform_Hankel}]. These transformations lead to \begin{equation} \begin{array}{l} \vspace{0.5mm} \tilde{Z} = 2j\dfrac{E_\mathrm{in}}{I}\sin\left(kh\right) \\ \vspace{1mm} \,\,\, -\dfrac{\eta}{2\Lambda}\left(1-e^{-2jkh}\right) +j\dfrac{k\eta}{2\pi}\log\dfrac{2\pi r_\mathrm{eff}}{\Lambda}\\ \,\,\, -k\eta\displaystyle\sum\limits_{m=1}^{\infty}\left(\frac{1-e^{-2j\beta_m h}}{\Lambda\beta_m}-j\frac{1}{2\pi m}\right), \end{array} \label{equ:Ohms_law_Poisson} \end{equation} in which the infinite summation converges rapidly. \subsection{Eliminating specular reflection} \label{subsec:specular_reflection} As shown in \cite{Radi2017}, with the available degrees of freedom, namely, $h$ and $\tilde{Z}$, we can only eliminate a single FB mode. Thus, to successfully couple all the incident power to the FB modes propagating towards $\pm\theta_\mathrm{out}$, these have to be the only FB modes (other than the fundamental specular reflection) that are propagating. This requirement imposes two constraints on our design. First, the angles $\pm\theta_\mathrm{out}$ should correspond to the $\pm1$ propagating FB modes; following Eq. \eqref{equ:wire_fields_Poisson} this implies that \begin{equation} \frac{2\pi}{\Lambda}=k\sin\theta_\mathrm{out}\,\Rightarrow\,\Lambda=\frac{\lambda}{\sin\theta_\mathrm{out}}. \label{equ:Lambda_period} \end{equation} Second, all the other higher-order FB modes ($\left|m\right|\geq2$) should be evanescent, implying, from Eqs. \eqref{equ:wire_fields_Poisson} and \eqref{equ:Lambda_period}, that \begin{equation} 2\frac{2\pi}{\Lambda}>k\,\Rightarrow\,\theta_\mathrm{out}>30^\circ. \label{equ:theta_constraint} \end{equation} Let us apply these constraints to the field expressions, and write the total fields $E_x^{\mathrm{tot},<}$ below the metagrating ($z<-h$) using Eqs.
\eqref{equ:external_fields} and \eqref{equ:wire_fields_Poisson}. These read \begin{equation} \begin{array}{l} \vspace{1mm} E_x^{\mathrm{tot},<}\left(y,z\right) = E_\mathrm{in}e^{-jkz}-E_\mathrm{in}e^{jkz} \\ \vspace{1mm} \,\,\, -j\dfrac{\eta I}{\Lambda}\sin\left(kh\right)e^{jkz} \\ \vspace{1mm} \,\,\, -j\dfrac{\eta I}{\Lambda}\frac{\sin\left(kh\cos\theta_\mathrm{out}\right)}{\cos\theta_\mathrm{out}} e^{jkz\cos\theta_\mathrm{out}}e^{-jky\sin\theta_\mathrm{out}} \\ \vspace{1mm} \,\,\, -j\dfrac{\eta I}{\Lambda}\frac{\sin\left(kh\cos\theta_\mathrm{out}\right)}{\cos\theta_\mathrm{out}} e^{jkz\cos\theta_\mathrm{out}}e^{jky\sin\theta_\mathrm{out}} \\ \,\,\, -j\dfrac{\eta I}{\Lambda}\!\!\!\displaystyle\sum\limits_{\scriptsize\begin{array}{c} m\!=\!-\infty\\ |m|\!\geq\! 2 \end{array}}^{\infty}\!\!\! \frac{k\sinh\left(\alpha_m h\right)}{\alpha_m}e^{\alpha_m z}e^{-j\frac{2\pi m}{\Lambda}y}, \end{array} \label{equ:total_fields_below} \end{equation} where we used $\beta_m\triangleq-j\alpha_m$ ($\alpha_m\geq 0$, $\forall \left|m\right|\geq2$) in the terms corresponding to the evanescent modes according to Eq. \eqref{equ:theta_constraint}. From Eq. \eqref{equ:total_fields_below} it is quite clear that our only means to eliminate the specular reflection (second term in RHS) is to form destructive interference with the fundamental FB mode of the wire-generated fields (third term in RHS) \cite{Radi2017}. Consequently, we are required to tune the physical configuration of Fig. \ref{fig:physical_configuration}(c) such that \begin{equation} \dfrac{E_\mathrm{in}}{I}=-j\dfrac{\eta}{\Lambda}\sin\left(kh\right). \label{equ:specular_elimination_condition} \end{equation} \subsection{Perfect beam splitting} \label{subsec:beam_splitting} Once we have eliminated specular reflections via Eq. \eqref{equ:specular_elimination_condition}, we should guarantee that all of the incident power indeed couples to the two plane waves propagating towards $\pm\theta_\mathrm{out}$ (i.e., the $\pm1$ FB modes). 
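Before proceeding, the destructive-interference mechanism of Eq. \eqref{equ:specular_elimination_condition} can be checked numerically: the condition indeed nulls the sum of the specular term and the fundamental FB harmonic of the wire fields in Eq. \eqref{equ:total_fields_below}. A minimal Python sketch (all parameter values below are arbitrary illustrative choices, not design values):

```python
import numpy as np

# Arbitrary illustrative parameters; the cancellation holds for any h
eta = 376.73                                  # wave impedance [ohm]
lam = 1.0                                     # wavelength (normalized)
k = 2 * np.pi / lam
Lam = lam / np.sin(np.deg2rad(50.0))          # period, Eq. (Lambda_period)
h = 0.3 * lam                                 # arbitrary wire-PEC separation

I = 1.0 + 0.0j                                # reference wire current
E_in = -1j * (eta / Lam) * np.sin(k * h) * I  # Eq. (specular_elimination_condition)

# m = 0 reflected amplitude of Eq. (total_fields_below): specular term (-E_in)
# plus the fundamental FB harmonic radiated by the loaded wires
r0 = -E_in - 1j * (eta / Lam) * I * np.sin(k * h)
# |r0| vanishes: the specular reflection is eliminated
```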
Although the $\pm1$ FB modes are the only propagating modes that are left [Eq. \eqref{equ:total_fields_below}], the incident power could be partially absorbed by the metagrating, reducing the device performance; in this subsection, we derive the condition to avoid this undesirable absorption. In order to ensure that all the incident power is coupled to the two reflected beams, we merely need to require that the net real power crossing a certain plane $z=z_p<-h$ vanishes; this means that the real power incident upon the metagrating is reflected in its entirety. As the $\pm1$ FB modes are the only propagating modes that remain after the elimination of specular reflection, this implies that all the incident power is coupled to these modes; due to the problem symmetry, the same amount of power is coupled to each of these plane waves. The overall real power crossing the plane $z=z_p<-h$ in one period is defined as \begin{equation} P_z^{\mathrm{tot}}\left(z\right)=\frac{1}{2}\!\!\!\int\limits_{-\Lambda/2}^{\Lambda/2} \!\!\! dy \, \Re\left\{E_x\left(y,z\right)H_y^*\left(y,z\right)\right\}. \label{equ:real_power_definition} \end{equation} Due to the problem periodicity, it is sufficient to show that the real power integrated over a single period indeed vanishes to guarantee full coupling as discussed above. Subsequently, the perfect beam-splitting condition $P_z^{\mathrm{tot}}\left(z_p\right)=0$ can be written explicitly by substituting Eq. \eqref{equ:total_fields_below} (and its $z$-derivative, corresponding to the tangential magnetic fields) into Eq. \eqref{equ:real_power_definition}, integrating, and equating to zero.
This yields a second condition on the metagrating parameters, namely, \begin{equation} \begin{array}{l} \vspace{1mm} \Im\left\{\dfrac{E_{\mathrm{in}}}{I}\right\}\sin\left(kh\right) + \dfrac{\eta}{2\Lambda}\sin^2\left(kh\right) = \\ \,\,\,\,\,\,\,\,\,\, -\dfrac{\eta}{\Lambda\cos\theta_\mathrm{out}}\sin^2\left(kh\cos\theta_\mathrm{out}\right). \end{array} \label{equ:full_coupling_condition} \end{equation} Note that as we consider a passive lossless medium $\{\epsilon,\mu\}\in\mathbb{R}$, the perfect beam-splitting condition is independent of the choice of $z_p$. Substituting the specular reflection elimination condition Eq. \eqref{equ:specular_elimination_condition} into Eq. \eqref{equ:full_coupling_condition}, still considering a passive lossless medium $\{k,\eta\}\in\mathbb{R}$, yields \begin{equation} \mathcal{E}=\cos\theta_\mathrm{out}\sin^2\left(kh\right)-2\sin^2\left(kh\cos\theta_\mathrm{out}\right)=0, \label{equ:h_condition} \end{equation} which is a nonlinear equation from which the required wire-PEC separation distance $h$ can be numerically/graphically evaluated, setting our first degree of freedom. Compared with the analogous Eq. (4) of \cite{Radi2017}, we can observe that the interference terms (trigonometric functions with arguments $kh$ and $kh\cos\theta_\mathrm{out}$) now feature sines instead of cosines (due to the difference between image theory for TE- and TM-polarized sources), and the prefactors correspond to the wave impedances of the various propagating modes (note that herein we have three distinct propagating FB modes). After fixing $h$ following Eq. \eqref{equ:h_condition}, Eqs. \eqref{equ:specular_elimination_condition} and \eqref{equ:full_coupling_condition} can be substituted into Eq.
\eqref{equ:Ohms_law_Poisson} to obtain an explicit expression for the distributed impedance $\tilde{Z}$, reading \begin{equation} \begin{array}{l} \vspace{0.5mm} \tilde{Z} = -j\dfrac{\eta}{\Lambda}\left[\dfrac{\sin\left(2kh\right)}{2}+\dfrac{\sin\left(2kh\cos\theta_\mathrm{out}\right)}{\cos\theta_\mathrm{out}}\right] \\ \vspace{1mm} \,\,\, +j\dfrac{k\eta}{2\pi}\left(1+\log\dfrac{2\pi r_\mathrm{eff}}{\Lambda}\right)\\ \,\,\, -j\dfrac{\eta}{\Lambda}\displaystyle\sum\limits_{m=2}^{\infty}\left[\frac{k\left(1-e^{-2\alpha_m h}\right)}{\alpha_m}-\frac{k\Lambda}{2\pi m}\right], \end{array} \label{equ:Z_tilde_condition} \end{equation} setting our second degree of freedom. The benefits of providing direct access to the individual wire load in our synthesis scheme are apparent already from a brief look at Eq. \eqref{equ:Z_tilde_condition}. It can be readily verified that the RHS is purely imaginary; this indicates that in order to have full coupling of the incident plane wave into the two symmetrical diffraction modes, the wire should be loaded by a purely reactive impedance. This is consistent with our previous observation that only losses could prevent perfect beam-splitting once the specular reflection elimination condition of Eq. \eqref{equ:specular_elimination_condition} is satisfied, and thus should ideally be avoided. \section{Results and Discussion} \label{sec:results} \subsection{Synthesis} \label{subsec:synthesis} We first use the developed formalism to demonstrate an efficient way for synthesizing perfect metagrating beam splitters. To this end, for a given desirable $\theta_\mathrm{out}$, we find (via a simple numerical MATLAB code) the separation distance $h$ that minimizes $\mathcal{E}$ of Eq. \eqref{equ:h_condition}. The optimal wire-PEC distance is presented in Fig. \ref{fig:hCondition} as a function of the splitting angle, where we have chosen the smallest $h$ satisfying Eq. \eqref{equ:h_condition} for each $\theta_\mathrm{out}$. 
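The numerical search described above is straightforward to reproduce; the following Python sketch (an illustrative alternative to the simple MATLAB code mentioned in the text) locates the smallest positive root of Eq. \eqref{equ:h_condition} by scanning for the first sign change of $\mathcal{E}$ and refining it with bisection:

```python
import numpy as np

def h_residual(h_over_lam, theta_out_deg):
    """Residual E of Eq. (h_condition) as a function of h/lambda."""
    kh = 2 * np.pi * h_over_lam
    ct = np.cos(np.deg2rad(theta_out_deg))
    return ct * np.sin(kh)**2 - 2.0 * np.sin(kh * ct)**2

def smallest_h(theta_out_deg, h_max=1.0, n=20001):
    """Smallest positive wire-PEC separation (in wavelengths) solving Eq. (h_condition)."""
    hs = np.linspace(1e-6, h_max, n)
    res = h_residual(hs, theta_out_deg)
    i = np.nonzero(np.sign(res[:-1]) != np.sign(res[1:]))[0][0]  # first sign change
    a, b = hs[i], hs[i + 1]
    for _ in range(60):  # bisection refinement of the bracketed root
        m = 0.5 * (a + b)
        if h_residual(m, theta_out_deg) * h_residual(a, theta_out_deg) > 0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)
```

For instance, `smallest_h(50.0)` returns approximately $0.656$ and `smallest_h(70.0)` approximately $0.176$ (in wavelengths), in agreement with the separations used for the designs reported below.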
The curve in Fig. \ref{fig:hCondition} is universal, valid for all operating frequencies (note that $h$ is expressed in wavelength units). Therefore, we may conclude that it is feasible to implement all the possible beam splitters with metagrating devices whose thickness is less than the operating wavelength. \begin{figure}[tbh] \includegraphics[width=7cm]{hCondition.pdf}% \caption{Required wire-PEC separation as a function of the splitting angle, obtained from Eq. \eqref{equ:h_condition}.} \label{fig:hCondition} \end{figure} Subsequently, to evaluate the required distributed impedance (the other degree of freedom we need to set), we substitute these optimal $h$ values (Fig. \ref{fig:hCondition}) into Eq. \eqref{equ:Z_tilde_condition}, considering the suitable metagrating period $\Lambda$ for each splitting angle [Eq. \eqref{equ:Lambda_period}]. For a fixed conductor width $w$ [Fig. \ref{fig:physical_configuration}(c)], typically limited by manufacturing constraints, this design curve \emph{does} depend on the operation frequency, due to the expression in the second row of Eq. \eqref{equ:Z_tilde_condition} [recall that $r_\mathrm{eff}=w/4$]. Thus, to proceed with our device synthesis, we need to fix $w$, and consider specific operating frequencies. Throughout this paper, we will consider the printed capacitor geometry presented in Fig. \ref{fig:physical_configuration}(c) for implementing the distributed load (the reasons for choosing a distributed capacitance will become apparent shortly). The trace width and trace separation are fixed to ${w=s=3\mathrm{mil}=76.2\mathrm{\mu m}}$ [Fig. \ref{fig:physical_configuration}(c)], following typical fabrication tolerances \cite{Epstein2016,Chen2017}. This structure repeats itself periodically every $L=\lambda/10$ along the $x$-axis, forming an approximately-homogeneous distributed capacitance.
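Evaluating Eq. \eqref{equ:Z_tilde_condition} numerically is equally compact. The Python sketch below uses an illustrative design point ($\theta_\mathrm{out}=50^\circ$ at $f=10\mathrm{GHz}$, with $h=0.656\lambda$ solving Eq. \eqref{equ:h_condition} and $r_\mathrm{eff}=w/4$ for $w=3\,\mathrm{mil}$), truncating the rapidly-converging series after a few hundred terms:

```python
import numpy as np

# Illustrative design point at f = 10 GHz: theta_out = 50 deg, with the
# wire-PEC separation h = 0.656*lambda solving Eq. (h_condition), and
# r_eff = w/4 for a w = 3 mil (76.2 um) flat conductor.
eta = 376.73                            # free-space wave impedance [ohm]
f = 10e9
lam = 299792458.0 / f
k = 2 * np.pi / lam
theta = np.deg2rad(50.0)
Lam = lam / np.sin(theta)               # metagrating period, Eq. (Lambda_period)
h = 0.656 * lam
r_eff = 76.2e-6 / 4

# Imaginary part of Eq. (Z_tilde_condition); the RHS is purely imaginary
X_tilde = -(eta / Lam) * (np.sin(2*k*h) / 2
                          + np.sin(2*k*h*np.cos(theta)) / np.cos(theta))
X_tilde += (k * eta / (2*np.pi)) * (1 + np.log(2*np.pi*r_eff/Lam))
for m in range(2, 500):                 # series terms decay like ~1/m^3
    alpha_m = np.sqrt((2*np.pi*m/Lam)**2 - k**2)
    X_tilde -= (eta / Lam) * (k*(1 - np.exp(-2*alpha_m*h))/alpha_m
                              - k*Lam/(2*np.pi*m))

# X_tilde < 0 [ohm/m]: a capacitive load is required at this working point
```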
The equivalent impedance per unit length $\tilde{Z}$ of this printed-capacitor formation can thus be tuned by modifying the capacitor width $W$, to which the capacitance is approximately linearly proportional \cite{Lee2003}. \begin{figure}[tbh] \includegraphics[width=8cm]{Z_W_10GHz.pdf}% \caption{Load design specifications as a function of the splitting angle, for metagratings operating at $f=10\mathrm{GHz}$. (a) Required distributed reactance $\tilde{X}=\Im\{\tilde{Z}\}$, evaluated from Eq. \eqref{equ:Z_tilde_condition}. (b) Corresponding capacitor width $W$ [Fig. \ref{fig:physical_configuration}(c)], comparing predictions via Eq. \eqref{equ:capacitor_width} (blue solid line) with actual optimal values obtained from full-wave simulations (red circles).} \label{fig:load_design_10GHz} \end{figure} Using this geometry, we plot in Fig. \ref{fig:load_design_10GHz}(a) the required distributed reactance $\tilde{X}\triangleq\Im\{\tilde{Z}\}$ as a function of the splitting angle for the operating frequency $f=10\mathrm{GHz}$ ($\lambda\approx30\mathrm{mm}$), obtained from Eq. \eqref{equ:Z_tilde_condition} and the results of Fig. \ref{fig:hCondition}. As can be observed, the required reactance is negative for all considered $\theta_\mathrm{out}$; thus, a capacitive loading is required, given by $C=-1/(2\pi f L \tilde{X})$, which explains the chosen meta-atom geometry [Fig. \ref{fig:physical_configuration}(c)]. \begin{figure*}[htb] \includegraphics[width=16cm]{fieldDistributions10GHz.pdf}% \caption{Electric field distributions $\left|\Re\left\{E_x\left(y,z\right)\right\}\right|$ for beam-splitting metagratings operating at $f=10\mathrm{GHz}$, excited from below with a normally-incident plane wave. Analytical predictions following Eqs. \eqref{equ:external_fields} and \eqref{equ:wire_fields_Poisson} [(a),(c),(e),(g),(i)] are compared to results of full-wave simulations of the realistic loaded wires of Fig. \ref{fig:physical_configuration}(c) with the optimal values of Fig.
\ref{fig:load_design_10GHz}(b) [(b),(d),(f),(h),(j)]. A single period $\Lambda=\lambda/\sin\theta_\mathrm{out}$ is shown, for metagratings designed following Eqs. \eqref{equ:h_condition} and \eqref{equ:Z_tilde_condition} for various splitting angles: (a),(b) $\theta_\mathrm{out}=40^\circ$; (c),(d) $\theta_\mathrm{out}=50^\circ$; (e),(f) $\theta_\mathrm{out}=60.5^\circ$; (g),(h) $\theta_\mathrm{out}=70^\circ$; and (i),(j) $\theta_\mathrm{out}=80^\circ$. Dashed horizontal white lines denote the plane $z=-h$ of Eq. \eqref{equ:h_condition}, and a dotted white circle denotes a $0.1\lambda$-diameter region around the metagrating element, within which analytical predictions for uniformly-loaded singular wires are expected to deviate from full-wave simulations of realistic copper traces.} \label{fig:fields_10GHz} \end{figure*} The last step to obtain a detailed physical realization involves assessing the required capacitor width $W$ that implements the prescribed quasi-static capacitance $C$. To this end, we can use certain analytical approximations for the capacitance of coplanar strips; however, as these do not usually consider residual capacitance formed due to the vertical lines connecting the printed capacitors (i.e., the wire itself), a frequency-dependent correction factor $K_\mathrm{corr}$ should be incorporated into these formulas. Fortunately, as the capacitance is predominantly proportional to the capacitor width $W$, once this correction factor is assessed via full-wave simulations for one working point, it can be used to generate other designs, as long as the operation frequency remains the same. Specifically, we follow \cite[Eq. 
(7.64)]{Gupta1996}, which for our case of $w=s$ yields the following approximation for the required capacitor width \begin{equation} W\approx 2.85K_\mathrm{corr}C \,\left[\dfrac{\mathrm{mil}}{\mathrm{fF}}\right]. \label{equ:capacitor_width} \end{equation} We use a commercial finite-elements solver, ANSYS HFSS, to compare the analytical predictions (Section \ref{sec:theory}) with full-wave simulations of the metagrating realization. For a given $\theta_\mathrm{out}$, the simulation domain consists of a PEC at $z=0$ and a loaded-wire meta-atom [Fig. \ref{fig:physical_configuration}(c)] at the corresponding $z=-h$ (Fig. \ref{fig:hCondition}), enclosed by 2D master-slave periodic boundary conditions [$\Lambda$-periodic along the $y$-axis and $L$-periodic along the $x$-axis, \textit{cf.} Fig. \ref{fig:physical_configuration}(a),(b)], excited by a Floquet port at $z=-2\lambda$. The standard value of $\sigma=58\times10^6 \mathrm{S/m}$ was used to simulate realistic copper conductivity, further enhancing the fidelity of the simulation results. First, to evaluate $K_\mathrm{corr}$ at $f=10\mathrm{GHz}$, we consider the configuration corresponding to $\theta_\mathrm{out}=80^\circ$ (chosen arbitrarily), and sweep the capacitor width around the value predicted by Eq. \eqref{equ:capacitor_width} without correction ($K_\mathrm{corr}=1$) to find the actual optimal $W$, which yields the highest power coupling to the $\pm1$ FB modes. The ratio of the optimal to the uncorrected $W$ forms the required correction factor, which is found to be ${K_\mathrm{corr}=0.83}$ at $f=10\mathrm{GHz}$. Next, we use this value with Eq. \eqref{equ:capacitor_width} and the prescribed distributed impedance of Fig. \ref{fig:load_design_10GHz}(a) to predict the required capacitor width for all other $\theta_\mathrm{out}$; Figure \ref{fig:load_design_10GHz}(b) presents the required $W$ values (blue solid line) obtained in this manner.
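The width prediction of Eq. \eqref{equ:capacitor_width} then amounts to a one-line conversion. In the Python sketch below, the distributed reactance is an assumed illustrative input of the order produced by Eq. \eqref{equ:Z_tilde_condition} near $\theta_\mathrm{out}=50^\circ$ at $f=10\mathrm{GHz}$, and $K_\mathrm{corr}=0.83$ is the correction factor quoted above:

```python
import math

f = 10e9                                  # operating frequency [Hz]
lam = 299792458.0 / f                     # free-space wavelength [m]
L = lam / 10                              # load repetition period along x [m]
X_tilde = -5.5e4                          # assumed distributed reactance [ohm/m]
K_corr = 0.83                             # correction factor found at 10 GHz

C = -1.0 / (2 * math.pi * f * L * X_tilde)  # required capacitance per load [F]
W = 2.85 * K_corr * (C / 1e-15)             # Eq. (capacitor_width): W in mil, C in fF
# W comes out around 228 mil, close to the tabulated optimum for theta_out = 50 deg
```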
Subsequently, for representative split angles in the range $\theta_\mathrm{out}=35^\circ$ to $\theta_\mathrm{out}=89^\circ$, we sweep $W$ in full-wave simulations around the predicted value to find the actual optimal capacitor width; these optima are denoted using red circles in Fig. \ref{fig:load_design_10GHz}(b). As can be observed, excellent agreement between the semianalytical predictions [Eq. \eqref{equ:capacitor_width}] and the optimal values is obtained. This points out another advantage of the detailed analytical model used in this paper, namely, its ability to provide a very good prediction of the optimal physical dimensions of the meta-atom geometry. \begin{table*}[htb] \centering \begin{threeparttable}[b] \renewcommand{\arraystretch}{1.3} \caption{Design specifications and simulated performance of beam-splitting metagratings operating at $f=10\mathrm{GHz}$ (corresponding to Figs. \ref{fig:load_design_10GHz} and \ref{fig:fields_10GHz}).} \label{tab:metagrating_performance_10GHz} \centering \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c} \hline \hline $\theta_\mathrm{out}$ & $35^\circ$ & $40^\circ$ & $45^\circ$ & $50^\circ$ & $55^\circ$ & $60.5^\circ$ & $65^\circ$ & $70^\circ$ & $80^\circ$ & $89^\circ$ \\ \hline \hline \\[-1.3em] \begin{tabular}{l} $\Lambda [\lambda]$ \end{tabular} & $1.743$ & $1.556$ & $1.414$ & $1.305$ & $1.221$ & $1.149$ & $1.103$ & $1.064$ & $1.016$ & $1.0002$ \\ \hline \begin{tabular}{l} $h [\lambda]$ \end{tabular} & $0.562$ & $0.586$ & $0.616$ & $0.656$ & $0.718$ & $0.039$ & $0.123$ & $0.176$ & $0.272$ & $0.418$ \\ \hline \begin{tabular}{l} $W [\mathrm{mil}]$ \end{tabular} & $179.6$ & $193.5$ & $207.0$ & $225.3$ & $252.0$ & $201.8$ & $158.1$ & $144.0$ & $129.0$ & $105.0$ \\ \hline \begin{tabular}{l} Splitting efficiency \end{tabular} & $2\times40.5\%$ & $2\times44.9\%$ & $2\times47.0\%$ & $2\times48.1\%$ & $2\times48.6\%$ & $2\times35.5\%$ & $2\times48.0\%$ & $2\times48.9\%$ & $2\times49.1\%$ & $2\times46.7\%$ \\ \hline \begin{tabular}{l} 
Specular reflection \end{tabular} & $1.4\%$ & $0.1\%$ & $0.2\%$ & $0.3\%$ & $0.3\%$ & $2.5\%$ & $0.2\%$ & $0.0\%$ & $0.1\%$ & $0.2\%$ \\ \hline \begin{tabular}{l} Losses \end{tabular} & $17.6\%$ & $10.1\%$ & $5.8\%$ & $3.5\%$ & $2.5\%$ & $26.5\%$ & $3.8\%$ & $2.2\%$ & $1.7\%$ & $6.4\%$ \\ \hline \hline \end{tabular} \end{threeparttable} \end{table*} Figure \ref{fig:fields_10GHz} presents the field distributions as obtained from the analytical predictions [Eqs. \eqref{equ:external_fields}, \eqref{equ:wire_fields_Poisson}, \eqref{equ:specular_elimination_condition}, and \eqref{equ:h_condition}] and from full-wave simulations with the realistic metagrating elements of Fig. \ref{fig:physical_configuration}(c) and the optimal capacitor widths of Fig. \ref{fig:load_design_10GHz}(b), for several representative split angles. These plots reflect excellent agreement between the analytical theory and the simulated actual devices, except for small regions around the meta-atoms (denoted by dotted white circles of diameter $0.1\lambda$), where the uniformly-loaded singular wire model used in the analytical calculations fails to account for the finite-size copper trace geometry used in simulations. A closer look reveals that although the predicted and simulated field interference patterns match almost perfectly, the absolute field amplitudes in the simulated results are lower than the predicted ones (note that the same colorbar scale is used). While for most considered designs these deviations are rather minor, for certain split angles, e.g., for $\theta_\mathrm{out}=60.5^\circ$ [Fig. \ref{fig:fields_10GHz}(e),(f)], the differences are quite significant. This reduction in field amplitude is related to conductor losses, which are taken into account in the simulated realistic design, but have so far been ignored in the analytical model.
Indeed, as can be observed in Table \ref{tab:metagrating_performance_10GHz}, summarizing the design specifications and simulated performance parameters for metagrating beam-splitters with various split angles (including those presented in Fig. \ref{fig:fields_10GHz}), certain values of $\theta_\mathrm{out}$ are more prone to losses than others. While for most working points a high splitting efficiency is obtained, with more than $2\times45\%$ of the incident power coupled symmetrically to the $\pm1$ FB modes, losses increase when $\theta_\mathrm{out}\rightarrow30^{\circ}$, $\theta_\mathrm{out}\rightarrow60^{\circ}$, and $\theta_\mathrm{out}\rightarrow90^{\circ}$. Interestingly, the losses do not increase monotonically with increasing split angle, which implies that the performance reduction in metagratings is not related to impedance mismatch as is the case of Huygens' metasurfaces \cite{Epstein2014, Epstein2014_2, Wong2016, Epstein2016_3, Epstein2016_4, Asadchy2016}, but is rather driven by a different mechanism, yet to be investigated. Overall, Table \ref{tab:metagrating_performance_10GHz} verifies that the simple single-element periodic metagratings can indeed reach very high splitting efficiencies even for extreme split angles, limited only by losses (note that specular reflection is practically negligible for all cases). \begin{figure}[tbh] \includegraphics[width=8cm]{Z_W_20GHz.pdf}% \caption{Load design specifications as a function of the splitting angle, for metagratings operating at $f=20\mathrm{GHz}$. (a) Required distributed reactance $\tilde{X}=\Im\{\tilde{Z}\}$, evaluated from Eq. \eqref{equ:Z_tilde_condition}. (b) Corresponding capacitor width $W$ [Fig. \ref{fig:physical_configuration}(c)], comparing predictions via Eq. 
\eqref{equ:capacitor_width} (blue solid line) with actual optimal values obtained from full-wave simulations (red circles).} \label{fig:load_design_20GHz} \end{figure} \begin{table*}[htb] \centering \begin{threeparttable}[b] \renewcommand{\arraystretch}{1.3} \caption{Design specifications and simulated performance of beam-splitting metagratings operating at $f=20\mathrm{GHz}$ (corresponding to Fig. \ref{fig:load_design_20GHz}).} \label{tab:metagrating_performance_20GHz} \centering \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c} \hline \hline $\theta_\mathrm{out}$ & $35^\circ$ & $40^\circ$ & $45^\circ$ & $50^\circ$ & $55^\circ$ & $60.5^\circ$ & $65^\circ$ & $70^\circ$ & $80^\circ$ & $89^\circ$ \\ \hline \hline \\[-1.3em] \begin{tabular}{l} $\Lambda [\lambda]$ \end{tabular} & $1.743$ & $1.556$ & $1.414$ & $1.305$ & $1.221$ & $1.149$ & $1.103$ & $1.064$ & $1.016$ & $1.0002$ \\ \hline \begin{tabular}{l} $h [\lambda]$ \end{tabular} & $0.562$ & $0.586$ & $0.616$ & $0.656$ & $0.718$ & $0.039$ & $0.123$ & $0.176$ & $0.272$ & $0.418$ \\ \hline \begin{tabular}{l} $W [\mathrm{mil}]$ \end{tabular} & $109.1$ & $117.8$ & $126.0$ & $136.5$ & $153.0$ & $123.0$ & $93.8$ & $85.5$ & $76.5$ & $61.5$ \\ \hline \begin{tabular}{l} Splitting efficiency \end{tabular} & $2\times42.3\%$ & $2\times45.8\%$ & $2\times47.6\%$ & $2\times48.6\%$ & $2\times49.0\%$ & $2\times37.8\%$ & $2\times48.5\%$ & $2\times49.1\%$ & $2\times49.3\%$ & $2\times47.4\%$ \\ \hline \begin{tabular}{l} Specular reflection \end{tabular} & $0.8\%$ & $0.2\%$ & $0.1\%$ & $0.0\%$ & $0.0\%$ & $1.7\%$ & $0.0\%$ & $0.0\%$ & $0.1\%$ & $0.2\%$ \\ \hline \begin{tabular}{l} Losses \end{tabular} & $14.6\%$ & $8.2\%$ & $4.7\%$ & $2.8\%$ & $2.0\%$ & $22.7\%$ & $3.0\%$ & $1.8\%$ & $1.3\%$ & $5.0\%$ \\ \hline \hline \end{tabular} \end{threeparttable} \end{table*} To further demonstrate the versatility of our synthesis scheme and analytical model, as well as to verify the observations made after Table \ref{tab:metagrating_performance_10GHz}, we 
apply the prescribed methodology to design beam splitters at another frequency, $f=20\mathrm{GHz}$. Based on the required wire-PEC separation distances of Fig. \ref{fig:hCondition}, which, as denoted, are frequency-invariant, we invoke Eqs. \eqref{equ:Z_tilde_condition} and \eqref{equ:capacitor_width} once more to obtain the physical dimensions of the required meta-atoms [Fig. \ref{fig:physical_configuration}(c)]. The results are given in Fig. \ref{fig:load_design_20GHz}, where we used the same procedure as before to evaluate the correction factor to be used in Eq. \eqref{equ:capacitor_width}. It was found that for $f=20\mathrm{GHz}$, this value is $K_\mathrm{corr}=0.89$, using which the predictions for the optimal $W$ [blue solid line in Fig. \ref{fig:load_design_20GHz}(b)] were obtained. As can be seen in Fig. \ref{fig:load_design_20GHz}(b), the simple relation of Eq. \eqref{equ:capacitor_width} can be still used to get good predictions for the required capacitor width at $f=20\mathrm{GHz}$. Although some of the actual optimal dimensions (red circles) deviate slightly more from the prediction, compared to the designs operating at $f=10\mathrm{GHz}$ [Fig. \ref{fig:load_design_10GHz}(b)], the deviation at these points is not very large ($\sim10\%$). Thus, the analytical relations yield a very good starting-point value, which can be readily tuned to the optimum via a short parameter sweep. Table \ref{tab:metagrating_performance_20GHz} summarizes the design specifications and simulated scattering performance of metagrating beam-splitters operating at $f=20\mathrm{GHz}$, corresponding to the optimal actual design points presented in Fig. \ref{fig:load_design_20GHz}(b). The field distributions are practically identical to the ones presented in Fig. \ref{fig:fields_10GHz} (not shown), with some minor differences in simulated results, stemming from different effective losses at the two frequencies. 
Indeed, Table \ref{tab:metagrating_performance_20GHz} verifies once more that highly effective suppression of specular reflection can be obtained via the proposed structure, corresponding to a near-unity splitting efficiency, limited only by conductor losses. Two interesting observations can be made upon comparison with the analogous designs at $f=10\mathrm{GHz}$, characterized in Fig. \ref{fig:fields_10GHz} and Table \ref{tab:metagrating_performance_10GHz}. First, losses at $f=20\mathrm{GHz}$ are smaller by $\sim20\%$ (in relative terms) than the ones recorded for metagratings operating at $f=10\mathrm{GHz}$, for each of the considered split angles. Second, similar to Table \ref{tab:metagrating_performance_10GHz}, the losses are more pronounced when the split angle approaches certain working points, namely, when $\theta_\mathrm{out}\rightarrow30^{\circ}$, $\theta_\mathrm{out}\rightarrow60^{\circ}$, and $\theta_\mathrm{out}\rightarrow90^{\circ}$. Fortunately, the detailed analytical model presented in Section \ref{sec:theory} is highly suitable for an in-depth analysis of this intriguing loss dependency, as shall be discussed in the following subsection. \subsection{Analysis} \label{subsec:analysis} Our aim in this section is to analyze the performance of the beam-splitting metagratings synthesized in Section \ref{subsec:synthesis}, when possible realistic deviations from the ideal design occur. More specifically, we would like to examine the dependence of the coupling efficiencies and Ohmic absorption on potential losses and load-reactance inaccuracies, and to probe the frequency response of these devices. As our detailed analytical model (Section \ref{sec:theory}) directly links the design parameters to the device performance, we utilize it to explore these relations.
We begin by formally defining the various performance parameters to be investigated: the splitting efficiency $\eta_\mathrm{split}$ is the fraction of incident power coupled to the $\pm 1$ modes (combined); the specular reflection efficiency $\eta_\mathrm{spec}$ is the fraction coupled to specular reflection; and the losses $\eta_\mathrm{loss}$ are the fraction absorbed in the conducting wires. Decomposing the real power crossing a certain plane $z<-h$ [Eq. \eqref{equ:real_power_definition}] into the corresponding modes, identified via their spatial dependency [Eq. \eqref{equ:total_fields_below}], we can write \begin{equation} \begin{array}{l} \vspace{1mm} \eta_\mathrm{split}=2\times\dfrac{1}{\cos\theta_\mathrm{out}}\left[\dfrac{\eta\sin\left(kh\cos\theta_\mathrm{out}\right)}{\Lambda}\right]^2 \left|\dfrac{I}{E_\mathrm{in}}\right|^2 \\ \vspace{1mm} \eta_\mathrm{spec}=\left|1+j\dfrac{\eta\sin\left(kh\right)}{\Lambda}\dfrac{I}{E_\mathrm{in}}\right|^2 \\ \eta_\mathrm{loss}=1-\eta_\mathrm{split}-\eta_\mathrm{spec}, \end{array} \label{equ:efficiency_definitions} \end{equation} where the dependence on the load impedance, which does not necessarily coincide with the ideal value, enters via the fraction $I/E_\mathrm{in}$ and Ohm's law [Eq. \eqref{equ:Ohms_law_Poisson}]. Let us thus consider a general distributed load impedance $\tilde{Z}'$, not necessarily the purely-reactive optimal one $\tilde{Z}$, derived in Eq. \eqref{equ:Z_tilde_condition}. Accordingly, we can write any given load impedance as $\tilde{Z}'=\tilde{Z}+\delta\tilde{R}+j\delta\tilde{X}$, where $\delta\tilde{R}\in\mathbb{R}$ corresponds to the distributed load (conductor) resistance, responsible for losses in the system, and $\delta\tilde{X}\in\mathbb{R}$ is the deviation from the optimal distributed reactance defined by Eq. \eqref{equ:Z_tilde_condition} (e.g., due to manufacturing inaccuracies or a polychromatic excitation). Recalling that for the devices under consideration, the wire-PEC separation $h$ satisfies Eq.
\eqref{equ:h_condition}, Eq. \eqref{equ:Ohms_law_Poisson} can be inverted to yield $I/E_\mathrm{in}$ for a given (arbitrary) distributed load impedance $\tilde{Z}'$, reading \begin{equation} \dfrac{I}{E_\mathrm{in}}=\dfrac{2j\sin\left(kh\right)}{\tilde{R}_g+\delta\tilde{R}+j\delta\tilde{X}}, \label{equ:I_E_fraction} \end{equation} where the effective grid resistance $\tilde{R}_g$ is defined as \begin{equation} \tilde{R}_g=\dfrac{2\eta\sin^2\left(kh\right)}{\Lambda}=\dfrac{2\eta}{\lambda}\sin\theta_\mathrm{out}\sin^2\left(kh\right), \label{equ:effective_grid_resistance} \end{equation} corresponding to the ratio between the external fields at the wire position in the absence of the wire array [Eq. \eqref{equ:external_fields}] and the current induced on the wires [Eq. \eqref{equ:specular_elimination_condition}]. Using Eq. \eqref{equ:I_E_fraction}, the coupling efficiencies of Eq. \eqref{equ:efficiency_definitions} can be explicitly written as a function of the given distributed load impedance $\tilde{Z}'$, namely, \begin{equation} \begin{array}{l} \vspace{1mm} \eta_\mathrm{split}=\dfrac{1}{\left(1+\frac{\delta\tilde{R}}{\tilde{R}_g}\right)^2+\left(\frac{\delta\tilde{X}}{\tilde{R}_g}\right)^2} \\ \vspace{1mm} \eta_\mathrm{spec}=\dfrac{\left(\frac{\delta\tilde{R}}{\tilde{R}_g}\right)^2+\left(\frac{\delta\tilde{X}}{\tilde{R}_g}\right)^2}{\left(1+\frac{\delta\tilde{R}}{\tilde{R}_g}\right)^2+\left(\frac{\delta\tilde{X}}{\tilde{R}_g}\right)^2} \\ \eta_\mathrm{loss}=2\dfrac{\frac{\delta\tilde{R}}{\tilde{R}_g}}{\left(1+\frac{\delta\tilde{R}}{\tilde{R}_g}\right)^2+\left(\frac{\delta\tilde{X}}{\tilde{R}_g}\right)^2}. \end{array} \label{equ:efficiency_explicit} \end{equation} It can be easily verified that at the ideal optimal design point, i.e. $\delta\tilde{R}=\delta\tilde{X}=0$, the coupling efficiencies are $\eta_\mathrm{split}=1$ and $\eta_\mathrm{spec}=\eta_\mathrm{loss}=0$, consistent with the derivation in Section \ref{sec:theory}.
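These closed-form expressions are straightforward to probe numerically; the following minimal Python sketch (illustrative, not part of the design code, with both deviations normalized to $\tilde{R}_g$) confirms power conservation and the ideal working point:

```python
def efficiencies(dR_over_Rg, dX_over_Rg):
    """Coupling efficiencies of Eq. (efficiency_explicit).

    dR_over_Rg, dX_over_Rg: resistance / reactance deviations normalized
    to the effective grid resistance R_g.
    """
    denom = (1.0 + dR_over_Rg) ** 2 + dX_over_Rg ** 2
    eta_split = 1.0 / denom
    eta_spec = (dR_over_Rg ** 2 + dX_over_Rg ** 2) / denom
    eta_loss = 2.0 * dR_over_Rg / denom
    return eta_split, eta_spec, eta_loss

# At the ideal design point, the incident power is fully split:
assert efficiencies(0.0, 0.0) == (1.0, 0.0, 0.0)

# Power conservation holds identically for arbitrary deviations:
s, r, l = efficiencies(0.05, -0.2)
assert abs(s + r + l - 1.0) < 1e-12
```

Note that $\eta_\mathrm{split}+\eta_\mathrm{spec}+\eta_\mathrm{loss}=1$ holds identically, as required by conservation of real power.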
\subsubsection{Conductor loss} \label{subsubsec:losses} To examine the effect of conductor losses on the metagrating performance, we assume that the load reactance is tuned to the optimal value ($\delta\tilde{X}=0$), and investigate the coupling efficiencies of Eq. \eqref{equ:efficiency_explicit} as a function of the load distributed resistance $\delta\tilde{R}$. It can be easily observed that the splitting efficiency attains its maximum for the lossless case $\delta\tilde{R}/\tilde{R}_g=0$, and monotonically decreases with increasing losses $\delta\tilde{R}/\tilde{R}_g>0$. For small losses, $\delta\tilde{R}/\tilde{R}_g\ll2$, this decrease is mainly due to the increase in absorption. Thus, the device performance deteriorates to $90\%$ of its maximal splitting efficiency approximately when $10\%$ of the incident power is lost to absorption; quantitatively, this happens when \begin{equation} \eta_\mathrm{loss}=10\%\Rightarrow\delta\tilde{R}_{90\%}=0.056\tilde{R}_g. \label{equ:10_percent_loss_point} \end{equation} This is a very important result: it indicates that for small values of the effective grid resistance $\tilde{R}_g$, even a very small distributed wire resistance $\delta\tilde{R}$ can result in a significant amount of losses. From another perspective, for given (constant) conductor losses, the overall absorption increases in \emph{inverse} proportion to $\tilde{R}_g$. In fact, for such small wire resistance $\delta\tilde{R}/\tilde{R}_g\ll1$, $\eta_\mathrm{loss}$ of Eq. \eqref{equ:efficiency_explicit} can be approximated by \begin{equation} \eta_\mathrm{loss}\approx2\frac{\delta\tilde{R}}{\tilde{R}_g}. \label{equ:absorption_approximated} \end{equation} Thus, as revealed by Eq. \eqref{equ:effective_grid_resistance}, the working points in which the losses would be most pronounced are the ones where the product $\sin\theta_\mathrm{out}\sin^2\left(kh\right)$ is minimal, i.e. when ${h\rightarrow\nu\lambda/2},\,\nu\in\mathbb{Z}$.
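The threshold of Eq. \eqref{equ:10_percent_loss_point} can be reproduced by solving $\eta_\mathrm{loss}=10\%$ numerically; a minimal sketch (plain Python, resistances normalized to $\tilde{R}_g$, with $\delta\tilde{X}=0$):

```python
def eta_loss(r):            # Eq. (efficiency_explicit) with dX = 0
    return 2.0 * r / (1.0 + r) ** 2

def eta_split(r):
    return 1.0 / (1.0 + r) ** 2

# eta_loss is monotonically increasing on 0 <= r <= 1,
# so a simple bisection locates the 10%-absorption point
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if eta_loss(mid) < 0.10 else (lo, mid)
r90 = 0.5 * (lo + hi)

print(round(r90, 3))             # 0.056 -> delta R_90% = 0.056 R_g
print(round(eta_split(r90), 2))  # 0.9   -> splitting efficiency drops to ~90%
```

The same root also follows in closed form from $0.1\,(1+r)^2=2r$, i.e. $r=9-\sqrt{80}\approx0.0557$.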
Therefore, considering the wire-PEC separation dictated by Fig. \ref{fig:hCondition}, we should expect increased losses at $\theta_\mathrm{out}\rightarrow60^\circ$, where $\sin\left(kh\right)$ exactly vanishes, and around $\theta_\mathrm{out}\rightarrow30^\circ$ and $\theta_\mathrm{out}\rightarrow90^\circ$, where $\sin\left(kh\right)$ approaches zero. Indeed, this is consistent with our former observations, \textit{cf.} Tables \ref{tab:metagrating_performance_10GHz} and \ref{tab:metagrating_performance_20GHz}. \begin{figure}[tbh] \includegraphics[width=8cm]{grid_resistance.pdf}% \caption{Effective grid resistance as a function of the metagrating configuration corresponding to various output angles $\theta_\mathrm{out}$, following Eq. \eqref{equ:effective_grid_resistance} with $h$ of Eq. \eqref{equ:h_condition} and Fig. \ref{fig:hCondition}.} \label{fig:grid_resistance} \end{figure} The extent of losses, however, is not identical for all of these design points; this is due to the fact that the exact value of $\tilde{R}_g$ around its minima also depends on $\sin\theta_\mathrm{out}$, and not only on the roots of $\sin\left(kh\right)$ [Eq. \eqref{equ:effective_grid_resistance}]. This dependency is not negligible, as can be seen from Fig. \ref{fig:grid_resistance}, presenting $\tilde{R}_g$ as a function of the design parameters corresponding to $\theta_\mathrm{out}$. For a given value of $\delta\tilde{R}$, this plot predicts, for instance, that the losses approaching $\theta_\mathrm{out}=30^\circ$ will be comparable with the ones when approaching $\theta_\mathrm{out}=60^\circ$, but significantly larger than the losses very close to $\theta_\mathrm{out}=90^\circ$. On the other hand, Fig. \ref{fig:grid_resistance} also points out \emph{the best} working points, where the devices are the least sensitive to parasitic losses; these are indicated by the maxima of $\tilde{R}_g$, occurring around $\theta_\mathrm{out}\approx57^\circ$ and $\theta_\mathrm{out}\approx78^\circ$. 
These observations, which are frequency invariant, are consistent with the simulated results presented in Tables \ref{tab:metagrating_performance_10GHz} and \ref{tab:metagrating_performance_20GHz}. It is not a mere coincidence that losses in these structures are inversely proportional to $\sin\theta_\mathrm{out}\sin^2\left(kh\right)$, for a given $\delta\tilde{R}$ [Eq. \eqref{equ:absorption_approximated}]; in fact, this trend stems from a fundamental physical process taking place in these metagrating configurations. Due to interference between the current-carrying wires and their images [Eq. \eqref{equ:wire_fields_Poisson}], induced by the PEC at $z=0$, the field amplitude of the fundamental FB mode follows $\left.E_x^{\mathrm{wire}}\right|_{\mathrm{fund}}=-j\left(\eta/\lambda\right)I\sin\theta_\mathrm{out}\sin\left(kh\right)$ [Eq. \eqref{equ:total_fields_below}]. As we recall from Section \ref{subsec:specular_reflection}, this amplitude is required to meet a certain level, $E_\mathrm{in}$, in order to completely eliminate specular reflections [Eq. \eqref{equ:specular_elimination_condition}]. When $\sin\left(kh\right)\rightarrow0$, the phase accumulated along the distance $2kh$ is a multiple of $2\pi$; due to the $\pi$ phase shift introduced by the PEC reflection, the source and image fields tend to cancel each other at $z=-h$ [Eq. \eqref{equ:total_fields_below}]. Thus, in order to compensate for this destructive interference, the design scheme tunes the metagrating configuration so as to induce very large currents on the wires, to still be able to generate the fields required to eliminate specular reflection. Hence, even the slightest amount of conductor losses would result in significant power dissipation at these working points, due to the high currents involved. On the other hand, at operating conditions for which constructive interference takes place at $z=-h$, smaller currents will be required, and the device would be less susceptible to losses.
Formally, we can evaluate the fraction of absorbed power as the ratio between the power dissipated per period due to induced currents flowing through the resistive load and the incident power density, reading \begin{equation} \eta_\mathrm{loss}=\dfrac{\frac{1}{2}\frac{\left|I\right|^2\delta\tilde{R}}{\Lambda}}{\frac{1}{2}\frac{\left|E_\mathrm{in}\right|^2}{\eta}}=\dfrac{\delta\tilde{R}}{\frac{\eta}{\lambda}\sin\theta_\mathrm{out}\sin^2\left(kh\right)}=2\frac{\delta\tilde{R}}{\tilde{R}_g}, \label{equ:absorption_from_currents} \end{equation} exactly as we estimated in Eq. \eqref{equ:absorption_approximated}. Indeed, the high currents developing on the wires at the points of destructive image-source interference, i.e. when the denominator is vanishing, are responsible for the observed prominent losses. Note that we have used the nominal ratio $\left|I/E_\mathrm{in}\right|$ given by Eq. \eqref{equ:specular_elimination_condition} to assess $\eta_\mathrm{loss}$ herein. For this reason, Eqs. \eqref{equ:absorption_approximated} and \eqref{equ:absorption_from_currents} are valid only for small losses $\delta\tilde{R}/\tilde{R}_g\ll1$; for more significant conductor losses, the induced current will deviate from Eq. \eqref{equ:specular_elimination_condition}, and the exact expressions of Eq. \eqref{equ:efficiency_explicit} should be used. Before concluding this subsection, we demonstrate how the analytical relation between $\eta_\mathrm{loss}$ and $\delta\tilde{R}$ can be harnessed to assess the distributed load resistance of the actual design. To this end, we plot in Fig. \ref{fig:loss_evaluation} the predicted absorption as a function of the distributed conductor loss $\delta\tilde{R}$, for the various metagrating beam splitters considered in Section \ref{subsec:synthesis}, calculated via Eq. \eqref{equ:efficiency_explicit}.
For each considered split angle $\theta_\mathrm{out}$, corresponding to a different metagrating configuration (different $\tilde{R}_g$), we have denoted by circles the losses $\eta_\mathrm{loss}$ recorded in full-wave simulations: in Fig. \ref{fig:loss_evaluation}(a) for the $f=10\mathrm{GHz}$ metagratings, with the values documented in Table \ref{tab:metagrating_performance_10GHz}, and in Fig. \ref{fig:loss_evaluation}(b) for the $f=20\mathrm{GHz}$ metagratings, as presented in Table \ref{tab:metagrating_performance_20GHz}. \begin{figure}[tbh] \includegraphics[width=8cm]{lossEvaluation.pdf}% \caption{Absorbed power fraction $\eta_\mathrm{loss}$ as a function of distributed conductor resistance $\delta\tilde{R}$, calculated from Eq. \eqref{equ:efficiency_explicit} for different metagrating designs, corresponding to split angles of $\theta_\mathrm{out}=35^\circ$ (blue solid line), $\theta_\mathrm{out}=40^\circ$ (green solid line), $\theta_\mathrm{out}=45^\circ$ (red solid line), $\theta_\mathrm{out}=50^\circ$ (black solid line), $\theta_\mathrm{out}=55^\circ$ (magenta solid line), $\theta_\mathrm{out}=60.5^\circ$ (blue dashed line), $\theta_\mathrm{out}=65^\circ$ (green dashed line), $\theta_\mathrm{out}=70^\circ$ (red dashed line), $\theta_\mathrm{out}=80^\circ$ (black dashed line), $\theta_\mathrm{out}=89^\circ$ (magenta dashed line). Circles denote actual losses recorded in full-wave simulations of the various designs at (a) $f=10\mathrm{GHz}$ [Table \ref{tab:metagrating_performance_10GHz}] and (b) $f=20\mathrm{GHz}$ [Table \ref{tab:metagrating_performance_20GHz}].} \label{fig:loss_evaluation} \end{figure} The $\delta\tilde{R}$ values corresponding to these points represent the distributed load resistance that would, according to the theory [Eq. \eqref{equ:efficiency_explicit}], yield the observed absorption. 
As the conductor loss per-unit-length is mainly determined by the wire width $w$ and operating frequency (through the skin depth $\delta_\mathrm{skin}$), with a minor dependency on the capacitor width $W$, we should expect a more-or-less constant $\delta\tilde{R}$ for each one of the plots Fig. \ref{fig:loss_evaluation}(a) and (b). Indeed, Fig. \ref{fig:loss_evaluation}(a) evaluates the conductor loss at ${f=10\mathrm{GHz}}$ to be $\delta\tilde{R}=\left(18.3\pm1.2\right)\times10^{-3}\left[\eta/\lambda\right]$; at ${f=20\mathrm{GHz}}$, the values extracted from Fig. \ref{fig:loss_evaluation}(b) correspond to $\delta\tilde{R}=\left(14.5\pm1.2\right)\times10^{-3}\left[\eta/\lambda\right]$. Since, according to Eqs. \eqref{equ:absorption_approximated} and \eqref{equ:absorption_from_currents}, the absorption is approximately proportional to $\delta\tilde{R}$ for a given $\theta_\mathrm{out}$, the $\sim 20\%$ difference between the estimated $\delta\tilde{R}$ values should translate into a $\sim 20\%$ difference in $\eta_\mathrm{loss}$ at the different operating frequencies, consistent with the results recorded in Tables \ref{tab:metagrating_performance_10GHz} and \ref{tab:metagrating_performance_20GHz}. We compare these assessments with the analytical approximation for conductor resistance in \cite[Eq. (4.11)]{Lee2003}, treating, once more, the flat $w$-wide wire [Fig. \ref{fig:physical_configuration}(c)] as a rounded conductor with an effective radius of $r_\mathrm{eff}=w/4$ \cite{Tretyakov2003}.
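Under this rounded-conductor approximation, the skin-effect resistance can be evaluated numerically; a short sketch (illustrative, assuming copper conductivity $\sigma=5.8\times10^{7}\,\mathrm{S/m}$ and the $w=3\,\mathrm{mil}$ trace width used in the designs):

```python
import math

def wire_resistance(f, w=76.2e-6, sigma=5.8e7):
    """Skin-effect estimate of the distributed wire resistance,
    dR ~ 1 / (2 pi r_eff sigma delta_skin), in units of eta/lambda."""
    mu0 = 4 * math.pi * 1e-7
    eta = math.sqrt(mu0 / 8.8541878128e-12)    # free-space wave impedance
    lam = 299792458.0 / f                      # free-space wavelength
    r_eff = w / 4                              # effective round-wire radius
    delta_skin = math.sqrt(2.0 / (2.0 * math.pi * f * mu0 * sigma))
    dR = 1.0 / (2.0 * math.pi * r_eff * sigma * delta_skin)   # [ohm/m]
    return dR / (eta / lam)

# ~17.3e-3 [eta/lambda] at 10 GHz, ~12.3e-3 [eta/lambda] at 20 GHz
print(wire_resistance(10e9), wire_resistance(20e9))
```

This reproduces the values quoted below for the two operating frequencies.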
This results in the following approximated expression for the distributed load resistance \begin{equation} \delta\tilde{R}\approx\frac{1}{2\pi r_\mathrm{eff}\sigma\delta_\mathrm{skin}}, \label{equ:resistance_approximation} \end{equation} where the copper conductivity $\sigma$ is the same as the one used in simulations (Section \ref{subsec:synthesis}), and the skin depth is given by $\delta_\mathrm{skin}=\sqrt{2/\left(2\pi f\mu_0\sigma\right)}$; the vacuum permeability is $\mu_0=4\pi\times10^{-7}\mathrm{\left[H/m\right]}$. For the given conductor width ${w=3\mathrm{mil}=76.2\mathrm{\mu m}}$, this approximation yields ${\delta\tilde{R}=17.3\times10^{-3}\left[\eta/\lambda\right]}$ at $f=10\mathrm{GHz}$, and ${\delta\tilde{R}=12.3\times10^{-3}\left[\eta/\lambda\right]}$ at $f=20\mathrm{GHz}$, in reasonable agreement with the average values evaluated based on Fig. \ref{fig:loss_evaluation}. These results demonstrate the physical insight and quantitative tools provided by the detailed analytical model, directly relating the actual meta-atom geometry and constituents to the overall device losses. These relations indicate how the beam-splitter absorption can be tuned by suitable modification of the copper features, within the limitations posed by the metagrating configuration corresponding to the desirable split angle. \subsubsection{Reactance deviation and frequency response} \label{subsubsec:reactance} Next, we examine the effect of small deviations from the optimal reactance value [Eq. \eqref{equ:Z_tilde_condition}] on the metagrating performance. In terms of the expressions for the coupling efficiencies defined in Eq. \eqref{equ:efficiency_explicit}, we consider a metagrating with given (constant) conductor losses $\delta\tilde{R}$, and analyze the splitting efficiency $\eta_\mathrm{split}$ as a function of the reactance deviation $\delta\tilde{X}\neq 0$.
First, we observe that, regardless of the wire resistance, the maximal splitting efficiency is achieved for $\delta\tilde{X}=0$; in other words, the value of the optimal reactance remains the one given by Eq. \eqref{equ:Z_tilde_condition}, independently of the losses in the system. This is notable, as in many devices, introduction of losses requires recalculation of the optimal reactive components (e.g., as in metasurfaces based on cascaded impedance sheets \cite{Pfeiffer2014_3}). As before, we quantify the device sensitivity to deviation from the optimal set of parameters by calculating the reactance deviation $\delta\tilde{X}_{90\%}$ for which the splitting efficiency decreases to $90\%$ of its maximal value, for a given small distributed resistance $\delta\tilde{R}/\tilde{R}_g\ll1$. Using Eq. \eqref{equ:efficiency_explicit}, we evaluate this value as \begin{equation} \eta_\mathrm{split}=90\%\left.\eta_\mathrm{split}\right|_{\delta\tilde{X}=0}\Rightarrow\left|\delta\tilde{X}_{90\%}\right|\approx\!\frac{1}{3}\tilde{R}_g. \label{equ:90_percent_splitting_point} \end{equation} This result indicates that the device performance is most sensitive to load reactance deviations for working points in which $\sin\theta_\mathrm{out}\sin^2\left(kh\right)$ is minimal [Eq. \eqref{equ:effective_grid_resistance}]. Although this proportionality to $\tilde{R}_g$ is very similar to the one discussed in Subsection \ref{subsubsec:losses} in the context of losses, we would like to offer here a somewhat different perspective to elucidate the origin of this dependency as it applies to reactance deviations. As discussed in the previous subsection, the wire-generated fields experience an image-source interference, affecting the ability to cancel specular reflection for a given induced current, following $\left.E_x^{\mathrm{wire}}\right|_{\mathrm{fund}}=-j\left(\eta/\lambda\right)I\sin\theta_\mathrm{out}\sin\left(kh\right)$ [Eq. \eqref{equ:total_fields_below}]. 
Similarly, the incident and reflected fields also undergo the same interference effects, such that the total \emph{external} field applied on the wires is ${\left.E_x^\mathrm{ext}\right|_{z=-h}=2jE_\mathrm{in}\sin\left(kh\right)}$ [Eq. \eqref{equ:external_fields}]. Effectively, this is the field that excites the current in the (passive) polarizable loaded wires, so as to generate the desired scattering phenomena. Therefore, when $\sin\left(kh\right)\rightarrow0$, both the external fields and the wire-generated fields destructively interfere at the metagrating plane $z=-h$. In other words, for a given incident field amplitude $E_\mathrm{in}$, the external field at the metagrating plane $\left.E_x^\mathrm{ext}\right|_{z=-h}$ would be very small; thus, it would be very challenging to excite significant currents in the passive loaded wires. On the other hand, for a given induced current $I$, the amplitude of the $n=0$ FB harmonic $\left.E_x^{\mathrm{wire}}\right|_{\mathrm{fund}}$ would also be very small; thus, very high currents would be necessary to generate the fields required to eliminate specular reflection. Overall, around these destructive interference working points, enormous currents are generated by vanishingly small exciting fields, by design. Consequently, the loaded wires effectively implement a transadmittance amplification system with an extremely high gain. Therefore, any small deviation from the design specifications, equivalent to a shift in the effective "gain", would cause substantial discrepancies in the induced currents with respect to the required ones; subsequently, a rapid deterioration in the splitting efficiency is expected around these working points.
According to the detailed analytical model, the severity of this double destructive interference effect can be quantified by the product of these two factors, namely, ${I/\left.E_x^\mathrm{ext}\right|_{z=-h}=1/\left[2\left(\eta/\lambda\right)\sin\theta_\mathrm{out}\sin^2\left(kh\right)\right]=1/\tilde{R}_g}$, elucidating the dependency observed in Eq. \eqref{equ:90_percent_splitting_point}. We can use Eq. \eqref{equ:90_percent_splitting_point} in conjunction with Eq. \eqref{equ:capacitor_width} to estimate the maximal allowed deviation in the capacitor width that would still retain $\eta_\mathrm{split}$ above $90\%$ of its maximum. The fractional capacitor-width deviation tolerance, $\Delta W/W$, predicted correspondingly, is presented in Fig. \ref{fig:capacitor_width_tolerance_20GHz} as a function of the split angle, for the metagratings synthesized in Section \ref{subsec:synthesis}; for brevity, results are shown only for the designs operating at $f=20\mathrm{GHz}$. Simultaneously, we have extracted from full-wave simulations the actual tolerances obtained for the corresponding physical realizations [Fig. \ref{fig:physical_configuration}(c)]; these are denoted as red circles in Fig. \ref{fig:capacitor_width_tolerance_20GHz}. The good agreement between the predicted and simulated values serves as another verification of the analytical model, demonstrating its efficacy in assessing the performance of a given design in terms of the detailed meta-atom geometrical parameters. Note that the working points in which slightly larger discrepancies occur are the ones for which the analytical model incurs slight errors in predicting the optimal capacitor width to begin with [Fig. \ref{fig:load_design_20GHz}(b)]. 
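The sensitivity threshold of Eq. \eqref{equ:90_percent_splitting_point} follows directly from Eq. \eqref{equ:efficiency_explicit}; a brief numerical check (illustrative, with both deviations normalized to $\tilde{R}_g$):

```python
import math

def eta_split(dX, dR=0.0):
    """Splitting efficiency of Eq. (efficiency_explicit),
    dR and dX normalized to the effective grid resistance R_g."""
    return 1.0 / ((1.0 + dR) ** 2 + dX ** 2)

# Lossless case: solving eta_split(dX) = 0.9 * eta_split(0) in closed form
# gives dX_90 = sqrt(1/0.9 - 1) ~= 1/3, i.e. |delta X_90%| ~= R_g / 3
dX90 = math.sqrt(1.0 / 0.9 - 1.0)
assert abs(eta_split(dX90) - 0.9 * eta_split(0.0)) < 1e-12
print(round(dX90, 3))   # 0.333

# With small losses the tolerance barely changes: dX_90 scales with (1 + dR)
dX90_lossy = (1.0 + 0.02) * dX90
assert abs(eta_split(dX90_lossy, 0.02) - 0.9 * eta_split(0.0, 0.02)) < 1e-12
```

The check also illustrates the observation above that the optimal reactance, and essentially its tolerance window, are unaffected by small conductor losses.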
\begin{figure}[tb] \includegraphics[width=8cm]{capacitorWidthTolerance_20GHz.pdf}% \caption{Fractional capacitor width tolerance as a function of the splitting angle, for metagratings operating at $f=20\mathrm{GHz}$; the deviation range $\Delta W$ is defined so as to guarantee ${\eta_\mathrm{split}\geq90\%}$. Predictions based on Eq. \eqref{equ:90_percent_splitting_point} and Eq. \eqref{equ:capacitor_width} (blue solid lines) are compared to the actual tolerances extracted from full-wave simulations of the physical structure (red circles).} \label{fig:capacitor_width_tolerance_20GHz} \end{figure} A comparison between Fig. \ref{fig:capacitor_width_tolerance_20GHz} and Fig. \ref{fig:grid_resistance} indicates that, as implied by Eq. \eqref{equ:90_percent_splitting_point}, the tolerance to inaccuracies in the load reactance follows closely the trend of $\tilde{R}_g$. Specifically, the most sensitive working points occur for $\theta_\mathrm{out}\rightarrow30^\circ$, $\theta_\mathrm{out}\rightarrow60^\circ$, and $\theta_\mathrm{out}\rightarrow90^\circ$, where $\tilde{R}_g$ approaches its minima, and the highest tolerance is recorded around $\theta_\mathrm{out}\approx58^\circ$ and $\theta_\mathrm{out}\approx77^\circ$, very close to the maxima of $\tilde{R}_g$. Nevertheless, a closer examination reveals that the position of the \emph{global} maximum in the two figures is different. This is due to the fact that the fractional capacitor-width tolerance also depends on the nominal value of $W$, corresponding to the nominal reactance at each of the working points (Fig. \ref{fig:load_design_20GHz}); however, these nominal values are not taken into account in Eq. \eqref{equ:90_percent_splitting_point}. Therefore, while the general trends should be very similar, some quantitative differences are expected. The same physical considerations lead us to hypothesize that the tolerance to changes in the operating frequency should also follow a trend similar to that of $\tilde{R}_g$.
As discussed after Eq. \eqref{equ:90_percent_splitting_point}, at the points where the double destructive interference occurs, the metagrating exhibits an extreme sensitivity to deviations from the nominal design parameters, due to the astronomical by-design induced-current-to-applied-field ratio. Correspondingly, around these working points we would expect the smallest operational bandwidth. Evaluating the $90\%$ splitting efficiency bandwidth in closed form is more complicated, as frequency variations modify the effective splitting angle following Eq. \eqref{equ:Lambda_period}, as well as cause deviations from the relation Eq. \eqref{equ:Z_tilde_condition} between the load impedance and metagrating geometry; while linearization of the frequency response is possible, the analytical expressions are cumbersome, and yield little physical intuition. On the other hand, the bandwidth can be implicitly evaluated from the analytical model in a straightforward manner, allowing us to probe our hypothesis. To this end, we calculate the scattered fields for metagratings designed at $f=20\mathrm{GHz}$ (i.e. with fixed $h$, $W$, and $\Lambda$, extracted, respectively, from Fig. \ref{fig:hCondition}, Fig. \ref{fig:load_design_20GHz}, and Eq. \eqref{equ:Lambda_period}), excited by normally-incident plane waves at different frequencies. As the distributed reactance at $f=20\mathrm{GHz}$ is known and is capacitive [Fig. \ref{fig:load_design_20GHz}(a)], the load reactance as a function of frequency can be readily deduced by considering its typical inversely proportional frequency dependence. Hence, the problem at hand reduces to the one of scattering off a \emph{given} loaded wire array in front of a PEC, for which the fields below the metagrating are given by Eq. \eqref{equ:total_fields_below}, with the induced current $I$ evaluated via Eq. \eqref{equ:Ohms_law_Poisson}. The fraction of the incident power coupled to the various FB modes can be subsequently assessed from Eq.
\eqref{equ:efficiency_definitions}. Note that when deriving these equations, we did not assume anything regarding the values of the metagrating parameters, making them applicable for the desirable calculation. The fractional $90\%$ splitting-efficiency bandwidth calculated correspondingly from the analytical model is presented in Fig. \ref{fig:frequency_bandwidth_20GHz} (blue solid line), along with the bandwidths extracted from the simulated metagrating geometries (red circles), as a function of the various split angles. The predicted and actual frequency bandwidths agree remarkably well, demonstrating the high accuracy of the formulation when applied to realistic physical structures. We note that within the frequency range indicated by $\Delta f$ the splitting efficiency remains very high, although the actual split angle varies with frequency [Eq. \eqref{equ:Lambda_period}]. Towards the edges of the split-angle interval $\left(30^\circ,90^\circ\right)$, frequency changes may drive the $\pm1$ FB modes towards the evanescent spectrum, or allow higher FB modes to be excited, which also limits the achievable bandwidths. These bandwidths may not seem very impressive at first sight; however, one should bear in mind that these refer to $90\%$ performance bandwidths, and not to the typical $50\%$ (or 3dB) performance points. Hence, the values plotted in Fig. \ref{fig:frequency_bandwidth_20GHz} actually correspond to a rather moderate frequency response (at least away from the plot minima), consistent with the observations of \cite{Radi2017}. \begin{figure}[tb] \includegraphics[width=8cm]{frequencyBandwidth_HR_20GHz.pdf}% \caption{Fractional frequency bandwidth as a function of the splitting angle, for metagratings designed for operation at $f=20\mathrm{GHz}$; the deviation range $\Delta f$ is defined so as to guarantee ${\eta_\mathrm{split}\geq90\%}$.
Predictions based on the analytical model (blue solid lines) are compared to the actual bandwidth extracted from full-wave simulations of the physical structure (red circles).} \label{fig:frequency_bandwidth_20GHz} \end{figure} Importantly, the evaluated fractional bandwidths confirm our hypothesis, as their trend clearly follows that of the effective grid resistance [Fig. \ref{fig:grid_resistance}]. Indeed, the working points in which the image-source interference causes high currents to be induced in response to very small applied fields ($\theta_\mathrm{out}\rightarrow30^\circ$, $\theta_\mathrm{out}\rightarrow60^\circ$, and $\theta_\mathrm{out}\rightarrow90^\circ$) exhibit the smallest bandwidths, due to the high sensitivity to small variations in the design parameters [see discussion after Eq. \eqref{equ:90_percent_splitting_point}]. On the other hand, away from these points of destructive interference, the device performance is quite stable with respect to moderate frequency variations, up to the inevitable change in the split angle. \section{Conclusion} \label{sec:conclusion} To conclude, we have presented a detailed analytical model for metagrating beam splitters, based on loaded conducting wire arrays. With respect to previous reports, the formulation describes electrically-polarizable metagratings excited by TE-polarized fields, more practical for realization of planar devices, and derives explicit relations between the device performance parameters and the individual meta-atom load, including realistic losses. From a synthesis perspective, these relations allow an almost-analytical prediction of the required meta-atom geometry, significantly reducing the design effort. From an analysis point of view, the ability to naturally integrate conductor losses, and deviations from the nominal reactance and frequency operating conditions, provides a convenient analytical framework to investigate the effects of these parasitics on the metagrating performance.
Specifically, we have revealed that the metagratings feature distinct preferable working points. Both in terms of losses and in terms of reactance deviation and frequency response, designs that operate close to the points where the effective grid resistance $\tilde{R}_g$ tends to zero are more prone to significant performance reduction, exhibiting extremely high sensitivity to conductor losses, load geometry inaccuracies, and frequency shifts. Relying on the analytical derivation, we have shown that these phenomena stem from fundamental interference processes taking place in the device. At wire-PEC separation distances where destructive interference occurs for both the incident and wire-generated fields, extremely-high currents are expected to be excited by overall extremely-low effective fields. These extreme operating conditions lead to high sensitivity to design parameters as well as to significant losses, due to the large by-design transadmittance "gain" and the large conducted currents. These physical effects are very basic and general, and thus are expected to be observed in any metagrating system of this sort. Interestingly, these problematic working points are not correlated with the typical challenging operating conditions of beam-manipulating metasurfaces \cite{Epstein2014, Epstein2014_2, Wong2016, Epstein2016_3, Epstein2016_4, Asadchy2016}, in which performance reduction is commonly associated with large wave-impedance mismatch. In fact, for the investigated metagrating devices, some of the best working points actually occur for extremely wide-angle beam splitting. The detailed model, verified with full-wave simulations of realistic physical structures, thus provides both a set of efficient semianalytical tools for synthesis and analysis, and physical insight regarding the dominant processes taking place within the device.
Our observations also highlight the immense potential of these devices for a variety of wave-manipulation applications, in consistency with previous reports \cite{Sounas2016, Wong2017, Memarian2017, PaniaguaDominguez2017, Radi2017, Wong2017_1}. In particular, when suitable working points are chosen, these metagratings can split a normally-incident beam into two equal-power beams propagating at very large oblique angles ($\sim80^\circ$) with minimal absorption, moderate bandwidth, and substantial resilience to fabrication inaccuracies. In fact, such a perfect wide-angle reflect-mode beam-splitting is still considered a very challenging problem to solve accurately with conventional metasurfaces \cite{Estakhri2016,Estakhri2016_1}, whereas metagratings feature a much simpler structure, requiring only the design of a single meta-atom (which can be done semianalytically following our derivation). Finally, it is important to note that although the synthesis and analysis presented herein were demonstrated using metagratings operating at microwave frequencies, the derivation and observations are not restricted to this frequency range. Moreover, the same meta-atom structures have been used in the past to devise metasurfaces for terahertz and optical applications \cite{Zhao2012, Pfeiffer2014, Kuznetsov2015, Chang2017}. Hence, the presented analytical model could facilitate effective semianalytical design of novel low-loss, robust, ultrathin devices for field manipulation across the electromagnetic spectrum, with the highlighted physical observations guiding the synthesis to enhance performance by judicious choice of working points.
\section*{Acknowledgments} Financial support from the Deutsche Forschungsgemeinschaft (German Research Foundation) through grant IRTG 2379 is gratefully acknowledged. \section{Software list}\label{sec:table} We present a list of packages that support some form of tensor computations. To be considered for inclusion, a package must offer functionality in at least one of the following categories. \begin{itemize} \item \textbf{Data Manipulation} (DatM): Any operation related to the layout or storage of tensors, such as tensor transposition, reshaping, conversion between different storage formats, \dots \item \textbf{Element-Wise Operations} (EWOps): Any kind of element-by-element operation such as addition/subtraction, and/or reductions such as norms, min, max, \dots \item \textbf{Contractions} (Con): General contractions between two or more tensors. Currently the survey does not differentiate between binary, ternary, or hypercontractions. \item \textbf{Specific Contractions} (SpecCon): Specific operations that qualify as specific contractions, e.g., Tensor Times Vector (TTV), Tensor Times Matrix (TTM), Matricized Tensor Times Khatri-Rao Product (MTTKRP), \dots \item \textbf{Decompositions} (Decomp): At least one tensor decomposition, including but not limited to the Canonical Polyadic Decomposition (CPD or CP, also known as PARAllel FACtors analysis, PARAFAC), the Tucker Decomposition, Tensor Train, and their variants. \end{itemize} The following are the key aspects of a package that we aim to focus on. \begin{itemize} \item \textbf{Language}: What language is it written in, and, in the case of compilers/transpilers, what language does it generate code in (denoted by a $\rightarrow$)? \item \textbf{Tensor type}: What type of tensor does it operate on? (e.g., Dense (D), Sparse (S), BlockSparse (BS), symmetric, supersymmetric, \dots) \item \textbf{Target system}: What types of computing architecture does it target? 
(e.g., CPU (C), GPU (G), Distributed Memory (D), \dots). Note: For CPU, there is currently no distinction between single-threaded and multi-threaded implementations. \item \textbf{Functionality}: Which of the categories mentioned above does it support? \end{itemize} In the following tables, packages are listed alphabetically. For each package, Table~\ref{tab:big-table} provides, when available, a hyperlink to the source code (click on the package name), and a reference to a publication or website. Additionally, each package is listed with an ID number. Tables~\ref{tab:fucntionality-dm},~\ref{tab:fucntionality-con}, and~\ref{tab:fucntionality-decomp} group all packages according to the categories DatM, Con, and Decomp, respectively. Finally, Table~\ref{tab:fucntionality-complete} attempts to gather the ``more complete'' packages, i.e., those that offer support for at least four out of the five categories described above. \input{sections/big-table.tex} \medskip \begin{longtable}{llccccc} \toprule ID & Name & \multicolumn{5}{c}{Functionality} \\ & & DatM & EWOps & SpecCon & Con & Decomp \\ \midrule \endhead \input{data/table-func-dm.data} \bottomrule \caption{\label{tab:fucntionality-dm}Packages that support Data Manipulation (DatM).} \end{longtable} \newpage \begin{longtable}{llccccc} \toprule ID & Name & \multicolumn{5}{c}{Functionality} \\ & & DatM & EWOps & SpecCon & Con & Decomp \\ \midrule \endhead \input{data/table-func-con.data} \bottomrule \caption{\label{tab:fucntionality-con}Packages that support Contractions (Con).} \end{longtable} \newpage \begin{longtable}{llcccc} \toprule ID & Name & \multicolumn{4}{c}{Decompositions} \\ & & CP & Tucker & TensorTrain & Other \\ \midrule \endhead \input{data/table-func-decomp.data} \bottomrule \caption{\label{tab:fucntionality-decomp}Packages that support Decompositions (Decomp).} \end{longtable} \newpage \begin{longtable}{lllccccc} \toprule ID & Name & Language & \multicolumn{5}{c}{Functionality} \\ & & & DatM & EWOps & 
SpecCon & Con & Decomp \\ \midrule \endhead \input{data/table-func-complete.data} \bottomrule \caption{\label{tab:fucntionality-complete}Packages that offer functionality in at least four out of the five categories listed in Section~\ref{sec:table}.} \end{longtable} \subsection{Notable omissions} Certain packages that are well known in the community for offering tensor contractions and other operations include TAMM~\cite{TAMM} and TCE \cite{TCE}. These packages were not included in the list, since both are implemented as components of a larger project, NWChem \cite{NWChem} (a software suite primarily targeted towards computational chemistry), and are not usable independently. Furthermore, while many tensor operations can be cast in terms of BLAS and LAPACK calls, e.g.,~\cite{Di_Napoli2014:210,Di_Napoli2017:318,Peise2015:380}, in this survey we only focus on packages that support multi-dimensional arrays. \section{Introduction}\label{sec:introduction} Similar to matrices, tensors arise in a multitude of disciplines in engineering and science---for instance, in computational chemistry, computational physics, chemometrics, data science, signal processing, and machine learning~\cite{Kolda,Sidiropoulos,Common,Bro,Rabanser,Cichocki}---and naturally, significant effort goes into the development of numerical software. However, in sharp contrast to the software landscape for (dense) matrix computations, which is nicely layered and organized, that of tensor computations is fragmented and largely unstructured. Indeed, the tensor counterparts to the universally-used libraries such as BLAS---collection of building blocks---and LAPACK---collection of solvers---are still missing. 
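To illustrate how a ``specific contraction'' reduces to a BLAS call, here is a minimal NumPy sketch of ours (not code from any surveyed package): a mode-1 Tensor-Times-Matrix (TTM) product is exactly one GEMM applied to the mode-1 unfolding of the tensor.

```python
import numpy as np

# A sketch (our illustration): mode-1 TTM cast as a single GEMM
# on the mode-1 unfolding of the tensor.
rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 6))   # order-3 tensor
M = rng.standard_normal((7, 4))      # matrix applied along mode 1

# Direct definition via einsum: Y[j,b,c] = sum_a M[j,a] * T[a,b,c]
Y_einsum = np.einsum('ja,abc->jbc', M, T)

# Same operation as a matrix-matrix product on the unfolding T_(1)
T1 = T.reshape(4, 5 * 6)             # mode-1 unfolding (row-major)
Y_gemm = (M @ T1).reshape(7, 5, 6)

assert np.allclose(Y_einsum, Y_gemm)
```

The reshape is free for the first mode in row-major storage; for other modes a tensor transposition is needed first, which is precisely the kind of data-manipulation kernel (DatM) many packages re-implement.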
When surveying the landscape of libraries, packages, compilers, and toolboxes\footnote{For the sake of simplicity, in the rest of this manuscript we will refer to all of these types of software simply as ``packages''.} for tensor computations, a massive replication of effort becomes apparent, and the absence of building-block libraries is certainly one of the main causes for this. Other reasons stem from the fact that tensor software is mostly driven by applications, and is therefore scattered among different communities and scientific outlets. It could be argued that matrix computations are possibly even more widespread, yet a community effort made it possible to create ``collection'' libraries and standardize interfaces already in the 1970s. The profound difference is that while the language (and notation) of linear algebra (i.e., matrix computations) is quite consistent across disparate disciplines, the same is far from true for the language of multi-linear algebra and multi-dimensional arrays (i.e., tensor computations). Even the most basic concepts, such as the number and the length of the ``axes'' of a tensor, have entirely different (and often conflicting) names in different disciplines. In short, the development of tensor software has often been carried out independently in different communities, and even within the same community there has not been any real coordinated effort. Motivated by this observation, we set out to survey the software landscape for tensor computations, aiming a) to create awareness among users of the existing packages, and b) to guide the development at large. We see this survey as an essential step towards finding common ground between different applications, and towards identifying possible divisions of concerns, with the ultimate goal of defining a set of fundamental computational building blocks. 
This survey draws from a variety of communities, which often do not code in the same programming language and, perhaps more importantly, differ notably in the symbols and nomenclature they use to describe tensor operations. In this document, we do \emph{not} address those differences, \emph{nor} aim to rank packages qualitatively, in any way. Instead, we are merely attempting to put this diverse and large set of software packages on the map, with a loose classification of the functionality they provide. This document is very much a work in progress, and we plan to keep it up-to-date by uploading new versions with some regularity. To this end, we welcome and encourage input, contributions, and corrections, to help create a more complete and fair snapshot of the current tensor software landscape. We invite readers to send us contributions via email, and kindly ask them to consult the questionnaire in Appendix \ref{app:questionnaire}. \section{Questionnaire} \label{app:questionnaire} Any cross-disciplinary investigation of this size is bound to be incomplete and to contain mistakes. We therefore kindly ask the reader to help us by emailing corrections and additions to \email{pauldj@cs.umu.se}. When providing information about a package, please consider the following questions. \begin{enumerate} \item What is the name of the package? \item Where is the source code located? \item Would you please provide a reference (preferably in BibTeX format) to a publication, preprint, or website about the package? \item In which programming language(s) is the package written? \item Which programming language(s) do users have to code in to use the package? \item Is the package standalone? Alternatively, does it depend on another package that is either in the list or that belongs in the list? \item What is the target computing architecture (CPUs, GPUs, Distributed Memory, others)? 
\item Does the package support layout-related operations, such as tensor transpositions or reshaping, \dots? \item Does it support element-wise operations, such as addition/subtraction, reductions, \dots? \item Does it support general binary contractions? If not, what are the limitations? \item Does it support only a specific subset of contractions? (e.g., TTV, TTM, MTTKRP) \item Does it support other types of contractions? Which one(s)? \item Does it support tensor decompositions? If so, please provide a comma-separated list of decompositions supported. \item Would you please describe in 1-2 sentences what the package is about, what target problem it addresses, and what functionalities it provides? \item Is there any other information about the package that you deem essential in order to describe its functionality? \end{enumerate} Thank you for your help! \section{Conclusion}\label{sec:conclusion} We provide a survey of tensor packages which arise in a wide range of fields and applications. Most of the packages are written in (or target) one of a handful of well-known programming languages, and are standalone, i.e., they do not depend on one another. This means that many packages (re-)implement, often sub-optimally, the same or similar functionality within their own codebase (e.g., tensor transposition, and specific operations such as MTTKRP and TTV). With this list we aim to help both (new) users in finding a suitable package for their needs, and developers in identifying opportunities for cooperation, modularity, and optimization. Ultimately, our goal is to create awareness about the level of redundancy that permeates the software landscape of tensor computations, and the potential implications on software quality, performance, and productivity. 
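To make concrete what gets re-implemented across codebases, here is a minimal NumPy sketch of ours (not taken from any surveyed package) of the mode-1 MTTKRP, computed both directly via einsum and via an explicit Khatri-Rao product:

```python
import numpy as np

# A sketch (our illustration): mode-1 MTTKRP for a rank-2 CP model.
rng = np.random.default_rng(1)
T = rng.standard_normal((3, 4, 5))
B = rng.standard_normal((4, 2))   # factor matrix for mode 2
C = rng.standard_normal((5, 2))   # factor matrix for mode 3

# Direct definition: M[i,r] = sum_{j,k} T[i,j,k] * B[j,r] * C[k,r]
M_einsum = np.einsum('ijk,jr,kr->ir', T, B, C)

# Same result via the Khatri-Rao product (column-wise Kronecker)
KR = np.einsum('jr,kr->jkr', B, C).reshape(4 * 5, 2)
M_kr = T.reshape(3, 4 * 5) @ KR

assert np.allclose(M_einsum, M_kr)
```

Every CP-decomposition code needs some variant of this kernel, which is why its repeated, often sub-optimal, re-implementation is a recurring theme in the surveyed packages.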
Furthermore, we see this survey as a first step to pave the way towards a set of universal, optimized building blocks, which would play the same role that BLAS and LAPACK have played (and continue to play) in the domain of numerical linear algebra.
\section{Introduction} \subsection{Tautological classes} For $g\geq 2$, let ${\mathcal{M}}_g$ be the moduli space of nonsingular, projective, genus $g$ curves over ${\mathbb C}$, and let \begin{equation}\label{g55} \pi: \mathcal{C}_g \rightarrow \mathcal{M}_g \end{equation} be the universal curve. We view ${\mathcal{M}}_g$ and $\mathcal{C}_g$ as nonsingular, quasi-projective, Deligne-Mumford stacks. However, the orbifold perspective is sufficient for most of our purposes. The relative dualizing sheaf $\omega_\pi$ of the morphism \eqref{g55} is used to define the cotangent line class $$\psi = c_1(\omega_\pi)\in A^1(\mathcal{C}_g,\mathbb{Q})\ .$$ The $\kappa$ classes are defined by push-forward, $$\kappa_r = \pi_*( \psi^{r+1}) \in A^{r}({\mathcal{M}}_g)\ .$$ The {\em tautological ring} $$R^*({\mathcal{M}}_g) \subset A^*({\mathcal{M}}_g, \mathbb{Q})$$ is the $\mathbb{Q}$-subalgebra generated by all of the $\kappa$ classes. Since $$\kappa_{0}= 2g-2 \in \mathbb{Q}$$ is a multiple of the fundamental class, we need not take $\kappa_0$ as a generator. There is a canonical quotient $$\mathbb{Q}[\kappa_1,\kappa_2, \kappa_3, \ldots] \stackrel{q}{\longrightarrow} R^*({\mathcal{M}}_g) \longrightarrow 0\ .$$ We study here the ideal of relations among the $\kappa$ classes, the kernel of $q$. We may also define a tautological ring $RH^*({\mathcal{M}}_g) \subset H^*({\mathcal{M}}_g,\mathbb{Q})$ generated by the $\kappa$ classes in cohomology. Since there is a natural factoring $$\mathbb{Q}[\kappa_1,\kappa_2, \kappa_3, \ldots] \stackrel{q}{\longrightarrow} R^*({\mathcal{M}}_g) \stackrel{c}{\longrightarrow} RH^*({\mathcal{M}}_g)$$ via the cycle class map $c$, algebraic relations among the $\kappa$ classes are also cohomological relations. Whether or not there exist {\em more} cohomological relations is not yet settled. There are two basic motivations for the study of the tautological rings $R^*({\mathcal{M}}_g)$. 
The first is Mumford's conjecture, proven in 2002 by Madsen and Weiss \cite{MW}, $$\lim_{g\rightarrow \infty} H^*({\mathcal{M}}_g, \mathbb{Q}) \ = \mathbb{Q}[\kappa_1, \kappa_2, \kappa_3, \ldots ] ,$$ determining the {\em stable} cohomology of the moduli of curves. While the $\kappa$ classes do not exhaust $H^*({\mathcal{M}}_g,\mathbb{Q})$, there are no other stable classes. The study of $R^*({\mathcal{M}}_g)$ undertaken here is from the opposite perspective --- we are interested in the ring of $\kappa$ classes for fixed $g$. The second motivation is from a large body of cycle class calculations on ${\mathcal{M}}_g$ (often related to Brill-Noether theory). The answers invariably lie in the tautological ring $R^*({\mathcal{M}}_g)$. The first definition of the tautological rings by Mumford \cite{M} was at least partially motivated by such algebro-geometric cycle constructions. \subsection{Faber-Zagier conjecture} Faber and Zagier have conjectured a remarkable set of relations among the $\kappa$ classes in $R^*({\mathcal{M}}_g)$. Our main result is a proof of the Faber-Zagier relations, stated as Theorem 1 below, by a geometric construction involving the virtual class of the moduli space of stable quotients. To write the Faber-Zagier relations, we will require the following notation. Let the variable set $$\mathbf{p} = \{\ p_1,p_3,p_4,p_6,p_7,p_9,p_{10}, \ldots\ \}$$ be indexed by positive integers {\em not} congruent to $2$ modulo $3$. Define the series \begin{multline*} \Psi(t,\mathbf{p}) = (1+tp_3+t^2p_6+t^3p_9+\ldots) \sum_{i=0}^\infty \frac{(6i)!}{(3i)!(2i)!} t^i \\ +(p_1+tp_4+t^2p_7+\ldots) \sum_{i=0}^\infty \frac{(6i)!}{(3i)!(2i)!} \frac{6i+1}{6i-1} t^i \ . \end{multline*} Since $\Psi$ has constant term 1, we may take the logarithm. Define the constants $C_r^{\text{\tiny{{\sf FZ}}}}(\sigma)$ by the formula $$\log(\Psi)= \sum_{\sigma} \sum_{r=0}^\infty C_r^{\text{\tiny{{\sf FZ}}}}(\sigma)\ t^r \mathbf{p}^\sigma \ . 
$$ The above sum is over all partitions $\sigma$ of size $|\sigma|$ which avoid parts congruent to 2 modulo 3. The empty partition is included in the sum. To the partition $\sigma=1^{n_1}3^{n_3}4^{n_4} \cdots$, we associate the monomial $\mathbf{p}^\sigma= p_1^{n_1}p_3^{n_3}p_4^{n_4}\cdots$. Let $$\gamma^{\text{\tiny{{\sf FZ}}}} = \sum_{\sigma} \sum_{r=0}^\infty C_r^{\text{\tiny{{\sf FZ}}}}(\sigma) \ \kappa_r t^r \mathbf{p}^\sigma \ . $$ For a series $\Theta\in \mathbb{Q}[\kappa][[t,\mathbf{p}]]$ in the variables $ \kappa_i$, $t$, and $p_j$, let $[\Theta]_{t^r \mathbf{p}^\sigma}$ denote the coefficient of $t^r\mathbf{p}^\sigma$ (which is a polynomial in the $\kappa_i$). \begin{Theorem} \label{dddd} { In $R^r({\mathcal{M}}_g)$, the Faber-Zagier relation $$ \big[ \exp(-\gamma^{\text{\tiny{{\sf FZ}}}}) \big]_{t^r \mathbf{p}^\sigma} = 0$$ holds when $g-1+|\sigma|< 3r$ and $g\equiv r+|\sigma|+1 \mod 2$.} \end{Theorem} The dependence upon the genus $g$ in the Faber-Zagier relations of Theorem \ref{dddd} occurs in the inequality, the modulo 2 restriction, and via $\kappa_0=2g-2$. For a given genus $g$ and codimension $r$, Theorem \ref{dddd} provides only {\em finitely} many relations. While not immediately clear from the definition, the $\mathbb{Q}$-linear span of the Faber-Zagier relations determines an ideal in $\mathbb{Q}[\kappa_1,\kappa_2, \kappa_3, \ldots]$ --- the matter is discussed in Section \ref{pppp} and a subset of the Faber-Zagier relations generating the same ideal is described. As a corollary of our proof of Theorem \ref{dddd} via the moduli space of stable quotients, we obtain the following stronger boundary result. If $g-1+|\sigma|< 3r$ and $g\equiv r+|\sigma|+1 \mod 2$, then \begin{equation} \big[ \exp(-\gamma^{\text{\tiny{{\sf FZ}}}}) \big]_{t^r \mathbf{p}^\sigma} \in R^*(\partial\overline{\mathcal{M}}_g)\ . 
\end{equation} Not only is the Faber-Zagier relation 0 on $R^*(\mathcal{M}_g)$, but the relation is equal to a tautological class on the boundary of the moduli space $\overline{\mathcal{M}}_g$. A precise conjecture for the boundary terms has been proposed in \cite{Pix}. \subsection{Gorenstein rings} By results of Faber \cite{Faber} and Looijenga \cite{L}, we have \begin{equation}\label{gvvg} \text{dim}_\mathbb{Q}\ R^{g-2}({\mathcal{M}}_g) =1, \ \ \ R^{>g-2}({\mathcal{M}}_g) =0 .\ \end{equation} A canonical parameterization of $R^{g-2}({\mathcal{M}}_g)$ is obtained via integration. Let $$\mathbb{E} \rightarrow {\mathcal{M}}_g$$ be the {\em Hodge bundle} with fiber $H^0(C,\omega_C)$ over the moduli point $[C]\in {\mathcal{M}}_g$. Let $\lambda_k$ denote the $k^{th}$ Chern class of $\mathbb{E}$. The linear map $$\epsilon: \mathbb{Q}[\kappa_1,\kappa_2, \kappa_3, \ldots] \longrightarrow \mathbb{Q}, \ \ \ \ \ \ \ f(\kappa) \stackrel{\epsilon}{\longmapsto} \int_{\overline{{\mathcal{M}}}_g} f(\kappa) \cdot \lambda_g\lambda_{g-1} $$ factors through $R^*({\mathcal{M}}_g)$ and determines an isomorphism $$\epsilon: R^{g-2}({\mathcal{M}}_g) \cong \mathbb{Q}$$ via the non-trivial evaluation \begin{equation} \label{sdffds} \int_{\overline{M}_{g}} \kappa_{g-2} \lambda_g \lambda_{g-1} = \frac{1}{2^{2g-1}(2g-1)!!} \frac{|B_{2g}|}{2g} \ . \end{equation} A survey of the construction and properties of $\epsilon$ can be found in \cite{FPlog}. The evaluations under $\epsilon$ of all polynomials in the $\kappa$ classes are determined by the following formulas. First, the Virasoro constraints for surfaces \cite{GP} imply a related evaluation previously conjectured in \cite{Faber}: \begin{equation} \label{lamgg} \int_{\overline{M}_{g,n}} \psi_1^{\alpha_1} \cdots \psi_n^{\alpha_n} \lambda_g \lambda_{g-1} = \frac{(2g+n-3)! (2g-1)!!}{(2g-1)!\prod_{i=1}^n (2\alpha_i-1)!!} \int_{\overline{M}_{g}} \kappa_{g-2} \lambda_g \lambda_{g-1}, \end{equation} where $\alpha_i>0$. 
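The evaluation \eqref{sdffds} can be sanity-checked for small genus with exact rational arithmetic; the following short script is our own check, using the classically known value $\int_{\overline{M}_2}\lambda_2\lambda_1 = 1/5760$ together with $\kappa_0 = 2g-2$:

```python
from fractions import Fraction
from math import prod

def kappa_lambda_evaluation(g, B2g):
    """Right side of (sdffds): 1/(2^(2g-1) (2g-1)!!) * |B_{2g}| / (2g)."""
    double_fact = prod(range(1, 2 * g, 2))  # (2g-1)!! = 1*3*...*(2g-1)
    return Fraction(1, 2 ** (2 * g - 1) * double_fact) * abs(B2g) / (2 * g)

# Bernoulli numbers: B_4 = -1/30, B_6 = 1/42
v2 = kappa_lambda_evaluation(2, Fraction(-1, 30))
v3 = kappa_lambda_evaluation(3, Fraction(1, 42))

# g = 2: kappa_0 = 2g-2 = 2 and lambda_2*lambda_1 integrates to 1/5760,
# so the formula must return 2 * (1/5760) = 1/2880.
assert v2 == Fraction(1, 2880)
```

For $g=3$ the formula yields $1/120960$; no independent value is asserted there, the point being only that the Bernoulli-number expression is easy to evaluate exactly.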
Second, a basic relation (due to Faber) holds: \begin{equation}\label{ffbb} \int_{\overline{M}_{g,n}} \psi_1^{\alpha_1} \cdots \psi_n^{\alpha_n} \lambda_g \lambda_{g-1} = \sum_{\sigma\in \mathsf{S}_n} \int_{\overline{M}_g} \kappa_\sigma \lambda_g\lambda_{g-1}\ . \end{equation} The sum on the right is over all elements of the symmetric group $\mathsf{S}_n$, $$\kappa_\sigma = \kappa_{|c_1|} \ldots \kappa_{|c_r|}$$ where $c_1,\ldots ,c_r$ are the parts of the set partition obtained from the cycle decomposition of $\sigma$, and $$|c_i|= \sum_{j\in c_i} (\alpha_j -1)\ .$$ Relation \eqref{ffbb} is triangular and can be inverted to express the $\epsilon$ evaluations of the $\kappa$ monomials in terms of \eqref{lamgg}. Computations of the tautological rings in low genera led Faber to formulate the following conjecture in 1991. \begin{Conjecture} For all $g\geq 2$ and all $0 \leq k \leq g-2$, the pairing \begin{equation} \label{PPP} R^{k}({\mathcal{M}}_g) \times R^{g-2-k}({\mathcal{M}}_g) \xrightarrow{\ \ \epsilon \ \circ\ \cup\ \ } \mathbb{Q}\end{equation} is perfect. \end{Conjecture} \noindent The pairing \eqref{PPP} is the ring multiplication $\cup$ of $R^*({\mathcal{M}}_g)$ composed with $\epsilon$. A perfect pairing identifies the first vector space with the dual of the second. If Faber's conjecture is true in genus $g$, then $R^*({\mathcal{M}}_g)$ is a Gorenstein local ring. Let $\mathcal{I}_g \subset R^*({\mathcal{M}}_g)$ be the ideal determined by the kernel of the pairing \eqref{PPP} in Faber's conjecture. Define the {\em Gorenstein quotient} $$R^*_{\mathrm{G}}({\mathcal{M}}_g) = \frac{R^*({\mathcal{M}}_g)}{\mathcal{I}_g}\ .$$ If Faber's conjecture is true for $g$, then $\mathcal{I}_g=0$ and $R^*_{\mathrm{G}}({\mathcal{M}}_g)= R^*({\mathcal{M}}_g)$. The pairing \eqref{PPP} can be evaluated directly on polynomials in the $\kappa$ classes via \eqref{sdffds}-\eqref{ffbb}. 
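The symmetric-group sum on the right side of \eqref{ffbb} is straightforward to enumerate; the following sympy sketch (our own illustration, with a symbolic placeholder for the $\kappa$ classes) confirms, e.g., that for $n=2$ and $(\alpha_1,\alpha_2)=(2,3)$ the sum is $\kappa_1\kappa_2+\kappa_3$:

```python
from itertools import permutations
from sympy import IndexedBase, expand

kappa = IndexedBase('kappa')  # symbolic stand-in for the kappa classes

def cycles(perm):
    # cycle decomposition of a permutation given as a tuple on {0,...,n-1}
    n, seen, out = len(perm), set(), []
    for s in range(n):
        if s in seen:
            continue
        c, j = [], s
        while j not in seen:
            seen.add(j)
            c.append(j)
            j = perm[j]
        out.append(c)
    return out

def kappa_sum(alpha):
    # right side of (ffbb): sum over S_n of prod_c kappa_{sum_{j in c}(alpha_j - 1)}
    total = 0
    for perm in permutations(range(len(alpha))):
        term = 1
        for c in cycles(perm):
            term *= kappa[sum(alpha[j] - 1 for j in c)]
        total += term
    return expand(total)

# n = 2, alpha = (2,3): the identity gives kappa_1*kappa_2, the 2-cycle gives kappa_3
assert kappa_sum((2, 3)) == expand(kappa[1] * kappa[2] + kappa[3])
```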
The Gorenstein quotient $R_{\mathrm{G}}^*({\mathcal{M}}_g)$ is completely determined by the $\kappa$ evaluations and the ranks \eqref{gvvg}. The ring $R_{\mathrm{G}}^*({\mathcal{M}}_g)$ can therefore be studied as a purely algebro-combinatorial object. Faber and Zagier conjectured the relations of Theorem \ref{dddd} from a concentrated study of the Gorenstein quotient $R^*_{\mathrm{G}}({\mathcal{M}}_g)$. The Faber-Zagier relations were first written in 2000 and were proven to hold in $R^*_{\mathrm{G}}({\mathcal{M}}_g)$ in 2002. The validity of the Faber-Zagier relations in $R^*({\mathcal{M}}_g)$ has been an open question since then. \subsection{Other relations?} By substantial computation, Faber has verified Conjecture 1 holds for genus $g< 24$. Moreover, his calculations show the Faber-Zagier set yields {\em all} relations among $\kappa$ classes in $R^*({\mathcal{M}}_g)$ for $g< 24$. However, he finds the Faber-Zagier relations of Theorem \ref{dddd} do {\em not} yield a Gorenstein quotient in genus 24. Let $$\mathsf{FZ}_g \subset \mathbb{Q}[\kappa_1, \kappa_2, \kappa_3, \ldots]$$ be the ideal determined by the Faber-Zagier relations of Theorem \ref{dddd}, and let $$R^*_{\mathrm{FZ}}({\mathcal{M}}_g) = \frac{\mathbb{Q}[\kappa_1, \kappa_2, \kappa_3, \ldots]}{\mathsf{FZ}_g}\ .$$ Faber finds a mismatch in codimension 12, \begin{equation}\label{ineq} R^{12}_{\mathrm{FZ}}({\mathcal{M}}_{24}) \neq R^{12}_{\mathrm{G}}({\mathcal{M}}_{24})\ . \end{equation} Exactly 1 more relation holds in the Gorenstein quotient. To the best of our knowledge, a relation in $R^*({\mathcal{M}}_g)$ which is not in the span of the Faber-Zagier relations of Theorem \ref{dddd} has not yet been found. The following prediction is consistent with all present calculations. \begin{Conjecture} For all $g\geq 2$, the kernel of $$\mathbb{Q}[\kappa_1,\kappa_2, \kappa_3, \ldots] \stackrel{q}{\longrightarrow} R^*({\mathcal{M}}_g) \longrightarrow 0$$ is the Faber-Zagier ideal $\mathsf{FZ}_g$. 
\end{Conjecture} \noindent Conjectures 1 and 2 are both true for $g<24$. By the inequality \eqref{ineq}, Conjectures 1 and 2 can {\em not} both be true for all $g$. Which is false? Finally, we note the above discussion might have a different outcome if the tautological ring $RH^*({\mathcal{M}}_g)$ in cohomology is considered instead. Perhaps there are more relations in cohomology? These questions provide a very interesting line of inquiry. \subsection{Plan of the paper} We start the paper in Section \ref{FFF} with a modern treatment of Faber's classical construction of relations among the $\kappa$ classes. The result, in Wick form, is stated as Theorem \ref{ttt} of Section \ref{wickf}. While the outcome is an effective source of relations, their complexity has so far defied a complete analysis. After reviewing stable quotients on curves in Section \ref{stq}, we derive an explicit set of $\kappa$ relations from the virtual geometry of the moduli space of stable quotients in Section \ref{MOP}. The resulting equations are more tractable than those obtained by classical methods. In a series of steps, the stable quotient relations are transformed to simpler and simpler forms. The first step, Theorem \ref{mmnn}, comes almost immediately from the virtual localization formula \cite{GrP} applied to the moduli space of stable quotients. After further analysis in Section \ref{LLL}, the simpler form of Proposition \ref{better} is found. A change of variables is applied in Section \ref{trans} that transforms the relations to Proposition \ref{best}. Our final result, Theorem \ref{dddd}, establishes the previously conjectural set of tautological relations proposed more than a decade ago by Faber and Zagier. The proof of Theorem \ref{dddd} is completed in Section \ref{pppp}. A natural question is whether Theorem \ref{dddd} can be extended to yield explicit relations in the tautological ring of $\overline{{\mathcal{M}}}_{g,n}$. 
A precise conjecture of exactly such an extension is given in \cite{Pix}. There is no doubt that our methods here can also be applied to investigate tautological relations in $\overline{{\mathcal{M}}}_{g,n}$. Whether the simple form of \cite{Pix} will be obtained remains to be seen. A different method, valid only in cohomology, of approaching the conjecture of \cite{Pix} is pursued in \cite{PPZ}. \subsection{Acknowledgements} We first presented our proof of the Faber-Zagier relations in a series of lectures at Humboldt University in Berlin during the conference {\em Intersection theory on moduli space} in 2010. A detailed set of notes, which is the origin of the current paper, is available \cite{PP}. We thank G. Farkas for the invitation to speak there. Discussions with C. Faber played an important role in the development of our ideas. The research reported here was done during visits of A.P. to IST Lisbon during the year 2010-11. The paper was written at ETH Z\"urich during the year 2011-12. R.P. was supported in Lisbon by a Marie Curie fellowship and a grant from the Gulbenkian foundation. In Z\"urich, R.P. was partially supported by the Swiss National Science Foundation grant SNF 200021143274. A.P. was supported by a NDSEG graduate fellowship. \section{Classical vanishing relations} \label{FFF} \subsection{Construction} Faber's original relations in his article {\em Conjectural description of the tautological ring} \cite{Faber} are obtained from a very simple geometric construction. As before, let $$\pi: \mathcal{C}_g \rightarrow \mathcal{M}_g$$ be the universal curve over the moduli space, and let $$\pi^d: \mathcal{C}^d_g \rightarrow \mathcal{M}_g $$ be the map associated to the $d^{th}$ fiber product of the universal curve. For every point $[C,p_1,\ldots,p_d]\in\mathcal{C}^d_g$, we have the restriction map \begin{equation}\label{gy77} H^0(C,\omega_C) \rightarrow H^0(C,\omega_C|_{p_1+\ldots+p_d})\ . 
\end{equation} Since the canonical bundle $\omega_C$ has degree $2g-2$, the restriction map is injective if $d>2g-2$. Let $$\Omega_d \rightarrow \mathcal{C}_g^d$$ be the rank $d$ bundle with fiber $H^0(C,\omega_C|_{p_1+\ldots+p_d})$ over the moduli point $[C,p_1,\ldots,p_d]\in\mathcal{C}^d_g$. If $d>2g-2$, the restriction map \eqref{gy77} yields an exact sequence over $\mathcal{C}^d_g$, $$ 0 \rightarrow \mathbb{E} \rightarrow \Omega_{d} \rightarrow Q_{d-g} \rightarrow 0$$ where $\mathbb{E}$ is the rank $g$ Hodge bundle and $Q_{d-g}$ is the quotient bundle of rank $d-g$. We see $$c_k(Q_{d-g}) = 0\in A^k(\mathcal{C}^d_g)\ \ \ \text{for} \ \ \ k>d-g \ .$$ After cutting the vanishing Chern classes $c_k(Q_{d-g})$ with cotangent line and diagonal classes in $\mathcal{C}^d_g$ and pushing forward via $\pi^d_*$ to ${\mathcal{M}}_g$, we arrive at Faber's relations in $R^*({\mathcal{M}}_g)$. \subsection{Wick form} \label{wickf} From our point of view, at the center of Faber's relations in \cite{Faber} is the function $$\Theta(t,x) = \sum_{d=0}^\infty \prod_{i=1}^d {(1+it)} \ \frac {(-1)^d}{d!} \frac{x^d}{t^{d}} \ .$$ The differential equation $$ t(x+1)\frac{d}{dx} \Theta + (t+1) \Theta = 0 \ $$ is easily found. Hence, we obtain the following result. \begin{Lemma} $\Theta = (1+x)^{-\frac{t+1}{t}}\ .$ \end{Lemma} We introduce a variable set $\mathbf{z}$ indexed by pairs of integers $$\mathbf{z} = \{ \ {z}_{i,j} \ | \ i \geq 1, \ \ j\geq i-1 \ \} \ .$$ For monomials $$\mathbf{z}^\sigma = \prod_{i,j} z_{i,j}^{\sigma_{i,j}},$$ we define $$\ell(\sigma) = \sum_{i,j} i \sigma_{i,j}, \ \ \ |\sigma| = \sum_{i,j} j \sigma_{i,j} \ .$$ Of course $|\text{Aut}(\sigma)| = \prod_{i,j} \sigma_{i,j} !$ \ . 
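The closed form of Lemma 1 can be checked order by order; the following sympy sketch (our own verification, with $t$ fixed at an arbitrary nonzero rational value) compares the defining series of $\Theta$ against $(1+x)^{-(t+1)/t}$:

```python
from sympy import Rational, symbols, series, expand

x = symbols('x')
t = Rational(2, 5)  # any fixed nonzero rational value of t

N = 7  # compare Taylor coefficients in x up to order N-1
Theta = Rational(0)
for d in range(N):
    coeff = Rational(1)
    fact = 1
    for i in range(1, d + 1):
        coeff *= 1 + i * t      # prod_{i=1}^{d} (1 + i t)
        fact *= i               # d!
    Theta += coeff * (-1) ** d * x ** d / (fact * t ** d)

closed = series((1 + x) ** (-(t + 1) / t), x, 0, N).removeO()

# Lemma: Theta = (1+x)^{-(t+1)/t}; the truncations must agree exactly
assert expand(Theta - closed) == 0
```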
The variables $\mathbf{z}$ are used to define a differential operator $$ \mathcal{D} = \sum_{i,j} z_{i,j}\ t^j \left( x\frac{d}{dx}\right) ^i\ .$$ After applying $\exp(\mathcal{D})$ to $\Theta$, we obtain \begin{eqnarray*} \Theta^{\mathcal{D}} & = & \exp(\mathcal{D})\ \Theta \\ & = & \sum_\sigma \sum_{d=0}^\infty \prod_{i=1}^d {(1+it)} \ \frac {(-1)^d}{d!} \frac{x^d}{t^{d}}\ \frac{d^{\ell(\sigma)} t^{|\sigma|} {\mathbf{z}}^{\sigma}} {|\text{Aut}(\sigma)|} \end{eqnarray*} where $\sigma$ runs over all monomials in the variables $\mathbf{z}$. Define constants $C^d_r(\sigma)$ by the formula $$\log(\Theta^{\mathcal{D}})= \sum_{\sigma} \sum_{d=1}^\infty \sum_{r=-1}^\infty C^d_{r}(\sigma)\ t^r \frac{x^d}{d!} \mathbf{z}^\sigma \ .$$ By an elementary application of Wick's formula (as explained in Section \ref{cc12} below), the $t$ dependence of $\log(\Theta^{\mathcal{D}})$ has at most simple poles. Finally, we consider the following function, \begin{equation} \gamma^{\text{\tiny{{\sf F}}}}= \sum_{i\geq 1} \frac{B_{2i}}{2i(2i-1)} \kappa_{2i-1} t^{2i-1} + \sum_{\sigma} \sum_{d=1}^\infty \sum_{r=-1}^\infty C^d_r(\sigma) \ \kappa_r t^r \frac{x^d}{d!} \mathbf{z}^\sigma \ . \label{fafa} \end{equation} The Bernoulli numbers appear in the first term, $$\sum_{k=0}^ \infty B_k \frac{u^k}{k!}= \frac{u}{e^u-1}\ .$$ Denote the $t^rx^d \mathbf{z}^\sigma$ coefficient of $\exp(-\gamma^{\text{\tiny{{\sf F}}}})$ by $$\big[ \exp(-\gamma^{\text{\tiny{{\sf F}}}}) \big]_{t^rx^d \mathbf{z}^\sigma} \in \mathbb{Q}[\kappa_{-1}, \kappa_0,\kappa_1, \kappa_2, \ldots] \ .$$ Our form of Faber's equations is the following result. \begin{Theorem}\label{ttt} In $R^r({\mathcal{M}}_g)$, the relation $$ \big[ \exp(-\gamma^{\text{\tiny{{\sf F}}}}) \big]_{t^rx^d \mathbf{z}^\sigma} = 0$$ holds when $r>-g+|\sigma|$ and $d>2g-2$. \end{Theorem} In the tautological ring $R^*({\mathcal{M}}_g)$, the standard conventions $$\kappa_{-1}=0, \ \ \ \ \kappa_{0}=2g-2$$ are followed. 
For fixed $g$ and $r$, Theorem \ref{ttt} provides infinitely many relations by increasing $d$. The variables $z_{i,j}$ efficiently encode both the cotangent and diagonal operations studied in \cite{Faber}. In particular, the relations of Theorem \ref{ttt} are equivalent to a mixing of all cotangent and diagonal operations studied there. The proof of Theorem \ref{ttt} is presented in Section \ref{ppp}. \vspace{10pt} While Theorem \ref{ttt} has an appealingly simple geometric origin, the relations do not seem to fit the other forms we will see later. In particular, we do not know how to derive Theorem \ref{dddd} from Theorem \ref{ttt}. Extensive computer calculations by Faber suggest the following. \begin{Conjecture} For all $g\geq 2$, the relations of Theorem \ref{ttt} are equivalent to the Faber-Zagier relations. \end{Conjecture} In particular, despite significant effort, the relation in $R_{\mathrm{G}}^{12}({\mathcal{M}}_{24})$ which is missing in $R_{\mathsf{FZ}}^{12}({\mathcal{M}}_{24})$ has {\em not} been found via Theorem \ref{ttt}. Other geometric strategies have so far also failed to find the missing relation \cite{RW,Y}. \subsection{Proof of Theorem \ref{ttt}} \label{ppp} \subsubsection{The Chern roots of $\Omega_d$} Let $\psi_i \in A^1(\mathcal{C}^d_g, \mathbb{Q})$ be the first Chern class of the relative dualizing sheaf $\omega_\pi$ pulled back from the $i^{th}$ factor, $$\mathcal{C}^d_g \rightarrow \mathcal{C}_g\ .$$ For $i\neq j$, let $D_{ij} \in A^1(\mathcal{C}^d_g, \mathbb{Q})$ be the class of the diagonal $\mathcal{C}_g \subset \mathcal{C}^2_g$ pulled-back from the product of the $i^{th}$ and $j^{th}$ factors, $$\mathcal{C}^d_g \rightarrow \mathcal{C}^2_g\ .$$ Finally, let $$\Delta_i = D_{1,i} + \ldots + D_{i-1,i}\ \in A^1(\mathcal{C}_g^d,\mathbb{Q})\ $$ following the convention $\Delta_1=0$. 
The Chern roots of $\Omega_d$, \begin{eqnarray} c_t(\Omega_d) & = & \prod_{i=1}^d \big(1+(\psi_i - \Delta_i)t\big) \label{fredd} \\ & = & \nonumber (1+\psi_1t) \cdot \big(1+(\psi_2-D_{12})t\big) \cdots \left(1+\Big(\psi_d -\sum_{i=1}^{d-1}D_{id}\Big)t\right) \end{eqnarray} are obtained by a simple induction, see \cite{Faber}. We may expand the right side of \eqref{fredd} fully. The resulting expression is a polynomial in the $d+ \binom{d}{2}$ variables $${\psi}_1,\ldots, {\psi}_d, -D_{12},-D_{13}, \ldots,- D_{d-1,d}\ .$$ The sign on the diagonal variables is chosen because of the self-intersection formula $$(-D_{ij})^2= \psi_i(-D_{ij})=\psi_j(-D_{ij})\ . $$ Let $M_r^d$ denote the coefficient in degree $r$, $$c_t( \Omega_d) =\sum_{r=0}^\infty M_r^d({\psi}_i,- D_{ij}) \ t^r.$$ \begin{Lemma} \label{gcd2} After setting all the variables to 1, $$\sum_{r=0}^\infty M_r^d({\psi}_i=1,-D_{ij}=1) \ t^r \ = \ \prod_{i=1}^d (1+it).$$ \end{Lemma} \begin{proof} The result follows immediately from the Chern roots \eqref{fredd}. \end{proof} Lemma \ref{gcd2} may be viewed as counting the number of terms in the expansion of the total Chern class $c_t(\Omega_d)$. \subsubsection{Connected counts}\label{cc12} A monomial in the diagonal variables \begin{equation} \label{gqq6} D_{12},D_{13}, \ldots,D_{d-1,d} \end{equation} determines a set partition of $\{1, \ldots, d\}$ by the diagonal associations. For example, the monomial $3D_{12}^2D_{13} D_{56}^3$ determines the set partition $$\{1,2,3\} \ \cup \ \{4\}\ \cup \ \{5,6\}$$ in the $d=6$ case. A monomial in the variables \eqref{gqq6} is {\em connected} if the corresponding set partition consists of a single part with $d$ elements. A monomial in the variables \begin{equation}\label{gffvv} {\psi}_1,\ldots, {\psi}_d, -D_{12},-D_{13}, \ldots, -D_{d-1,d}\ \end{equation} is connected if the corresponding monomial in the diagonal variables obtained by setting all ${\psi}_i=1$ is connected. 
Let $S^d_r$ be the summand of the evaluation $M^d_r({\psi}_i=1, -D_{ij}=1)$ consisting of the contributions of only the connected monomials. \begin{Lemma} \label{llgg} We have $$\sum_{d=1}^\infty \sum_{r=0}^d S_r^d\ t^r \frac{x^d}{d!} = \log\left( 1+\sum_{d=1}^\infty \prod_{i=1}^d (1+it) \frac{x^d}{d!} \right)\ . $$ \end{Lemma} \begin{proof} By a standard application of Wick's formula, the connected and disconnected counts are related by exponentiation, $$\exp\left(\sum_{d=1}^\infty \sum_{r=0}^d S_r^d \ t^r\frac{x^d}{d!}\right) = 1+ \sum_{d=1}^\infty \sum_{r=0}^\infty M_r^d(\psi_i=1, -D_{ij}=1) \ t^r\frac{x^d}{d!} \ .$$ The right side is then evaluated by Lemma \ref{gcd2}. \end{proof} Since a connected monomial in the variables \eqref{gffvv} must have at least $d-1$ factors of the variables $-D_{ij}$, we see $S^d_r =0$ if $r<d-1$. Using the self-intersection formulas, we obtain \begin{equation} \sum_{d=1}^ \infty \sum_{r=0}^d \pi^d_*\big(c_r(\Omega_d)\big)\ t^r\frac{x^d}{d!} = \exp\left(\sum_{d=1}^\infty \sum_{r=0}^d S_r^d (-1)^{d-1}\kappa_{r-d}\ t^r\frac{x^d}{d!}\right) \ . \end{equation} To account for the alternating factor $(-1)^{d-1}$ and the $\kappa$ subscript, we define the coefficients $C^d_r$ by $$\sum_{d=1}^\infty \sum_{r\geq -1} C_r^d\ t^r \frac{x^d}{d!} = \log\left( 1+\sum_{d=1}^\infty \prod_{i=1}^d (1+it) \frac{(-1)^d}{t^d} \frac{x^d}{d!} \right)\ . $$ The vanishing $S^d_{r<d-1}=0$ implies the vanishing $C^d_{r<-1}=0$.
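The vanishing $C^d_{r<-1}=0$ can be confirmed with a computer algebra system. A small sympy sketch (ours; the truncation order is ad hoc, and this is a sanity check, not a proof) expands the logarithm and verifies that each $x^d$ coefficient has at most a simple pole at $t=0$:

```python
import sympy as sp

t, x = sp.symbols('t x')
D = 4  # truncation order in x (illustrative only)

# 1 + sum_d prod_{i=1}^d (1+it) * (-1)^d / t^d * x^d / d!
F = 1 + sum(
    sp.prod([1 + i * t for i in range(1, d + 1)]) * sp.Integer(-1) ** d
    / t ** d * x ** d / sp.factorial(d)
    for d in range(1, D + 1)
)
L = sp.expand(sp.series(sp.log(F), x, 0, D + 1).removeO())

Cseries = {}
for d in range(1, D + 1):
    cd = sp.cancel(L.coeff(x, d) * sp.factorial(d))  # sum_r C^d_r t^r
    # at most a simple pole in t: t^2 * cd vanishes at t = 0 ...
    assert sp.limit(cd * t ** 2, t, 0) == 0
    Cseries[d] = cd

# ... and the pole is genuinely there, e.g. C^1_{-1} = -1:
assert sp.limit(Cseries[1] * t, t, 0) == -1
```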
The formula for the total Chern class of the Hodge bundle $\mathbb{E}$ on $\mathcal{M}_g$ follows immediately from Mumford's Grothendieck-Riemann-Roch calculation \cite{M}, $$c_t(\mathbb{E}) = \exp\left(\sum_{i\geq 1} \frac{B_{2i}}{2i(2i-1)} \kappa_{2i-1} t^{2i-1}\right)\ .$$ Putting the above results together yields the following formula: \begin{multline*} \sum_{d=1}^ \infty \sum_{r\geq 0} \pi^d_*\big(c_r(Q_{d-g})\big)\ t^{r-d}\frac{x^d}{d!} = \\ \exp\left(-\sum_{i\geq 1} \frac{B_{2i}}{2i(2i-1)} \kappa_{2i-1} t^{2i-1} -\sum_{d=1}^\infty \sum_{r\geq -1} C_r^d\ \kappa_r t^r \frac{x^d}{d!} \right) \ . \end{multline*} \subsubsection{Cutting} For $d>2g-2$ and $r>d-g$, we have the vanishing $$c_r(Q_{d-g})=0 \in A^r(\mathcal{C}^d_g, \mathbb{Q})\ .$$ Before pushing-forward via $\pi^d$, we will cut $c_r(Q_{d-g})$ with products of classes in $A^*(\mathcal{C}^d_g,\mathbb{Q})$. With the correct choice of cutting classes, we will obtain the relations of Theorem \ref{ttt}. Let $(a,b)$ be a pair of integers satisfying $a\geq 0$ and $b\geq 1$. We define the cutting class \begin{equation} \phi[a,b]= (-1)^{b-1}\sum_{|I|=b} \psi_I^a D_I \label{kddc3} \end{equation} where $I\subset \{1, \ldots, d\}$ is a subset of order $b$, $D_I\in A^{b-1}(\mathcal{C}^d_g,\mathbb{Q})$ is the class of the corresponding small diagonal, and $\psi_I$ is the cotangent line at the point indexed by $I$. The class $\psi_I$ is well-defined on the small diagonal indexed by $I$. The degree of $\phi[a,b]$ is $a+b-1$. The number of terms on the right side of \eqref{kddc3} is a degree $b$ polynomial in $d$, $$\binom{d}{b} = \frac{d^b}{b!} + \ldots +(-1)^{b-1}\frac{d}{b}\ $$ with no constant term. The sign $(-1)^{b-1}$ in definition \eqref{kddc3} is chosen to match the sign conventions of the Wick analysis in Section \ref{cc12}. For example, $$\phi[0,2] = \sum_{i< j} (-D_{ij})\ , \ \ \ \ \phi[0,3]= \sum_{i<j<k} (-D_{ij})(-D_{jk}) .$$ The {\em number of terms} means the evaluation at $\psi_I=1$ and $-D_{ij}=-1$.
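The counting polynomial $\binom{d}{b}$ quoted above is easily verified symbolically; a short sympy sketch (ours) checks the leading term $d^b/b!$, the lowest term $(-1)^{b-1}d/b$, and the absence of a constant term:

```python
import sympy as sp

d = sp.symbols('d')
for b in range(1, 9):
    # binomial(d, b) as an explicit polynomial in d
    p = sp.expand(sp.expand_func(sp.binomial(d, b)))
    assert p.coeff(d, b) == sp.Integer(1) / sp.factorial(b)    # leading term d^b / b!
    assert p.coeff(d, 1) == sp.Integer(-1) ** (b - 1) / b      # lowest term (-1)^{b-1} d / b
    assert p.coeff(d, 0) == 0                                  # no constant term
```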
A better choice of cutting class is obtained by the following observation. For every pair of integers $(i,j)$ with $i\geq 1$ and $j\geq i-1$, we can find a unique linear combination $$\Phi[i,j] = \sum_{a+b-1=j} \lambda_{a,b} \cdot \phi[a,b] , \ \ \ \lambda_{a,b}\in \mathbb{Q}$$ for which the evaluation of $\Phi[i,j]$ at $\psi_I=1$ and $-D_{ij}=-1$ is $d^i$. By definition, $\Phi[i,j]$ is of pure degree $j$. \subsubsection{Full Wick form} We repeat the Wick analysis of Section \ref{cc12} for the Chern class of $Q_{d-g}$ cut by the classes $\Phi[i,j]$ in order to write a formula for \begin{equation*} \sum_{d=1}^ \infty \sum_{r\geq 0} \pi^d_*\left( \exp\Big(\sum_{i,j} z_{i,j}t^j \Phi[i,j] \Big) \cdot c_r(Q_{d-g}) t^r\right)\ \frac{1}{t^d}\frac{x^d}{d!} \end{equation*} where the sum in the argument of the exponential is over all $i\geq 1$ and $j\geq i-1$. The variable set $\mathbf{z}$ introduced in Section \ref{wickf} appears here. Since $\Phi[i,j]$ yields $d^i$ after evaluation at $\psi_I=1$ and $-D_{ij}=-1$ and is of pure degree $j$, we conclude \begin{equation}\label{mssx} \sum_{d=1}^ \infty \sum_{r\geq 0} \pi^d_*\left( \exp\Big(\sum_{i,j} z_{i,j}t^j \Phi[i,j] \Big) \cdot c_r(Q_{d-g}) t^r\right)\ \frac{1}{t^d}\frac{x^d}{d!} = \exp(-\gamma^{\text{\tiny{{\sf F}}}})\ . \end{equation} Let $d>2g-2$. Since $c_s(Q_{d-g})=0$ for $s>d-g$, the $t^rx^d\mathbf{z}^\sigma$ coefficient of \eqref{mssx} vanishes if $$r+d - |\sigma| > d-g$$ which is equivalent to $r> -g + |\sigma|$. The proof of Theorem \ref{ttt} is complete. \qed \section{Stable quotients} \label{stq} \subsection{Stability} Our proof of the Faber-Zagier relations in $R^*(M_{g})$ will be obtained from the virtual geometry of the moduli space of stable quotients. We start by reviewing the basic definitions and results of \cite{MOP}. Let $C$ be a curve which is reduced and connected and has at worst nodal singularities. We require here only unpointed curves. See \cite{MOP} for the definitions in the pointed case. 
Let $q$ be a quotient of the rank $N$ trivial bundle on $C$, \begin{equation*} {\mathbb C}^N \otimes {\mathcal O}_C \stackrel{q}{\rightarrow} Q \rightarrow 0. \end{equation*} If the quotient sheaf $Q$ is locally free at the nodes and markings of $C$, then $q$ is a {\em quasi-stable quotient}. Quasi-stability of $q$ implies the associated kernel, \begin{equation*} 0 \rightarrow S \rightarrow {\mathbb C}^N \otimes {\mathcal O}_C \stackrel{q}{\rightarrow} Q \rightarrow 0, \end{equation*} is a locally free sheaf on $C$. Let $r$ denote the rank of $S$. Let $C$ be a curve equipped with a quasi-stable quotient $q$. The data $(C,q)$ determine a {\em stable quotient} if the $\mathbb{Q}$-line bundle \begin{equation}\label{aam} \omega_C \otimes (\wedge^{r} S^*)^{\otimes \epsilon} \end{equation} is ample on $C$ for every strictly positive $\epsilon\in \mathbb{Q}$. Quotient stability implies $2g-2 \geq 0$. Viewed in concrete terms, no amount of positivity of $S^*$ can stabilize a genus 0 component $$\PP^1\stackrel{\sim}{=}P \subset C$$ unless $P$ contains at least 2 nodes or markings. If $P$ contains exactly 2 nodes or markings, then $S^*$ {\em must} have positive degree. A stable quotient $(C,q)$ yields a rational map from the underlying curve $C$ to the Grassmannian $\mathbb{G}(r,N)$. We will only require the ${\mathbb{G}}(1,2)=\PP^1$ case for the proof of Theorem \ref{dddd}. \subsection{Isomorphism} Let $C$ be a curve. Two quasi-stable quotients \begin{equation}\label{fpp22} {\mathbb C}^N \otimes {\mathcal O}_C \stackrel{q}{\rightarrow} Q \rightarrow 0,\ \ \ {\mathbb C}^N \otimes {\mathcal O}_C \stackrel{q'}{\rightarrow} Q' \rightarrow 0 \end{equation} on $C$ are {\em strongly isomorphic} if the associated kernels $$S,S'\subset {\mathbb C}^N \otimes {\mathcal O}_C$$ are equal.
An {\em isomorphism} of quasi-stable quotients $$\phi:(C,q)\rightarrow (C',q') $$ is an isomorphism of curves $$\phi: C \stackrel{\sim}{\rightarrow} C'$$ such that the quotients $q$ and $\phi^*(q')$ are strongly isomorphic. Quasi-stable quotients \eqref{fpp22} on the same curve $C$ may be isomorphic without being strongly isomorphic. The following result is proven in \cite{MOP} by Quot scheme methods from the perspective of geometry relative to a divisor. \begin{Theorem} The moduli space of stable quotients $\overline{Q}_{g}({\mathbb{G}}(r,N),d)$ parameterizing the data $$(C,\ 0\rightarrow S \rightarrow {\mathbb C}^N\otimes {\mathcal O}_C \stackrel{q}{\rightarrow} Q \rightarrow 0),$$ with {\em rank}$(S)=r$ and {\em deg}$(S)=-d$, is a separated and proper Deligne-Mumford stack of finite type over ${\mathbb C}$. \end{Theorem} \subsection{Structures}\label{strrr} Over the moduli space of stable quotients, there is a universal curve \begin{equation}\label{ggtt} \pi: U \rightarrow \overline{Q}_{g}({\mathbb{G}}(r,N),d) \end{equation} with a universal quotient $$0 \rightarrow S_U \rightarrow {\mathbb C}^N \otimes {\mathcal O}_U \stackrel{q_U}{\rightarrow} Q_U \rightarrow 0.$$ The subsheaf $S_U$ is locally free on $U$ because of the stability condition. The moduli space $\overline{Q}_{g}({\mathbb{G}}(r,N),d)$ is equipped with two basic types of maps. If $2g-2>0$, then the stabilization of $C$ determines a map $$\nu:\overline{Q}_{g}({\mathbb{G}}(r,N),d) \rightarrow \overline{M}_{g}$$ by forgetting the quotient. The general linear group $\mathbf{GL}_N({\mathbb C})$ acts on $\overline{Q}_{g}({\mathbb{G}}(r,N),d)$ via the standard action on ${\mathbb C}^N \otimes {\mathcal O}_C$. The structures $\pi$, $q_U$, $\nu$ and the evaluation maps are all $\mathbf{GL}_N({\mathbb C})$-equivariant.
\subsection{Obstruction theory} The moduli of stable quotients maps to the Artin stack of domain curves $$\nu^A: \overline{Q}_{g}({\mathbb{G}}(r,N),d) \rightarrow {\mathcal{M}}_{g}.$$ The moduli of stable quotients with fixed underlying curve $[C] \in {\mathcal{M}}_{g}$ is simply an open set of the Quot scheme of $C$. The following result of \cite[Section 3.2]{MOP} is obtained from the standard deformation theory of the Quot scheme. \begin{Theorem}\label{htr} The deformation theory of the Quot scheme determines a 2-term obstruction theory on the moduli space $\overline{Q}_{g}({\mathbb{G}}(r,N),d)$ relative to $\nu^A$ given by $\mathrm{RHom}(S,Q)$. \end{Theorem} More concretely, for the stable quotient, \begin{equation*} 0 \rightarrow S \rightarrow {\mathbb C}^N \otimes {\mathcal O}_C \stackrel{q}{\rightarrow} Q \rightarrow 0, \end{equation*} the deformation and obstruction spaces relative to $\nu^A$ are $\text{Hom}(S,Q)$ and $\text{Ext}^1(S,Q)$ respectively. Since $S$ is locally free and $C$ is a curve, the higher obstructions $$\text{Ext}^{k}(S,Q)= H^{k}(C,S^*\otimes Q) = 0, \ \ \ k>1$$ vanish. An absolute 2-term obstruction theory on the moduli space $\overline{Q}_{g}({\mathbb{G}}(r,N),d)$ is obtained from Theorem \ref{htr} and the smoothness of $\mathcal{M}_{g}$, see \cite{Beh,BF,GP}. The analogue of Theorem \ref{htr} for the Quot scheme of a {\it fixed} nonsingular curve was observed in \cite{MO}. The $\mathbf{GL}_N({\mathbb C})$-action lifts to the obstruction theory, and the resulting virtual class is defined in $\mathbf{GL}_N({\mathbb C})$-equivariant cycle theory, $$[\overline{Q}_{g}({\mathbb{G}}(r,N),d)]^{vir} \in A_*^{\mathbf{GL}_N({\mathbb C})} (\overline{Q}_{g}({\mathbb{G}}(r,N),d)).$$ For the construction of the Faber-Zagier relations, we are mainly interested in the open stable quotient space $$\nu: Q_g(\mathbf{P}^1,d) \longrightarrow {\mathcal{M}}_g$$ which is simply the corresponding relative Hilbert scheme.
However, we will require the full stable quotient space $\overline{Q}_g(\mathbf{P}^1,d)$ to prove the Faber-Zagier relations can be completed over ${\mathcal{M}}_g$ with tautological boundary terms. \section{Stable quotients relations} \label{MOP} \subsection{First statement} \label{fsec} Our relations in the tautological ring $R^*({\mathcal{M}}_g)$ obtained from the moduli of stable quotients are based on the function \begin{equation}\label{jsjs} \Phi(t,x) = \sum_{d=0}^\infty \prod_{i=1}^d \frac{1}{1-it} \ \frac {(-1)^d}{d!} \frac{x^d}{t^{d}} \ . \end{equation} Define the coefficients $\widetilde{C}^d_{r}$ by the logarithm, $$\log(\Phi)= \sum_{d=1}^\infty \sum_{r=-1}^\infty \widetilde{C}^d_{r}\ t^r \frac{x^d}{d!} \ .$$ Again, by an application of Wick's formula (presented in Section \ref{pop}), the $t$ dependence of $\log(\Phi)$ has at most a simple pole. Let \begin{equation}\label{f444} \widetilde{\gamma}= \sum_{i\geq 1} \frac{B_{2i}}{2i(2i-1)} \kappa_{2i-1} t^{2i-1} + \sum_{d=1}^\infty \sum_{r=-1}^\infty \widetilde{C}^d_r \kappa_r t^r \frac{x^d}{d!}\ . \end{equation} Denote the $t^rx^d$ coefficient of $\exp(-\widetilde{\gamma})$ by $$\big[ \exp(-\widetilde{\gamma}) \big]_{t^rx^d} \in \mathbb{Q}[\kappa_{-1}, \kappa_0,\kappa_1, \kappa_2, \ldots] \ .$$ In fact, $[ \exp(-\widetilde{\gamma})]_{t^rx^d}$ is homogeneous of degree $r$ in the $\kappa$ classes. The first form of the tautological relations obtained from the moduli of stable quotients is given by the following result. \begin{Proposition} \label{vtw} In $R^r({\mathcal{M}}_g)$, the relation \begin{equation*} \big[ \exp(-\widetilde{\gamma}) \big]_{t^rx^d} =0 \end{equation*} holds when $g-2d-1< r$ and $g\equiv r+1 \hspace{-5pt} \mod 2$. \end{Proposition} For fixed $r$ and $d$, if Proposition \ref{vtw} applies in genus $g$, then Proposition \ref{vtw} applies in genera $h=g-2\delta$ for all natural numbers $\delta\in \mathbb{N}$. The genus shifting mod 2 property is present also in the Faber-Zagier relations.
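Proposition \ref{vtw} itself requires the geometry of Section \ref{pop}, but the formal package -- the coefficients $\widetilde{C}^d_r$, the simple pole in $t$, and the homogeneity claim -- can be checked by machine. A sympy sketch (ours; the truncation orders and symbol names are ad hoc, and this is a sanity check of the formal structure only) computes $\big[\exp(-\widetilde{\gamma})\big]_{t^2x^1}$ and verifies that it is homogeneous of degree $2$ in the $\kappa$ classes, with $\kappa_j$ given degree $j$:

```python
import sympy as sp

t, x, s = sp.symbols('t x s')
R = 4   # truncation order in t (ad hoc)
kappa = {j: sp.Symbol('kappa_%d' % j) for j in range(-1, R + 1)}

# Phi as in (jsjs), truncated; the d = 1 coefficients Ctilde^1_r of log(Phi)
Phi = 1 + sum(
    sp.prod([sp.Integer(1) / (1 - i * t) for i in range(1, d + 1)])
    * sp.Integer(-1) ** d / t ** d * x ** d / sp.factorial(d)
    for d in range(1, 3)
)
logPhi = sp.expand(sp.series(sp.log(Phi), x, 0, 3).removeO())
c1 = sp.expand(sp.series(sp.cancel(logPhi.coeff(x, 1)), t, 0, R + 2).removeO())

# gamma-tilde, truncated: Bernoulli part plus the d = 1 part
g0 = sum(sp.bernoulli(2 * i) / (2 * i * (2 * i - 1)) * kappa[2 * i - 1] * t ** (2 * i - 1)
         for i in (1, 2))
g1 = sum(c1.coeff(t, r) * kappa[r] * t ** r for r in range(-1, R + 1))

# [exp(-gamma-tilde)]_{x^1} = -g1 * exp(-g0); extract the t^2 coefficient
P = sp.expand(sp.series(-g1 * sp.exp(-g0), t, 0, 3).removeO()).coeff(t, 2)

# homogeneity of degree r = 2 under kappa_j -> s^j kappa_j
scaled = sp.expand(P.subs({kappa[j]: s ** j * kappa[j] for j in range(-1, R + 1)}))
assert sp.expand(scaled - s ** 2 * P) == 0
```

Note the $\kappa_{-1}$ terms contribute negative degree, so monomials such as $\kappa_{-1}\kappa_3$ still have total degree $2$.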
\subsection{$K$-theory class $\mathbb{F}_d$} For genus $g \geq 2$, we consider as before $$\pi^d: \mathcal{C}^d_g \rightarrow {\mathcal{M}}_g\ , $$ the $d$-fold product of the universal curve over ${\mathcal{M}}_g$. Given an element $$[C, {p}_1, \ldots,{p}_d] \in \mathcal{C}^d_g \ , $$ there is a canonically associated stable quotient \begin{equation}\label{jwq2} 0 \rightarrow {\mathcal O}_C(-\sum_{j=1}^d {p}_j) \rightarrow {\mathcal O}_C \rightarrow Q \rightarrow 0. \end{equation} Consider the universal curve $$\epsilon: U \rightarrow {\mathcal{C}}^d_{g}$$ with universal quotient sequence $$0 \rightarrow S_U \rightarrow {\mathcal O}_U \rightarrow Q_U \rightarrow 0$$ obtained from \eqref{jwq2}. Let $$\mathbb{F}_d= -R\epsilon_*(S^*_U) \in K(\mathcal{C}^d_g)$$ be the class in $K$-theory. For example, $$\mathbb{F}_0 = \mathbb{E}^*-{\mathbb C}$$ is the dual of the Hodge bundle minus a rank 1 trivial bundle. By Riemann-Roch, the rank of $\mathbb{F}_d$ is $${r}_g(d)=g-d-1.$$ However, $\mathbb{F}_d$ is not always represented by a bundle. By the derivation of \cite[Section 4.6]{MOP}, \begin{equation}\label{laloo1} \mathbb{F}_d = \mathbb{E}^*- \mathbb{B}_d - {\mathbb C}, \end{equation} where $\mathbb{B}_d$ has fiber $H^0(C,{\mathcal O}_C(\sum_{j=1}^d {p}_j)|_{\sum_{j=1}^d {p}_j})$ over $[C, {p}_1,\ldots,{p}_d].$ The Chern classes of $\mathbb F_d$ can be easily computed. Recall the divisor $D_{i,j}$ where the markings $p_i$ and $p_j$ coincide.
Set $$\Delta_i=D_{1,i}+\ldots+D_{i-1,i},$$ with the convention $\Delta_1=0.$ Over $[C, p_1, \ldots, p_d],$ the virtual bundle $\mathbb F_d$ is the formal difference $$H^1(\mathcal O_C( p_1+\ldots+ p_d)) -H^0(\mathcal O_C(p_1+\ldots+p_d)).$$ Taking the cohomology of the exact sequence $$0\to \mathcal O_C( p_1+\ldots+ p_{d-1})\to \mathcal O_C(p_1+\ldots+p_d)\to \mathcal O_C(p_1+\ldots+p_d)|_{\widehat p_d}\to 0,$$ we find $$c(\mathbb F_d)= \frac{c(\mathbb F_{d-1})}{1+\Delta_d- \psi_d}.$$ Inductively, we obtain \begin{equation*}c(\mathbb F_d)= \frac{c(\mathbb E^*)}{(1+\Delta_1-{\psi}_1)\cdots (1+\Delta_d-{\psi}_d)}.\end{equation*} Equivalently, we have \begin{equation}\label{laloo2}c(-\mathbb B_d)= \frac{1}{(1+\Delta_1-{\psi}_1)\cdots (1+\Delta_d-{\psi}_d)}.\end{equation} \subsection{Proof of Proposition \ref{vtw}} \label{pop} Consider the proper morphism $$\nu: Q_{g}(\PP^1,d) \rightarrow {\mathcal{M}}_g.$$ Certainly the class \begin{equation}\label{p236} \nu_*\left( 0^c \cap [Q_g(\PP^1,d)]^{vir} \right) \in A^*({\mathcal{M}}_g,\mathbb{Q}), \end{equation} where $0$ is the first Chern class of the trivial bundle, vanishes if $c>0$. Proposition \ref{vtw} is proven by calculating \eqref{p236} by localization. We will find Proposition \ref{vtw} is a subset of the much richer family of relations of Theorem \ref{mmnn} of Section \ref{exrel}. Let the torus ${\mathbb C}^*$ act on a 2-dimensional vector space $V\stackrel{\sim}{=}{\mathbb C}^2$ with diagonal weights $[0,1]$. The ${\mathbb C}^*$-action lifts canonically to $\PP(V)$ and $Q_{g}(\PP(V),d)$. We lift the ${\mathbb C}^*$-action to a rank 1 trivial bundle on $Q_g(\PP(V), d)$ by specifying fiber weight $1$. The choices determine a ${\mathbb C}^*$-lift of the class $$ 0^c \cap [Q_g(\PP(V),d)]^{vir}\in A_{2d+2g-2-c}( Q_g(\PP(V),d),\mathbb{Q}).$$ The push-forward \eqref{p236} is determined by the virtual localization formula \cite{GP}. There are only two ${\mathbb C}^*$-fixed loci.
The first corresponds to a vertex lying over $0\in \PP(V)$. The locus is isomorphic to $$\mathcal{C}^d_g\ /\ \mathbb{S}_d$$ and the associated subsheaf \eqref{jwq2} lies in the first factor of $V \otimes {\mathcal O}_C$ when considered as a stable quotient in the moduli space $Q_g(\PP(V),d)$. Similarly, the second fixed locus corresponds to a vertex lying over $\infty\in \PP(V)$. The localization contribution of the first locus to \eqref{p236} is $$\frac{1}{d!}\pi^d_* \left( c_{g-d-1+c}(\mathbb{F}_d)\right)\ \ \ \ \text{where} \ \ \ \ \pi^d: \mathcal{C}^d_g \rightarrow {\mathcal{M}}_g\ . $$ Let $c_-(\mathbb{F}_d)$ denote the total Chern class of $\mathbb{F}_d$ evaluated at $t=-1$. The localization contribution of the second locus is $$\frac{(-1)^{g-d-1}}{d!}\pi^d_*\Big[ c_{-}(\mathbb{F}_d)\Big]^{g-d-1+c}$$ where $[\gamma]^k$ is the part of $\gamma$ in $A^k( \mathcal{C}^d_g,\mathbb{Q})$. Both localization contributions are found by straightforward expansion of the vertex formulas of \cite[Section 7.4.2]{MOP}. Summing the contributions yields \begin{multline*} \pi^d_*\Big( c_{g-d-1+c}(\mathbb{F}_d) + (-1)^{g-d-1} \Big[ c_{-}(\mathbb{F}_d) \Big] ^{g-d-1+c} \Big) = 0 \ \ \ \text{in }\ R^*({\mathcal{M}}_g)\ \end{multline*} for $c>0$. We obtain the following result. \begin{Lemma} \label{htht} For $c>0$ and $c\equiv 0 \mod 2$, \begin{equation*} \pi^d_*\Big( c_{g-d-1+c}(\mathbb{F}_d) \Big) = 0 \ \ \ \text{in }\ R^*({\mathcal{M}}_g)\ . \end{equation*} \end{Lemma} For $c>0$, the relation of Lemma \ref{htht} lies in $R^r({\mathcal{M}}_g)$ where $$r=g-2d-1+c\ .$$ Moreover, the relation is trivial unless \begin{equation} \label{mss3} g-d-1 \equiv g-d-1+c = r+d \ \mod 2\ . \end{equation} We may expand the right side of \eqref{laloo2} fully. The resulting expression is a polynomial in the $d+ \binom{d}{2}$ variables
$${\psi}_1,\ldots, {\psi}_d, -D_{12},-D_{13}, \ldots,- D_{d-1,d}\ .$$ Let $\widetilde{M}_r^d$ denote the coefficient in degree $r$, $$c_t( -\mathbb{B}_d) =\sum_{r=0}^\infty \widetilde{M}_r^d({\psi}_i,- D_{ij}) \ t^r.$$ Let $\widetilde{S}^d_r$ be the summand of the evaluation $\widetilde{M}^d_r({\psi}_i=1, -D_{ij}=1)$ consisting of the contributions of only the connected monomials. \begin{Lemma} \label{llggg} We have $$\sum_{d=1}^\infty \sum_{r=0}^\infty \widetilde{S}_r^d\ t^r \frac{x^d}{d!} = \log\left( 1+\sum_{d=1}^\infty \prod_{i=1}^d \frac{1}{1-it} \ \frac{x^d}{d!}\right)\ . $$ \end{Lemma} \begin{proof} As before, by Wick's formula, the connected and disconnected counts are related by exponentiation, $$\exp\left(\sum_{d=1}^\infty \sum_{r=0}^\infty \widetilde{S}_r^d \ t^r\frac{x^d}{d!}\right) = 1+ \sum_{d=1}^\infty \sum_{r=0}^\infty \widetilde{M}_r^d({\psi}_i=1, -D_{ij}=1) \ t^r\frac{x^d}{d!} \ .$$ The right side is then evaluated using \eqref{laloo2}. \end{proof} Since a connected monomial in the variables $\psi_i$ and $-D_{ij}$ must have at least $d-1$ factors of the variables $-D_{ij}$, we see $\widetilde{S}^d_r =0$ if $r<d-1$. Using the self-intersection formulas, we obtain \begin{equation} \sum_{d=1}^ \infty \sum_{r\geq 0} \pi^d_*\big(c_{r}(-\mathbb{B}_d)\big)\ t^r\frac{x^d}{d!} = \exp\left(\sum_{d=1}^\infty \sum_{r=0}^\infty \widetilde{S}_r^d (-1)^{d-1}\kappa_{r-d}\ t^r\frac{x^d}{d!}\right) \ . \end{equation} To account for the alternating factor $(-1)^{d-1}$ and the $\kappa$ subscript, we define the coefficients $\widetilde{C}^d_r$ by $$\sum_{d=1}^\infty \sum_{r\geq -1} \widetilde{C}_r^d\ t^r \frac{x^d}{d!} = \log\left( 1+\sum_{d=1}^\infty \prod_{i=1}^d \frac{1}{1-it}\ \frac{(-1)^d}{t^d} \frac{x^d}{d!} \right)\ . $$ The vanishing $\widetilde{S}^d_{r<d-1}=0$ implies the vanishing $\widetilde{C}^d_{r<-1}=0$.
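Lemma \ref{llggg} can be verified by brute force for small $d$: expand each factor $1/(1-(\psi_i-\Delta_i)t)$ into ordered words in the letters $\psi_i$ and $-D_{j,i}$, keep only the words whose diagonal letters connect $\{1,\ldots,d\}$, and compare with the logarithm. A Python/sympy sketch (ours, with small ad hoc cutoffs):

```python
import sympy as sp
from itertools import product

t, x = sp.symbols('t x')
DMAX, RMAX = 4, 4   # small cutoffs (a sanity check, not a proof)

def connected_count(d, r):
    # Degree-r monomials of prod_i 1/(1 - (psi_i - Delta_i) t) whose diagonal
    # factors connect {1,...,d}.  Slot i contributes an ordered word of length
    # k_i in i letters: 0 encodes psi_i, j > 0 encodes -D_{j,i}.
    total = 0
    for ks in product(range(r + 1), repeat=d):
        if sum(ks) != r:
            continue
        slots = [list(product(range(i), repeat=k))
                 for i, k in zip(range(1, d + 1), ks)]
        for pick in product(*slots):
            parent = list(range(d + 1))
            def find(a):
                while parent[a] != a:
                    parent[a] = parent[parent[a]]
                    a = parent[a]
                return a
            for i, word in enumerate(pick, start=1):
                for j in word:
                    if j > 0:
                        parent[find(j)] = find(i)
            if len({find(k) for k in range(1, d + 1)}) == 1:
                total += 1
    return total

H = 1 + sum(sp.prod([sp.Integer(1) / (1 - i * t) for i in range(1, d + 1)])
            * x ** d / sp.factorial(d) for d in range(1, DMAX + 1))
L = sp.expand(sp.series(sp.log(H), x, 0, DMAX + 1).removeO())

for d in range(1, DMAX + 1):
    cd = sp.series(sp.cancel(L.coeff(x, d)) * sp.factorial(d), t, 0, RMAX + 1)
    cd = sp.expand(cd.removeO())
    for r in range(RMAX + 1):
        assert cd.coeff(t, r) == connected_count(d, r)
```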
Again using Mumford's Grothendieck-Riemann-Roch calculation \cite{M}, $$c_t(\mathbb{E}^*) = \exp\left(-\sum_{i\geq 1} \frac{B_{2i}}{2i(2i-1)} \kappa_{2i-1} t^{2i-1}\right)\ .$$ Putting the above results together yields the following formula: \begin{multline*} \sum_{d=1}^ \infty \sum_{r\geq 0} \pi^d_*\big(c_r(\mathbb{F}_d)\big)\ t^{r-d}\frac{x^d}{d!} = \\ \exp\left(-\sum_{i\geq 1} \frac{B_{2i}}{2i(2i-1)} \kappa_{2i-1} t^{2i-1} -\sum_{d=1}^\infty \sum_{r\geq -1} \widetilde{C}_r^d \kappa_r t^r \frac{x^d}{d!} \right) \ . \end{multline*} The restrictions on $g$, $d$, and $r$ in the statement of Proposition \ref{vtw} are obtained from \eqref{mss3}. \qed \subsection{Extended relations} \label{exrel} The universal curve $$\epsilon:U \rightarrow Q_{g}(\PP^1,d)$$ carries the basic divisor classes $${s} = c_1(S_U^*), \ \ \ \ \omega= c_1(\omega_\epsilon)$$ obtained from the universal subsheaf $S_U$ of the moduli of stable quotients and the $\epsilon$-relative dualizing sheaf. Following \cite[Proposition 5]{MOP}, we can obtain a much larger set of relations in the tautological ring of ${\mathcal{M}}_g$ by including factors of $\epsilon_*(s^{a_i}\omega^{b_i})$ in the integrand: \begin{equation*} \nu_*\left(\prod_{i=1}^n\epsilon_*(s^{a_i} \omega^{b_i}) \cdot 0^c \cap [Q_g(\PP^1,d)]^{vir} \right)= 0\ \ \text{in} \ A^*({\mathcal{M}}_g,\mathbb{Q}) \ \end{equation*} when $c>0$. We will study the associated relations where the $a_i$ are always $1$. The $b_i$ then form the parts of a partition $\sigma$. To state the relations we obtain, we start by extending the function $\widetilde{\gamma}$ of Section \ref{fsec}, \begin{eqnarray*} \gamma^{\text{\tiny{{\sf SQ}}}} &=& \sum_{i\geq 1} \frac{B_{2i}}{2i(2i-1)} \kappa_{2i-1} t^{2i-1}\\ & & + \sum_{\sigma} \sum_{d=1}^\infty \sum_{r=-1}^\infty \widetilde{C}^{d}_r \kappa_{r+|\sigma|}\ t^r \frac{x^d}{d!} \ \frac{d^{\ell(\sigma)} t^{|\sigma|} {\mathbf{p}}^{\sigma}} {|\text{Aut}(\sigma)|} \ .
\end{eqnarray*} Let $\overline{\gamma}^{\, \text{\tiny{{\sf SQ}}}}$ be defined by a similar formula, \begin{eqnarray*} \overline{\gamma}^{\, \text{\tiny{{\sf SQ}}}} &=& \sum_{i\geq 1} \frac{B_{2i}}{2i(2i-1)} \kappa_{2i-1} (-t)^{2i-1}\\ & & + \sum_{\sigma} \sum_{d=1}^\infty \sum_{r=-1}^\infty \widetilde{C}^{d}_r \kappa_{r+|\sigma|}\ (-t)^r \frac{x^d}{d!} \ \frac{d^{\ell(\sigma)} t^{|\sigma|} {\mathbf{p}}^{\sigma}} {|\text{Aut}(\sigma)|} \ . \end{eqnarray*} The sign of $t$ in $t^{|\sigma|}$ does not change in $\overline{\gamma}^{\, \text{\tiny{{\sf SQ}}}}$. The $\kappa_{-1}$ terms which appear will later be set to 0. The full system of relations is obtained from the coefficients of the functions \vspace{-10pt} $$\exp(-\gamma^{\text{\tiny{{\sf SQ}}}}), \ \ \ \ \exp(-\sum_{r=0}^\infty \kappa_r t^r p_{r+1})\cdot \exp( -\overline{\gamma}^{\, \text{\tiny{{\sf SQ}}}})\ . $$ \begin{Theorem} \label{mmnn} In $R^r({\mathcal{M}}_g)$, the relation $$\Big[ \exp(-\gamma^{\text{\tiny{{\sf SQ}}}}) \Big]_{t^rx^d\mathbf{p}^\sigma} = (-1)^g\Big[ \exp(-\sum_{r=0}^\infty \kappa_r t^r p_{r+1})\cdot \exp( -\overline{\gamma}^{\, \text{\tiny{{\sf SQ}}}}) \Big]_{t^rx^d\mathbf{p}^\sigma}$$ holds when $g-2d-1+|\sigma| < r$. \end{Theorem} Again, we see the genus shifting mod 2 property. If the relation holds in genus $g$, then the {\em same} relation holds in genera $h=g-2\delta$ for all natural numbers $\delta\in \mathbb{N}$. In case $\sigma=\emptyset$, Theorem \ref{mmnn} specializes to the relation \begin{eqnarray*} \Big[ \exp(-\widetilde{\gamma}(t,x)) \Big]_{t^rx^d} & = & (-1)^g\Big[ \exp( -\widetilde{\gamma}(-t,x)) \Big]_{t^rx^d} \\ & = & (-1)^{g+r} \Big[ \exp( -\widetilde{\gamma}(t,x)) \Big]_{t^rx^d}\ , \end{eqnarray*} nontrivial only if $g\equiv r+1$ mod 2. If the mod 2 condition holds, then we obtain the relations of Proposition \ref{vtw}. Consider the case $\sigma=(1)$.
The left side of the relation is then $$ \Big[ \exp(-\widetilde{\gamma}(t,x))\cdot \left(-\sum_{d=1}^\infty \sum_{s=-1}^\infty \widetilde{C}^d_{s}\ \kappa_{s+1} t^{s+1} \frac{dx^d}{d!}\right) \Big]_{t^rx^d} \ . $$ The right side is $$(-1)^g\Big[ \exp(-\widetilde{\gamma}(-t,x))\cdot\left(-\kappa_0t^0+ \sum_{d=1}^\infty \sum_{s=-1}^\infty \widetilde{C}^d_{s}\ \kappa_{s+1} (-t)^{s+1} \frac{dx^d}{d!} \right) \Big]_{t^rx^d} \ . $$ If $g\equiv r+1$ mod 2, then the large terms cancel and we obtain $$-\kappa_0 \cdot \Big[ \exp(-\widetilde{\gamma}(t,x)) \Big]_{t^rx^d} =0 \ . $$ Since $\kappa_0=2g-2$ and $$(g-2d-1+1 < r) \ \ \implies\ \ (g-2d-1 < r),$$ we recover most (but not all) of the $\sigma=\emptyset$ equations. If $g\equiv r$ mod 2, then the resulting equation is \begin{equation*} \Big[ \exp(-\widetilde{\gamma}(t,x))\cdot \left(\kappa_0-2 \sum_{d=1}^\infty \sum_{s=-1}^\infty \widetilde{C}^d_{s}\ \kappa_{s+1} t^{s+1} \frac{dx^d}{d!}\right) \Big]_{t^rx^d}=0 \end{equation*} when $g-2d<r$. \subsection{Proof of Theorem \ref{mmnn}} \subsubsection{Partitions, differential operators, and logs.} \label{pdol} We will write partitions $\sigma$ as $(1^{n_1}2^{n_2}3^{n_3}\ldots)$ with $$\ell(\sigma)= \sum_{i} n_i \ \ \ \ \text{and} \ \ \ \ |\sigma|= \sum_i in_i\ .$$ The empty partition $\emptyset$ corresponding to $(1^{0}2^{0}3^{0}\ldots)$ is permitted. In all cases, we have $$|{\text{Aut}}(\sigma)|= n_1!n_2!n_3! \cdots \ .$$ In the infinite set of variables $\{ p_1, p_2, p_3, \ldots \}$, let $$\Phi^{\mathbf{p}}(t,x) = \sum_\sigma \sum_{d=0}^\infty \prod_{i=1}^d \frac{1}{1-it} \ \frac {(-1)^d}{d!} \frac{x^d}{t^{d}}\ \frac{d^{\ell(\sigma)} t^{|\sigma|} {\mathbf{p}}^{\sigma}} {|\text{Aut}(\sigma)|} \ ,$$ where the first sum is over all partitions $\sigma$. The summand corresponding to the empty partition equals $\Phi(t,x)$ defined in \eqref{jsjs}. 
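The partition-indexed coefficients in $\Phi^{\mathbf{p}}$ are just the expansion coefficients of an exponential: for fixed $d$, $$\sum_\sigma \frac{d^{\ell(\sigma)} t^{|\sigma|} {\mathbf{p}}^{\sigma}}{|\text{Aut}(\sigma)|} = \exp\Big(d\sum_{i} p_i t^i\Big)\ .$$ A quick sympy check of this identity (ours; sample value of $d$, parts at most $3$):

```python
import sympy as sp
from itertools import product

t = sp.symbols('t')
p1, p2, p3 = sp.symbols('p1 p2 p3')
d = 5   # an arbitrary sample value

arg = d * (p1 * t + p2 * t ** 2 + p3 * t ** 3)
E = sp.expand(sum(arg ** k / sp.factorial(k) for k in range(10)))  # truncated exp

for n1, n2, n3 in product(range(3), repeat=3):
    sigma_len, sigma_size = n1 + n2 + n3, n1 + 2 * n2 + 3 * n3
    aut = sp.factorial(n1) * sp.factorial(n2) * sp.factorial(n3)
    # coefficient of p1^n1 p2^n2 p3^n3 equals d^len * t^size / |Aut(sigma)|
    c = E.coeff(p1, n1).coeff(p2, n2).coeff(p3, n3)
    assert sp.expand(c - sp.Integer(d) ** sigma_len * t ** sigma_size / aut) == 0
```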
The function $\Phi^{\mathbf{p}}$ is easily obtained from $\Phi$, $$\Phi^{\mathbf{p}}(t,x) = \exp\left( \sum_{i=1}^\infty p_it^i x\frac{d}{dx} \right) \ \Phi(t,x)\ . $$ Let $D$ denote the differential operator $$D= \sum_{i=1}^\infty p_it^i x\frac{d}{dx}\ .$$ Expanding the exponential of $D$, we obtain \begin{eqnarray} \Phi^{\mathbf{p}}& = & \Phi + D\Phi + \frac{1}{2} D^2\Phi+ \frac{1}{6} D^3 \Phi+\ldots \label{vfrr} \\ & =& \nonumber \Phi \left(1+\frac{D \Phi}{\Phi} + \frac{1}{2} \frac{D^2 \Phi}{\Phi} + \frac{1}{6}\frac{D^3\Phi}{\Phi}+ \ldots\right) \ . \end{eqnarray} Let $\gamma^* = \log (\Phi)$ be the logarithm, $$D\gamma^* = \frac{D\Phi}{\Phi}\ .$$ After applying the logarithm to \eqref{vfrr}, we see \begin{eqnarray*} \log(\Phi^{\mathbf{p}}) & =& \gamma^* +\log\left( 1+ D \gamma^* + \frac{1}{2}( D^2\gamma^* + (D\gamma^*)^2)+ \ ... \right) \\ & = & \gamma^* + D\gamma^* + \frac{1}{2} D^2 \gamma^* + \ldots \end{eqnarray*} where the dots stand for a universal expression in the $D^k\gamma^*$. In fact, a remarkable simplification occurs, $$\log(\Phi^{\mathbf{p}}) = \exp\left( \sum_{i=1}^\infty p_it^i x\frac{d}{dx} \right) \ \gamma^*\ .$$ The result follows from a general identity. \begin{Proposition} \label{vwwv3} If $f$ is a function of $x$, then $$\log\left(\exp\left(\lambda x\frac{d}{dx}\right) \ f \right) = \exp\left(\lambda x\frac{d}{dx}\right) \ \log(f)\ .$$ \end{Proposition} \begin{proof} A simple computation for monomials in $x$ shows $$\exp\left(\lambda x\frac{d}{dx}\right) \ x^k = (e^\lambda x)^k\ .$$ Hence, since the operator $\exp\left(\lambda x\frac{d}{dx}\right)$ is linear, $$\exp\left(\lambda x\frac{d}{dx}\right) \ f(x) = f(e^\lambda x)\ .$$ The Proposition follows immediately. \end{proof} We apply Proposition \ref{vwwv3} to $\log(\Phi^{\mathbf{p}})$.
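The operator identity underlying Proposition \ref{vwwv3} is also easy to confirm symbolically. A sympy sketch (ours) compares the truncated exponential of the operator $x\frac{d}{dx}$ with the substitution $x \mapsto e^\lambda x$ on a sample polynomial:

```python
import sympy as sp

x, lam = sp.symbols('x lambda')
f = 1 + 3 * x + 5 * x ** 2 + 2 * x ** 4   # arbitrary test polynomial
K = 8                                     # truncation order in lambda

lhs = sp.Integer(0)
Dk = f
for k in range(K):
    lhs += lam ** k / sp.factorial(k) * Dk
    Dk = x * sp.diff(Dk, x)               # one more application of x d/dx
lhs = sp.expand(lhs)

# f(e^lambda x), expanded in lambda to the same order
rhs = sp.series(f.subs(x, sp.exp(lam) * x), lam, 0, K).removeO()
assert sp.expand(lhs - rhs) == 0
```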
The coefficients of the logarithm may be written as \begin{eqnarray*} \log(\Phi^{\mathbf{p}}) & = & \sum_\sigma \sum_{d=1}^\infty \sum_{r=-1}^\infty \widetilde{C}^d_{r}(\sigma) \ t^r \frac{x^d}{d!} {{\mathbf{p}}^{\sigma}} \\ & = & \sum_{d=1}^\infty \sum_{r=-1}^\infty \widetilde{C}^d_{r}\ t^r \frac{x^d}{d!} \exp\left( \sum_{i=1}^\infty dp_i t^i\right)\\ & = & \sum_{\sigma} \sum_{d=1}^\infty \sum_{r=-1}^\infty \widetilde{C}^d_{r}\ t^r \frac{x^d}{d!} \ \frac{d^{\ell(\sigma)} t^{|\sigma|} {\mathbf{p}}^{\sigma}} {|\text{Aut}(\sigma)|} \ . \end{eqnarray*} We have expressed the coefficients $\widetilde{C}^d_{r}(\sigma)$ of $\log(\Phi^{\mathbf{p}})$ solely in terms of the coefficients $\widetilde{C}^d_{r}$ of $\log(\Phi)$. \subsubsection{Cutting classes} Let $\theta_i\in A^1(U,\mathbb{Q})$ be the class of the $i^{th}$ section of the universal curve \begin{equation}\label{gbbt} \epsilon: U \rightarrow \mathcal{C}_{g}^d \end{equation} The class $s=c_1(S_U^*)$ on the universal curve over $Q_g(\mathbf{P}^1,d)$ restricted to the ${\mathbb C}^*$-fixed locus $\mathcal{C}^d_g / \mathbb{S}_d$ and pulled-back to \eqref{gbbt} yields $$s=\theta_1 + \ldots +\theta_d\ \in A^1(U,\mathbb{Q}).$$ We calculate \begin{equation}\label{cutt3} \epsilon_*(s\ \omega^b) = {\psi}^b_1 +\ldots +{\psi}^b_d \ \ \in A^b(\mathcal{C}_g^d,\mathbb{Q})\ . \end{equation} \subsubsection{Wick form} We repeat the Wick analysis of Section \ref{pop} for the vanishings \begin{equation*} \nu_*\left(\prod_{i=1}^\ell\epsilon_*(s \omega^{b_i}) \cdot 0^c \cap [Q_g(\PP^1,d)]^{vir} \right) = 0 \ \ {\text {in}}\ A^*({\mathcal{M}}_g,\mathbb{Q}) \end{equation*} when $c>0$. We start by writing a formula for \begin{equation*} \sum_{d=1}^ \infty \sum_{r\geq 0} \pi^d_*\left( \exp\Big(\sum_{i=1}^\infty p_it^i \epsilon_*(s\omega^i) \Big) \cdot c_r(\mathbb{F}_d) t^r\right)\ \frac{1}{t^d}\frac{x^d}{d!} \ . 
\end{equation*} Applying the Wick formula to equation \eqref{cutt3} for the cutting classes, we see \begin{equation}\label{mssx2} \sum_{d=1}^ \infty \sum_{r\geq 0} \pi^d_*\left( \exp\Big(\sum_{i=1}^\infty p_it^i \epsilon_*(s\omega^i) \Big) \cdot c_r(\mathbb{F}_d) t^r\right)\ \frac{1}{t^d}\frac{x^d}{d!} = \exp(-\widetilde{\gamma}^{\, \text{\tiny{{\sf SQ}}}}) \end{equation} where $\widetilde{\gamma}^{\, \text{\tiny{{\sf SQ}}}}$ is defined by \begin{equation*} \widetilde{\gamma}^{\, \text{\tiny{{\sf SQ}}}} = \sum_{i\geq 1} \frac{B_{2i}}{2i(2i-1)} \kappa_{2i-1} t^{2i-1} + \sum_{\sigma} \sum_{d=1}^\infty \sum_{r=-1}^\infty \widetilde{C}^{d}_r(\sigma) \kappa_{r}\ t^r \frac{x^d}{d!} \ {{\mathbf{p}}^{\sigma}} \ . \end{equation*} We follow here the notation of Section \ref{pdol}, $$\Phi^{\mathbf{p}}(t,x) = \sum_\sigma \sum_{d=0}^\infty \prod_{i=1}^d \frac{1}{1-it} \ \frac {(-1)^d}{d!} \frac{x^d}{t^{d}}\ \frac{d^{\ell(\sigma)} t^{|\sigma|} {\mathbf{p}}^{\sigma}} {|\text{Aut}(\sigma)|} \ ,$$ $$ \log(\Phi^{\mathbf{p}}) = \sum_\sigma \sum_{d=1}^\infty \sum_{r=-1}^\infty \widetilde{C}^d_{r}(\sigma) \ t^r \frac{x^d}{d!} {{\mathbf{p}}^{\sigma}} \ .$$ In the Wick analysis, the class $\epsilon_*(s \omega^b)$ simply acts as $dt^b$. Using the expression for the coefficients $\widetilde{C}^d_r(\sigma)$ in terms of $\widetilde{C}^d_r$ derived in Section \ref{pdol}, we obtain the following result from \eqref{mssx2}. \begin{Proposition} \label{mmnn3} We have \begin{equation*} \sum_{d=1}^ \infty \sum_{r\geq 0} \pi^d_*\left( \exp\Big(\sum_{i=1}^\infty p_it^i \epsilon_*(s\omega^i) \Big) \cdot c_r(\mathbb{F}_d) t^r\right)\ \frac{1}{t^d}\frac{x^d}{d!} = \exp(-{\gamma}^{\text{\tiny{{\sf SQ}}}}) \ .
\end{equation*} \end{Proposition} \subsubsection{Geometric construction} We apply ${\mathbb C}^*$-localization on $Q_g(\mathbf{P}^1,d)$ to the geometric vanishing \begin{equation} \label{frad} \nu_*\left(\prod_{i=1}^\ell\epsilon_*(s \omega^{b_i}) \cdot 0^c \cap [Q_g(\PP^1,d)]^{vir} \right)= 0\ \ \text{in} \ A^*({\mathcal{M}}_g,\mathbb{Q}) \ \end{equation} when $c>0$. The result is the relation \begin{multline} \label{gred} \pi_*\Big( \prod_{i=1}^\ell \epsilon_*(s \omega^{b_i})\cdot c_{g-d-1+c}(\mathbb{F}_d) + \\ (-1)^{g-d-1} \Big[ \prod_{i=1}^\ell \epsilon_*\left((s-1)\omega^{b_i}\right) \cdot c_{-}(\mathbb{F}_d) \Big] ^{g-d-1+\sum_ib_i+c} \Big) = 0 \end{multline} in $R^*({\mathcal{M}}_g)$. After applying the Wick formula of Proposition \ref{mmnn3}, we immediately obtain Theorem \ref{mmnn}. The first summand in \eqref{gred} yields the left side $$\Big[ \exp(-\gamma^{\text{\tiny{{\sf SQ}}}}) \Big]_{t^rx^d\mathbf{p}^\sigma} $$ of the relation of Theorem \ref{mmnn}. The second summand produces the right side \begin{equation}\label{kedd} (-1)^g\Big[ \exp(-\sum_{r=0}^\infty \kappa_r t^r p_{r+1})\cdot \exp( -\overline{\gamma}^{\, \text{\tiny{{\sf SQ}}}}) \Big]_{t^rx^d\mathbf{p}^\sigma}\ . \end{equation} Recall the localization of the virtual class over $\infty \in \mathbf{P}^1$ is $$\frac{(-1)^{g-d-1}}{d!}\pi^d_*\Big[ c_{-}(\mathbb{F}_d)\Big]^{g-d-1+c}\ .$$ Of the sign prefactor $(-1)^{g-d-1}$, \begin{enumerate} \item[$\bullet$] $(-1)^{-1}$ is used to move the term to the right side, \item[$\bullet$] $(-1)^{-d}$ is absorbed in the $(-t)$ of the definition of $\overline{\gamma}^{\, \text{\tiny{{\sf SQ}}}}$, \item[$\bullet$] $(-1)^g$ remains in \eqref{kedd}. \end{enumerate} The $-1$ of $s-1$ produces the factor $\exp(-\sum_{r=0}^\infty \kappa_r t^r p_{r+1})$. Finally, a simple dimension calculation (remembering $c>0$) implies the validity of the relation when $g-2d-1+|\sigma| < r$.
\qed \section{Analysis of the relations} \label{LLL} \subsection{Expanded form} \label{exf} Let $\sigma=(1^{a_1}2^{a_2}3^{a_3} \ldots)$ be a partition of length $\ell(\sigma)$ and size $|\sigma|$. We can directly write the corresponding tautological relation in $R^r({\mathcal{M}}_g)$ obtained from Theorem \ref{mmnn}. A {\em subpartition} $\sigma'\subset \sigma$ is obtained by selecting a nontrivial subset of the parts of $\sigma$. A {\em division} of $\sigma$ is a disjoint union \begin{equation}\label{rrgg} \sigma = \sigma^{(1)} \cup \sigma^{(2)} \cup \sigma^{(3)}\ldots \end{equation} of subpartitions which exhausts $\sigma$. The subpartitions in \eqref{rrgg} are unordered. Let $\mathcal{S}(\sigma)$ be the set of divisions of $\sigma$. For example, \begin{eqnarray*} \mathcal{S}(1^12^1) &=& \{ \ (1^12^1),\ (1^1) \cup (2^1)\ \}\ , \\ \mathcal{S}(1^3) &=& \{\ (1^3), \ (1^2)\cup (1^1), \ (1^1)\cup (1^1) \cup (1^1) \ \}\ . \end{eqnarray*} We will use the notation $\sigma^\bullet$ to denote a division of $\sigma$ with subpartitions $\sigma^{(i)}$. Let $$m(\sigma^\bullet) = \frac{1}{|\text{Aut}(\sigma^\bullet)|} \frac{|\text{Aut}(\sigma)|}{\prod_{i=1}^{\ell(\sigma^\bullet)} |\text{Aut}(\sigma^{(i)})|}.$$ Here, $\text{Aut}(\sigma^\bullet)$ is the group permuting equal subpartitions. The factor $m(\sigma^\bullet)$ may be interpreted as counting the number of different ways the disjoint union can be made. To write explicitly the $\mathbf{p}^\sigma$ coefficient of $\exp(-\gamma^{\text{\tiny{{\sf SQ}}}})$, we introduce the functions $$F_{n,m}(t,x) = -\sum_{d=1}^\infty \sum_{s=-1}^\infty \widetilde{C}^d_{s}\ \kappa_{s+m} t^{s+m} \frac{d^n x^d}{d!}$$ for $n,m \geq 1$. Then, \begin{multline*} |\text{Aut}(\sigma)| \cdot \Big[ \exp(-\gamma^{\text{\tiny{{\sf SQ}}}}) \Big]_{t^rx^d\mathbf{p}^\sigma} =\\ \Big[ \exp(-\widetilde{\gamma}(t,x)) \cdot \left( \sum_{\sigma^\bullet\in \mathcal{S}(\sigma)} m(\sigma^\bullet) \prod_{i=1}^{\ell(\sigma^\bullet)} F_{\ell(\sigma^{(i)}), |\sigma^{(i)}|} \right) \Big]_{t^rx^d} \ . 
\end{multline*} Let $\sigma^{*,\bullet}$ be a division of $\sigma$ with a marked subpartition, \begin{equation}\label{rrggg} \sigma = \sigma^* \cup \sigma^{(1)} \cup \sigma^{(2)} \cup \sigma^{(3)}\ldots, \end{equation} labelled by the superscript $*$. The marked subpartition is permitted to be empty. Let $\mathcal{S}^*(\sigma)$ denote the set of marked divisions of $\sigma$. Let $$m(\sigma^{*,\bullet}) = \frac{1}{|\text{Aut}(\sigma^{*,\bullet})|} \frac{|\text{Aut}(\sigma)|}{|\text{Aut}(\sigma^*)| \prod_{i=1}^{\ell(\sigma^{*,\bullet})} |\text{Aut}(\sigma^{(i)})|}.$$ The length $\ell(\sigma^{*,\bullet})$ is the number of unmarked subpartitions. Then, $|\text{Aut}(\sigma)|$ times the right side of Theorem \ref{mmnn} may be written as \begin{multline*} (-1)^{g+|\sigma|} |\text{Aut}(\sigma)| \cdot \Big[ \exp(-\widetilde{\gamma}(-t,x)) \cdot\\ \left( \sum_{\sigma^{*,\bullet}\in \mathcal{S}^*(\sigma)} m(\sigma^{*,\bullet}) \prod_{j=1}^{\ell(\sigma^*)} \kappa_{\sigma^*_{j}-1} (-t)^{\sigma^*_j-1} \prod_{i=1}^{\ell(\sigma^{*,\bullet})} F_{\ell(\sigma^{(i)}), |\sigma^{(i)}|} (-t,x)\right) \Big]_{t^rx^d} \end{multline*} To write Theorem \ref{mmnn} in the simplest form, the following definition using the Kronecker $\delta$ is useful, $$m^\pm(\sigma^{*,\bullet}) = (1\pm\delta_{0,|\sigma^*|}) \cdot m(\sigma^{*,\bullet}).$$ There are two cases. If $g\equiv r +|\sigma|$ mod 2, then Theorem \ref{mmnn} is equivalent to the vanishing of \begin{equation*} {\small{|\text{Aut}(\sigma)|}}\Big[ \exp(-\widetilde{\gamma}) \cdot \left( \sum_{\sigma^{*,\bullet}\in \mathcal{S}^*(\sigma)} m^-(\sigma^{*,\bullet}) \prod_{j=1}^{\ell(\sigma^*)} \kappa_{\sigma^*_{j}-1} t^{\sigma^*_j-1} \prod_{i=1}^{\ell(\sigma^{*,\bullet})} F_{\ell(\sigma^{(i)}), |\sigma^{(i)}|} \right) \Big]_{t^rx^d} . 
\end{equation*} If $g\equiv r +|\sigma|+1$ mod 2, then Theorem \ref{mmnn} is equivalent to the vanishing of \begin{equation*} {\small{|\text{Aut}(\sigma)|}} \Big[ \exp(-\widetilde{\gamma}) \cdot \left( \sum_{\sigma^{*,\bullet}\in \mathcal{S}^*(\sigma)} m^+(\sigma^{*,\bullet}) \prod_{j=1}^{\ell(\sigma^*)} \kappa_{\sigma^*_{j}-1} t^{\sigma^*_j-1} \prod_{i=1}^{\ell(\sigma^{*,\bullet})} F_{\ell(\sigma^{(i)}), |\sigma^{(i)}|} \right) \Big]_{t^rx^d} . \end{equation*} In either case, the relations are valid in the ring $R^*({\mathcal{M}}_g)$ only if the condition $g-2d-1+|\sigma| < r$ holds. We denote the above relation corresponding to $g$, $r$, $d$, and $\sigma$ (and depending upon the parity of $g-r-|\sigma|$) by $$\mathsf{R}(g,r,d,\sigma)= 0\ .$$ The $|\text{Aut}(\sigma)|$ prefactor is included in $\mathsf{R}(g,r,d,\sigma)$, but is only relevant when $\sigma$ has repeated parts. In case of repeated parts, the automorphism-scaled normalization is more convenient. \subsection{Further examples} If $\sigma=(k)$ has a single part, then the two cases of Theorem \ref{mmnn} are the following. If $g\equiv r+k$ mod 2, we have $$\Big[ \exp(-\widetilde{\gamma})\cdot \kappa_{k-1}t^{k-1} \Big]_{t^r x^d}=0\ $$ which is a consequence of the $\sigma=\emptyset$ case. If $g\equiv r+k+1$ mod 2, we have $$\Big[ \exp(-\widetilde{\gamma})\cdot \left(\kappa_{k-1}t^{k-1} + 2 F_{1,k}\right) \Big]_{t^r x^d}=0\ .$$ If $\sigma=(k_1k_2)$ has two distinct parts, then the two cases of Theorem \ref{mmnn} are as follows. If $g\equiv r+k_1+k_2$ mod 2, we have \begin{multline*}\Big[ \exp(-\widetilde{\gamma})\cdot \big( \kappa_{k_1-1}\kappa_{k_2-1}t^{k_1+k_2-2}\\ +\kappa_{k_1-1}t^{k_1-1} F_{1,k_2} +\kappa_{k_2-1} t^{k_2-1} F_{1,k_1}\big) \Big]_{t^r x^d}=0\ . 
\end{multline*} If $g\equiv r+k_1+k_2+1$ mod 2, we have \begin{multline*}\Big[ \exp(-\widetilde{\gamma})\cdot \big( \kappa_{k_1-1}\kappa_{k_2-1}t^{k_1+k_2-2} +\kappa_{k_1-1} t^{k_1-1}F_{1,k_2} \\+ \kappa_{k_2-1} t^{k_2-1}F_{1,k_1} + 2 F_{2,k_1+k_2} + 2 F_{1,k_1} F_{1,k_2} \big) \Big]_{t^r x^d}=0\ . \end{multline*} In fact, the $g\equiv r+k_1+k_2$ mod 2 equation above is not new. The genus $g$ and codimension $r_1=r-k_2+1$ case of partition $(k_1)$ yields $$\Big[ \exp(-\widetilde{\gamma})\cdot \left(\kappa_{k_1-1}t^{k_1-1} + 2 F_{1,k_1}\right) \Big]_{t^{r_1} x^d}=0\ .$$ After multiplication with $\kappa_{k_2-1}t^{k_2-1}$, we obtain $$\Big[ \exp(-\widetilde{\gamma}) \cdot \left(\kappa_{k_1-1}\kappa_{k_2-1}t^{k_1+k_2-2} + 2 \kappa_{k_2-1}t^{k_2-1} F_{1,k_1}\right) \Big]_{t^{r} x^d}=0\ .$$ Summing with the corresponding equation with $k_1$ and $k_2$ interchanged yields the above $g\equiv r+k_1+k_2$ mod 2 case. \label{vxvx} \subsection{Expanded form revisited} Consider the partition $\sigma=(k_1k_2\cdots k_\ell)$ with distinct parts. Relation $\mathsf{R}(g,r,d,\sigma)$, in the $g\equiv r+|\sigma|$ mod 2 case, is the vanishing of \begin{multline*} \Big[ \exp(-\widetilde{\gamma}) \cdot \left( \sum_{\sigma^{*,\bullet}\in \mathcal{S}^*(\sigma)} (1-\delta_{0,|\sigma^*|}) \prod_{j=1}^{\ell(\sigma^*)} \kappa_{\sigma^*_{j}-1} t^{\sigma^*_j-1} \prod_{i=1}^{\ell(\sigma^{*,\bullet})} F_{\ell(\sigma^{(i)}), |\sigma^{(i)}|} \right) \Big]_{t^rx^d} \end{multline*} since all the factors $m(\sigma^{*,\bullet})$ are 1. In the $g\equiv r+|\sigma|+1$ mod 2 case, $\mathsf{R}(g,r,d,\sigma)$ is the vanishing of \begin{multline*} \Big[ \exp(-\widetilde{\gamma}) \cdot \left( \sum_{\sigma^{*,\bullet}\in \mathcal{S}^*(\sigma)} (1+\delta_{0,|\sigma^*|}) \prod_{j=1}^{\ell(\sigma^*)} \kappa_{\sigma^*_{j}-1} t^{\sigma^*_j-1} \prod_{i=1}^{\ell(\sigma^{*,\bullet})} F_{\ell(\sigma^{(i)}), |\sigma^{(i)}|} \right) \Big]_{t^rx^d} \end{multline*} for the same reason. 
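The multiplicity factors $m(\sigma^\bullet)$ of Section \ref{exf} matter precisely when $\sigma$ has repeated parts. The following Python enumeration (a verification aside only, not part of the argument; all function names are ours) generates all set partitions of the parts of $\sigma$, groups them into divisions, and confirms that $m(\sigma^\bullet)$ counts the set partitions realizing each division:

```python
from math import factorial
from collections import Counter

def set_partitions(items):
    """Generate all set partitions of a list, as lists of blocks."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in set_partitions(rest):
        yield [[first]] + part                      # first in its own block
        for i in range(len(part)):                  # or join an existing block
            yield part[:i] + [part[i] + [first]] + part[i + 1:]

def aut(parts):
    """|Aut| of a partition given as a tuple of parts (or of subpartitions)."""
    n = 1
    for mult in Counter(parts).values():
        n *= factorial(mult)
    return n

def divisions_with_counts(sigma):
    """Map each division of sigma (a multiset of subpartitions) to the
    number of set partitions of the parts realizing it."""
    counts = Counter()
    for blocks in set_partitions(list(range(len(sigma)))):
        division = tuple(sorted(tuple(sorted(sigma[i] for i in b)) for b in blocks))
        counts[division] += 1
    return counts

def m(sigma, division):
    """m(sigma^bullet) = |Aut(sigma)| / (|Aut(sigma^bullet)| prod |Aut(sigma^(i))|)."""
    den = aut(division)
    for sub in division:
        den *= aut(sub)
    return aut(tuple(sigma)) // den

# m counts the set partitions realizing each division:
for sigma in [(1, 2), (1, 1, 1), (1, 1, 2, 3)]:
    counts = divisions_with_counts(sigma)
    assert all(counts[d] == m(sigma, d) for d in counts)
```

For $\sigma=(1^3)$, for example, the divisions $(1^3)$, $(1^2)\cup(1^1)$, and $(1^1)\cup(1^1)\cup(1^1)$ are realized $1$, $3$, and $1$ times respectively, matching the automorphism formula.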
If $\sigma$ has repeated parts, the relation $\mathsf{R}(g,r,d,\sigma)$ is obtained by viewing the parts as distinct and specializing the indices afterwards. For example, the two cases for $\sigma=(k^2)$ are as follows. If $g\equiv r+2k$ mod 2, we have \begin{equation*}\Big[ \exp(-\widetilde{\gamma})\cdot \big( \kappa_{k-1}\kappa_{k-1}t^{2k-2} +2\kappa_{k-1}t^{k-1} F_{1,k}\big) \Big]_{t^r x^d}=0\ . \end{equation*} If $g\equiv r+2k+1$ mod 2, we have \begin{multline*}\Big[ \exp(-\widetilde{\gamma})\cdot \big( \kappa_{k-1}\kappa_{k-1}t^{2k-2} +2\kappa_{k-1} t^{k-1}F_{1,k} \\ + 2 F_{2,2k} + 2 F_{1,k} F_{1,k} \big) \Big]_{t^r x^d}=0\ . \end{multline*} The factors occur via repetition of terms in the formulas for distinct parts. \begin{Proposition} The relation $\mathsf{R}(g,r,d,\sigma)$ in the $g\equiv r+|\sigma|$ mod 2 case is a consequence of the relations in $\mathsf{R}(g,r',d,\sigma')$ where $g\equiv r'+|\sigma'|+1$ mod 2 and $\sigma'\subset \sigma$ is a strictly smaller partition. \end{Proposition} \begin{proof} The strategy follows the example of the phenomenon already discussed in Section \ref{vxvx}. If $g\equiv r+|\sigma|$ mod 2, then for every subpartition $\tau\subset \sigma$ of odd length, we have $$g \equiv r- |\tau|+ \ell(\tau) + |\sigma/\tau| +1 \ \mod \ 2\ $$ where $\sigma/\tau$ is the complement of $\tau$. The relation $$\prod_i \kappa_{\tau_i-1} \cdot \mathsf{R}(g,r- |\tau|+ \ell(\tau), d, \sigma/\tau)$$ is of codimension $r$. Let $g\equiv r+|\sigma|$ mod 2, and let $\sigma$ have distinct parts. 
The formula \begin{multline}\label{ggee} \mathsf{R}(g,r,d,\sigma)= \\ \sum_{\tau\subset \sigma} \left(\frac{2^{\ell(\tau)+2}-2}{\ell(\tau)+1}\right) B_{\ell(\tau)+1} \cdot \prod_i \kappa_{\tau_i-1} \cdot \mathsf{R}\Big(g,r- |\tau|+ \ell(\tau), d, \sigma/\tau\Big) \end{multline} follows easily by grouping like terms and the Bernoulli identity \begin{equation} \label{bpp} \sum_{k\geq 1} \binom{n}{2k-1} \left(\frac{2^{2k+1} -2}{2k}\right) B_{2k} = 1 - \left(\frac{2^{n+2} -2}{n+1}\right) B_{n+1}\ \end{equation} for $n>0$. The sum in \eqref{ggee} is over all subpartitions $\tau\subset \sigma$ of odd length. The proof of the Bernoulli identity \eqref{bpp} is straightforward. Let $$ a_i = \left(\frac{2^{i+2} -2}{i+1}\right) B_{i+1}\ , \ \ \ A(x)=\sum_{i=0}^\infty a_i \frac{x^i}{i!} .$$ Using the definition of the Bernoulli numbers as $$\frac{x}{e^x -1} = \sum_{i=0}^\infty B_i \frac{x^i}{i!}\ ,$$ we see $$A(x) = \frac{2}{x} \sum_{r=0}^\infty (2^r-1) B_r \frac{x^r}{r!} = \frac{2}{x}\left( \frac{2x}{e^{2x}-1} - \frac{x}{e^x-1}\right) = -\left(\frac{2}{1+e^x}\right)\ .$$ The identity \eqref{bpp} follows from the series relation $$e^x A(x)= - A(x)-2\ ,$$ since $a_i$ vanishes for even $i\geq 2$ and $a_0=-1$. Formula \eqref{ggee} is valid for $\mathsf{R}(g,r,d,\sigma)$ even when $\sigma$ has repeated parts: the sum should be interpreted as running over all odd subsets $\tau\subset \sigma$ (viewing the parts of $\sigma$ as distinct). \end{proof}
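The series relation $e^x A(x) = -A(x) - 2$, and hence the Bernoulli identity \eqref{bpp}, can be confirmed numerically in exact arithmetic. The Python check below (a verification aside only, not part of the argument) verifies the coefficientwise form of the series relation:

```python
from fractions import Fraction
from math import comb

def bernoulli(N):
    # B_0, ..., B_N with B_1 = -1/2, from sum_{j=0}^{m} C(m+1, j) B_j = 0
    B = [Fraction(1)]
    for m in range(1, N + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / Fraction(m + 1))
    return B

N = 20
B = bernoulli(N + 1)
a = [Fraction(2**(i + 2) - 2, i + 1) * B[i + 1] for i in range(N + 1)]

# e^x A(x) = -A(x) - 2 reads, coefficientwise for n > 0:
#     sum_{i=0}^{n} C(n, i) a_i = -a_n
for n in range(1, N + 1):
    assert sum(comb(n, i) * a[i] for i in range(n + 1)) == -a[n]

# a_i = 0 for even i >= 2 (odd Bernoulli numbers vanish) and a_0 = -1,
# so restricting the sum to odd indices i = 2k - 1 gives identity (bpp).
assert a[0] == -1 and all(a[i] == 0 for i in range(2, N + 1, 2))
```

Moving the $a_0 = -1$ term to the right side of the full-sum identity produces exactly the constant appearing in \eqref{bpp}.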
Let $g\equiv r + |\sigma|+1$ mod 2, and let $\mathsf{S}(g,r,d,\sigma)$ denote the $\kappa$ polynomial $$|\text{Aut}(\sigma)|\Big[ \exp\left(-\widetilde{\gamma}(t,x) +\sum_{\sigma \neq \emptyset} \Big( F_{\ell(\sigma),|\sigma|}+ \frac{\delta_{\ell(\sigma),1}}{2} \kappa_{|\sigma|-1}\Big) \frac{\mathbf{p}^\sigma}{|\text{Aut}(\sigma)|} \right) \Big]_{t^{r}x^d\mathbf{p}^\sigma}\ .$$ We can write $\mathsf{S}(g,r,d,\sigma)$ in terms of our previous relations $\mathsf{R}(g,r',d,\sigma')$ satisfying $g\equiv r'+|\sigma'|+1$ mod 2 and $\sigma'\subset \sigma$: If $g\equiv r+|\sigma|+1$ mod 2, then for every subpartition $\tau\subset \sigma$ of even length (including the case $\tau=\emptyset$), we have $$g \equiv r- |\tau|+ \ell(\tau) + |\sigma/\tau| +1 \ \mod \ 2\ $$ where $\sigma/\tau$ is the complement of $\tau$. The relation $$\prod_i \kappa_{\tau_i-1} \cdot \mathsf{R}(g,r- |\tau|+ \ell(\tau), d, \sigma/\tau)$$ is of codimension $r$. In order to express $\mathsf{S}$ in terms of $\mathsf{R}$, we define $z_i\in \mathbb{Q}$ by $$\frac{2}{e^x + e^{-x}} = \sum_{i=0}^\infty z_i \frac{x^i}{i!}\ .$$ Let $g\equiv r+|\sigma|+1$ mod 2, and let $\sigma$ have distinct parts. The formula \begin{equation}\label{ggeee} \mathsf{S}(g,r,d,\sigma)= \\ \sum_{\tau\subset \sigma} \frac{z_{\ell(\tau)}}{2^{\ell(\tau)+1}} \cdot \prod_i \kappa_{\tau_i-1} \cdot \mathsf{R}\Big(g,r- |\tau|+ \ell(\tau), d, \sigma/\tau\Big) \end{equation} follows again by grouping like terms and the combinatorial identity \begin{equation} \label{bppp} \sum_{i\geq 0} \binom{n}{i} \frac{z_i}{2^{i+1}} = - \frac{z_n}{2^{n+1}} + \frac{1}{2^n} \end{equation} for $n>0$. The sum in \eqref{ggeee} is over all subpartitions $\tau\subset \sigma$ of even length. As before, the identity \eqref{bppp} is straightforward to prove. 
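Before turning to the proof, the identity \eqref{bppp} can be checked numerically (a verification aside only, not part of the argument). The sketch below computes the $z_i$ by inverting the power series of $\cosh$ in exact arithmetic and verifies the coefficientwise consequence of the series relation $e^x Z(x) = e^{x/2} - Z(x)$:

```python
from fractions import Fraction
from math import comb, factorial

N = 20
# cosh(x) = sum_k x^{2k}/(2k)!  and  2/(e^x + e^{-x}) = 1/cosh(x)
cosh = [Fraction(1, factorial(n)) if n % 2 == 0 else Fraction(0)
        for n in range(N + 1)]
sech = [Fraction(1)]
for n in range(1, N + 1):                       # invert the series of cosh
    sech.append(-sum(cosh[k] * sech[n - k] for k in range(1, n + 1)))
z = [sech[i] * factorial(i) for i in range(N + 1)]  # 2/(e^x+e^{-x}) = sum z_i x^i/i!

# e^x Z(x) = e^{x/2} - Z(x), with Z(x) = sum_i (z_i/2^{i+1}) x^i/i!, reads
# coefficientwise for n > 0:
#     sum_{i=0}^{n} C(n, i) z_i / 2^{i+1} = 1/2^n - z_n / 2^{n+1}
for n in range(1, N + 1):
    lhs = sum(comb(n, i) * z[i] / Fraction(2**(i + 1)) for i in range(n + 1))
    assert lhs == Fraction(1, 2**n) - z[n] / Fraction(2**(n + 1))
```

The first values are $z_0=1$, $z_1=0$, $z_2=-1$, $z_3=0$, $z_4=5$, since $2/(e^x+e^{-x})$ is the generating series of the (signed) Euler secant numbers.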
We see $$ \ \ \ Z(x)=\sum_{i=0}^\infty \frac{z_i}{2^{i+1}} \frac{x^i}{i!} = \frac{1}{e^{x/2} + e^{-x/2}}\ .$$ The identity \eqref{bppp} follows from the series relation $$e^x Z(x)= e^{x/2} -Z(x) .$$ Formula \eqref{ggeee} is valid for $\mathsf{S}(g,r,d,\sigma)$ even when $\sigma$ has repeated parts: the sum should be interpreted as running over all even subsets $\tau\subset \sigma$ (viewing the parts of $\sigma$ as distinct). We have proved the following result. \begin{Proposition} In $R^r({\mathcal{M}}_g)$, the relation \label{better} \begin{equation*} \Big[ \exp\left(-\widetilde{\gamma}(t,x) +\sum_{\sigma \neq \emptyset} \Big( F_{\ell(\sigma),|\sigma|}+ \frac{\delta_{\ell(\sigma),1}}{2} \kappa_{|\sigma|-1}\Big) \frac{\mathbf{p}^\sigma}{|\text{\em Aut}(\sigma)|} \right) \Big]_{t^{r}x^d\mathbf{p}^\sigma} = 0 \end{equation*} holds when $g-2d-1 +|\sigma| < r$ and $g\equiv r + |\sigma|+1$ mod 2. \end{Proposition} \section{Transformation} \label{trans} \subsection{Differential equations} The function $\Phi$ satisfies a basic differential equation obtained from the series definition: $$\frac{d}{dx} (\Phi- tx \frac{d}{dx} \Phi) = -\frac{1}{t} \Phi \ .$$ After expanding and dividing by $\Phi$, we find $$- tx \frac{\Phi_{xx}}{\Phi} - t \frac{\Phi_x}{\Phi} + \frac{\Phi_x}{\Phi} = -\frac{1}{t} \ $$ which can be written as \begin{equation} \label{diffe} -t^2x \gamma^*_{xx} = t^2 x (\gamma^*_x)^2 + t^2 \gamma^*_x -t \gamma^*_x -1 \ \end{equation} where, as before, $\gamma^*=\log(\Phi)$. Equation \eqref{diffe} has been studied by Ionel in {\em Relations in the tautological ring} \cite{Ion}. We present here results of hers which will be useful for us. To kill the pole and match the required constant term, we will consider the function \begin{equation}\label{f4} \Gamma=-t\left(\sum_{i\geq 1} \frac{B_{2i}}{2i(2i-1)}t^{2i-1} + \gamma^* \right)\ . 
\end{equation} The differential equation \eqref{diffe} becomes $$tx \Gamma_{xx} = x (\Gamma_x)^2 +(1-t) \Gamma_x -1 \ .$$ The differential equation is easily seen to uniquely determine $\Gamma$ once the initial conditions $$\Gamma(t,0) = - \sum_{i\geq 1} \frac{B_{2i}}{2i(2i-1)} t^{2i}$$ are specified. By Ionel's first result, $$\Gamma_x = \frac{-1+\sqrt{1+4x}}{2x} + \frac{t}{1+4x} + \sum_{k=1}^\infty \sum_{j=0}^k t^{k+1} q_{k,j}(-x)^j(1+4x)^{-j-\frac{k}{2}-1}\ $$ where the positive integers $q_{k,j}$ (defined to vanish unless $k\geq j \geq 0$) are determined by the recursion $$q_{k,j} = (2k+4j-2)q_{k-1,j-1} + (j+1)q_{k-1,j} + \sum_{m=0}^{k-1} \sum_{l=0}^{j-1} q_{m,l} q_{k-1-m,j-1-l}\ $$ from the initial value $q_{0,0}=1$. Ionel's second result is obtained by integrating $\Gamma_x$ with respect to $x$. She finds $$\Gamma = \Gamma(0,x) + \frac{t}{4} \log(1+4x) -\sum_{k=1}^\infty \sum_{j=0}^k t^{k+1} c_{k,j} (-x)^j (1+4x)^{-j-\frac{k}{2}}\ $$ where the coefficients $c_{k,j}$ are determined by $$q_{k,j}=(2k+4j)c_{k,j} +(j+1) c_{k,j+1}$$ for $k\geq 1$ and $k\geq j\geq 0$. While the derivation of the formula for $\Gamma_x$ is straightforward, the formula for $\Gamma$ is quite subtle as the initial conditions (given by the Bernoulli numbers) are used to show the vanishing of constants of integration. Said differently, the recursions for $q_{k,j}$ and $c_{k,j}$ must be shown to imply the formula $$c_{k,0} = \frac{B_{k+1}}{k(k+1)}\ .$$ A third result of Ionel's is the determination of the extremal $c_{k,k}$, $$\sum_{k=1}^\infty c_{k,k} z^k = \log\left( \sum_{k=0}^\infty \frac{(6k)!}{(2k)!(3k)!} \left(\frac{z}{72}\right)^k \right)\ .$$ The formula for $\Gamma$ becomes simpler after the following very natural change of variables, \begin{equation}\label{varch} u= \frac{t}{\sqrt{1+4x}} \ \ \ \text{and} \ \ \ y= \frac{-x}{1+4x} \ . 
\end{equation} The change of variables defines a new function $$\widehat{\Gamma}(u,y) = \Gamma(t,x) \ .$$ The formula for $\Gamma$ implies \begin{equation}\label{f44} \frac{1}{t} \widehat{\Gamma}(u,y) = \frac{1}{t} \widehat{\Gamma}(0,y) -\frac{1}{4}\log(1+4y) -\sum_{k=1}^\infty \sum_{j=0}^k c_{k,j}u^k y^j \ . \end{equation} Ionel's fourth result relates coefficients of series after the change of variables \eqref{varch}. Given any series $$P(t,x) \in \mathbb{Q}[[t,x]],$$ let $\widehat{P}(u,y)$ be the series obtained from the change of variables \eqref{varch}. Ionel proves the coefficient relation $$\big[ P(t,x) \big]_{t^r x^d} = (-1)^d \big[ (1+4y)^{\frac{r+2d-2}{2}} \cdot \widehat{P}(u,y) \big]_{u^r y^d}\ .$$ \subsection{Analysis of the relations of Proposition \ref{vtw}} We now study in detail the simple relations of Proposition \ref{vtw}, \begin{equation*} \big[ \exp(-\widetilde{\gamma}) \big]_{t^rx^d} =0 \ \in R^r({\mathcal{M}}_g) \end{equation*} when $g-2d-1< r$ and $g\equiv r+1 \hspace{-5pt} \mod 2$. Let $$\widehat{\gamma}(u,y) = \widetilde{\gamma}(t,x)$$ be obtained from the variable change \eqref{varch}. Equations \eqref{f444}, \eqref{f4}, and \eqref{f44} together imply \begin{equation*} \widehat{\gamma}(u,y) = \frac{\kappa_0}{4}\log(1+4y)+ \sum_{k=1}^\infty \sum_{j=0}^k \kappa_k c_{k,j}u^k y^j \ \label{kakq} \end{equation*} modulo $\kappa_{-1}$ terms which we set to $0$. Applying Ionel's coefficient result, \begin{eqnarray*} \big[ \exp(-\widetilde{\gamma}) \big]_{t^rx^d}& = & \big[ (1+4y)^{\frac{r+2d-2}{2}} \cdot \exp(-\widehat{\gamma}) \big]_{u^r y^d} \\ & = & \left[ (1+4y)^{\frac{r+2d-2}{2}-\frac{\kappa_0}{4}} \cdot \exp(- \sum_{k=1}^\infty \sum_{j=0}^k \kappa_k c_{k,j}u^k y^j ) \right]_{u^r y^d} \\ & = & \left[ (1+4y)^{\frac{r-g+2d-1}{2}} \cdot \exp(- \sum_{k=1}^\infty \sum_{j=0}^k \kappa_k c_{k,j}u^k y^j ) \right]_{u^r y^d} \ . \end{eqnarray*} In the last line, the substitution $\kappa_0=2g-2$ has been made. Consider first the exponent of $1+4y$. 
By the assumptions on $g$ and $r$ in Proposition \ref{vtw}, $$\frac{r-g+2d-1}{2}\geq 0$$ and the fraction is integral. Hence, the $y$ degree of the prefactor $$(1+4y)^{\frac{r-g+2d-1}{2}}$$ is exactly $\frac{r-g+2d-1}{2}$. The $y$ degree of the exponential factor is bounded from above by the $u$ degree. We conclude $$\left[ (1+4y)^{\frac{r-g+2d-1}{2}} \cdot \exp(- \sum_{k=1}^\infty \sum_{j=0}^k \kappa_k c_{k,j}u^k y^j ) \right]_{u^r y^d} =0$$ is the {\em trivial} relation unless $$r \geq d - {\frac{r-g+2d-1}{2}} = -\frac{r}{2} +\frac{g+1}{2} \ .$$ Rewriting the inequality, we obtain $3r \geq g+1$ which is equivalent to $r > \lfloor \frac{g}{3} \rfloor$. The conclusion is in agreement with the proven freeness of $R^*({\mathcal{M}}_g)$ up to (and including) degree $\lfloor \frac{g}{3} \rfloor$. A similar connection between Proposition \ref{vtw} and Ionel's relations in \cite{Ion} has also been found by Shengmao Zhu \cite{zhu}. \subsection{Analysis of the relations of Theorem \ref{mmnn}} For the relations of Theorem \ref{mmnn}, we will require additional notation. To start, let $$\gamma^c(u,y) = \sum_{k=1}^\infty \sum_{j=0}^k \kappa_k c_{k,j}u^k y^j \ .$$By Ionel's second result, \begin{equation}\label{vvbb} \frac{1}{t}\Gamma = \frac{1}{t} \Gamma(0,x) + \frac{1}{4} \log(1+4x) -\sum_{k=1}^\infty \sum_{j=0}^k t^{k} c_{k,j} (-x)^j (1+4x)^{-j-\frac{k}{2}}\ . \end{equation} Let $c_{k,j}^0 = c_{k,j}$. We define the constants $c_{k,j}^n$ for $n\geq 1$ by \begin{multline*} \left( x \frac{d}{dx}\right)^n \frac{1}{t}\Gamma =\left( x \frac{d}{dx}\right)^{n-1} \left( \frac{-1}{2t} + \frac{1}{2t}\sqrt{1+4x}\right)\\ - \sum_{k=0}^\infty \sum_{j=0}^{k+n} t^{k} c^n_{k,j} (-x)^j (1+4x)^{-j-\frac{k}{2}} \ . \end{multline*} \begin{Lemma} \label{gqq2} For $n>0$, there are constants $b^n_j$ satisfying $$ \left( x \frac{d}{dx}\right)^{n-1} \left(\frac{1}{2t}\sqrt{1+4x}\right) = \sum_{j=0}^{n-1} b^n_j u^{-1} y^j \ . 
$$ Moreover, $b^n_{n-1} = -2^{n-2}\cdot(2n-5)!!$ where $(-1)!!=1$ and $(-3)!!=-1$. \end{Lemma} \begin{proof} The result is obtained by simple induction. The negative evaluations $(-1)!!=1$ and $(-3)!!=-1$ arise from the $\Gamma$-regularization. \end{proof} \begin{Lemma}\label{ggrr} For $n>0$, we have $c_{0,n}^n = 4^{n-1}(n-1)!$. \end{Lemma} \begin{proof} The coefficients $c_{0,n}^n$ are obtained directly from the $t^0$ summand $\frac{1}{4} \log(1+4x)$ of \eqref{vvbb}. \end{proof} \begin{Lemma} For $n>0$ and $k>0$, we have $$c_{k,k+n}^n = (6k)(6k+4)\cdots (6k+4(n-1))\ c_{k,k}.$$ \label{gqq3} \end{Lemma} \begin{proof} The coefficients $c^n_{k,k+n}$ are extremal. The differential operators $x \frac{d}{dx}$ must always attack the $(1+4x)^{-j-\frac{k}{2}}$ in order to contribute $c^n_{k,k+n}$. The formula follows by inspection. \end{proof} Consider next the full set of equations given by Theorem \ref{mmnn} in the expanded form of Section \ref{LLL}. The function $F_{n,m}$ may be rewritten as \begin{eqnarray*} F_{n,m}(t,x) & =& - \sum_{d=1}^\infty \sum_{s=-1}^\infty \widetilde{C}^d_{s}\ \kappa_{s+m} t^{s+m} \frac{d^n x^d}{d!} \\ & = & -t^m \left(x\frac{d}{dx}\right)^n \sum_{d=1}^\infty \sum_{s=-1}^\infty \widetilde{C}^d_{s}\ \kappa_{s+m} t^{s} \frac{x^d}{d!}. \end{eqnarray*} We may write the result in terms of the constants $b^n_j$ and $c^n_{k,j}$, \begin{multline*} t^{-(m-n)}F_{n,m} = -\delta_{n,1}\frac{\kappa_{m-1}}{2} \\ + (1+4y)^{-\frac{n}{2}} \Big( \sum_{j=0}^{n-1}\kappa_{m-1}b_j^n u^{n-1} y^j - \sum_{k=0}^\infty \sum_{j=0}^{k+n} \kappa_{k+m} c^n_{k,j} u^{k+n} y^j \Big) \end{multline*} Define the functions $G_{n,m}(u,y)$ by $$G_{n,m}(u,y) = \sum_{j=0}^{n-1}\kappa_{m-1}b_j^n u^{n-1} y^j - \sum_{k=0}^\infty \sum_{j=0}^{k+n} \kappa_{k+m} c^n_{k,j} u^{k+n} y^j \ .$$ Let $\sigma=(1^{a_1}2^{a_2}3^{a_3} \ldots)$ be a partition of length $\ell(\sigma)$ and size $|\sigma|$. We assume the parity condition \begin{equation}\label{par2} g\equiv r + |\sigma|+1\ . 
\end{equation} Let $G_\sigma^\pm(u,y)$ be the following function associated to $\sigma$, $$G_\sigma^\pm(u,y) = \sum_{\sigma^{\bullet}\in \mathcal{S}(\sigma)} \prod_{i=1}^{\ell(\sigma^{\bullet})} \left(G_{\ell(\sigma^{(i)}), |\sigma^{(i)}|} \pm \frac{\delta_{\ell(\sigma^{(i)}),1}}{2} \sqrt{1+4y}\ \kappa_{|\sigma^{(i)}|-1}\right)\ .$$ The relations of Theorem \ref{mmnn} in the expanded form of Section \ref{exf} written in the variables $u$ and $y$ are \begin{equation*} \Big[ (1+4y)^{\frac{r-|\sigma|-g+2d-1}{2}} \exp(-\gamma^c) \left( G_\sigma^+ + G_\sigma^-\right) \Big]_{u^{r-|\sigma|+\ell(\sigma)}y^d} = 0 \end{equation*} In fact, the relations of Proposition \ref{better} here take a much more efficient form. We obtain the following result. \begin{Proposition} In $R^r({\mathcal{M}}_g)$, the relation \label{midb} \begin{equation*} \Big[ (1+4y)^{\frac{r-|\sigma|-g+2d-1}{2}} \exp\left(-\gamma^c +\sum_{\sigma \neq \emptyset} G_{\ell(\sigma),|\sigma|} \frac{\mathbf{p}^\sigma}{|\text{\em Aut}(\sigma)|} \right) \Big]_{u^{r-|\sigma|+\ell(\sigma)}y^d\mathbf{p}^\sigma} = 0 \end{equation*} holds when $g-2d-1 +|\sigma| < r$ and $g\equiv r + |\sigma|+1$ mod 2. \end{Proposition} Consider the exponent of $1+4y$. By the inequality and the parity condition \eqref{par2}, $$\frac{r-|\sigma|-g+2d-1}{2}\geq 0$$ and the fraction is integral. Hence, the $y$ degree of the prefactor \begin{equation}\label{xcx3} (1+4y)^{\frac{r-|\sigma|-g+2d-1}{2}} \end{equation} is exactly $\frac{r-|\sigma|-g+2d-1}{2}$. The $y$ degree of the exponential factor is bounded from above by the $u$ degree. We conclude the relation of Theorem \ref{mmnn} is {\em trivial} unless $$r-|\sigma|+\ell(\sigma) \geq d - {\frac{r-|\sigma|-g+2d-1}{2}} = -\frac{r-|\sigma|}{2} +\frac{g+1}{2} \ .$$ Rewriting the inequality, we obtain $$3r \geq g+1 + 3|\sigma|-2\ell(\sigma)$$ which is consistent with the proven freeness of $R^*({\mathcal{M}}_g)$ up to (and including) degree $\lfloor \frac{g}{3} \rfloor$. 
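The extremal coefficients $c_{k,k}$ entering $\gamma^c$ can be computed independently from Ionel's recursion for the $q_{k,j}$: since $c_{k,j}$ vanishes for $j>k$, the relation $q_{k,j}=(2k+4j)c_{k,j}+(j+1)c_{k,j+1}$ gives $c_{k,k}=q_{k,k}/6k$. The Python check below (a verification aside only) compares the result with the logarithm of the hypergeometric-type series $\sum_{k\geq 0}\frac{(6k)!}{(2k)!(3k)!}(z/72)^k$:

```python
from fractions import Fraction
from math import factorial

K = 8  # number of coefficients checked

# q_{k,j} via Ionel's recursion (q_{k,j} = 0 unless k >= j >= 0)
q = {(0, 0): Fraction(1)}
def Q(k, j):
    if k < 0 or j < 0 or j > k:
        return Fraction(0)
    return q.get((k, j), Fraction(0))
for k in range(1, K + 1):
    for j in range(k + 1):
        q[(k, j)] = ((2 * k + 4 * j - 2) * Q(k - 1, j - 1)
                     + (j + 1) * Q(k - 1, j)
                     + sum(Q(m, l) * Q(k - 1 - m, j - 1 - l)
                           for m in range(k) for l in range(j)))

# extremal coefficients: q_{k,k} = 6k * c_{k,k} since c_{k,k+1} = 0
c_diag = [Fraction(0)] + [q[(k, k)] / (6 * k) for k in range(1, K + 1)]

# coefficients of A(z) = sum_k (6k)!/((2k)!(3k)!) (z/72)^k and of log A(z)
A = [Fraction(factorial(6 * k), factorial(2 * k) * factorial(3 * k) * 72**k)
     for k in range(K + 1)]
L = [Fraction(0)]
for n in range(1, K + 1):
    L.append(A[n] - sum(m * L[m] * A[n - m] for m in range(1, n)) / Fraction(n))

assert c_diag == L  # sum_k c_{k,k} z^k = log A(z)
```

The first value is $c_{1,1} = q_{1,1}/6 = 5/6$, agreeing with the linear coefficient of the logarithm.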
\subsection{Another form} A subset of the equations of Proposition \ref{midb} admits an especially simple description. Consider the function \begin{multline*} H_{n,m}(u) = 2^{n-2}(2n-5)!! \ \kappa_{m-1} u^{n-1} + 4^{n-1}(n-1)!\ \kappa_m u^n \\ + \sum_{k=1}^\infty (6k)(6k+4)\cdots (6k+4(n-1)) c_{k,k}\ \kappa_{k+m} u^{k+n} \ . \end{multline*} \begin{Proposition} In $R^r({\mathcal{M}}_g)$, the relation \label{best} \begin{equation*} \Big[ \exp\left(-\sum_{k=1}^\infty c_{k,k} \kappa_k u^k -\sum_{\sigma \neq \emptyset} H_{\ell(\sigma),|\sigma|} \frac{\mathbf{p}^\sigma}{|\text{\em Aut}(\sigma)|} \right) \Big]_{u^{r-|\sigma|+\ell(\sigma)}\mathbf{p}^\sigma} = 0 \end{equation*} holds when $3r \geq g+1 + 3|\sigma|-2\ell(\sigma)$ and $g\equiv r + |\sigma|+1$ mod 2. \end{Proposition} \begin{proof} Let $g\equiv r + |\sigma|+1$, and let $$ \frac{3}{2}r -\frac{1}{2}g - \frac{1}{2}-\frac{3}{2} |\sigma| + \ell(\sigma) = \Delta \geq 0\ .$$ By the parity condition, $\Delta$ is an integer. For $0 \leq \delta \leq \Delta$, let $$\mathsf{E}_\delta(g,r,\sigma) = \Big[ \exp\left(-\gamma^c +\sum_{\sigma \neq \emptyset} G_{\ell(\sigma),|\sigma|} \frac{\mathbf{p}^\sigma}{|\text{Aut}(\sigma)|} \right) \Big]_{u^{r-|\sigma|+\ell(\sigma)}y^{r-|\sigma|+\ell(\sigma)-\delta}\mathbf{p}^\sigma} \ . $$ The $\delta=0$ case is special. Only the monomials of $G_{n,m}$ of equal $u$ and $y$ degree contribute to the relations of Proposition \ref{midb}. By Lemmas \ref{gqq2} - \ref{gqq3}, $-H_{n,m}(uy)$ is exactly the subsum of $G_{n,m}$ consisting of such monomials. Similarly, $$\sum_{k=1}^\infty c_{k,k} \kappa_k u^ky^k $$ is the subsum of $\gamma^c$ of monomials of equal $u$ and $y$ degree. 
Hence, \begin{multline*} \mathsf{E}_0(g,r,\sigma) = \\ \Big[ \exp\left(-\sum_{k=1}^\infty c_{k,k} \kappa_k u^ky^k -\sum_{\sigma \neq \emptyset} H_{\ell(\sigma),|\sigma|}(uy) \frac{\mathbf{p}^\sigma}{|\text{Aut}(\sigma)|} \right) \Big]_{(uy)^{r-|\sigma|+\ell(\sigma)}\mathbf{p}^\sigma} = \\ \Big[ \exp\left(-\sum_{k=1}^\infty c_{k,k} \kappa_k u^k -\sum_{\sigma \neq \emptyset} H_{\ell(\sigma),|\sigma|}(u) \frac{\mathbf{p}^\sigma}{|\text{Aut}(\sigma)|} \right) \Big]_{u^{r-|\sigma|+\ell(\sigma)}\mathbf{p}^\sigma} \ . \end{multline*} We consider the relations of Proposition \ref{midb} for fixed $g$, $r$, and $\sigma$ as $d$ varies. In order to satisfy the inequality $g-2d-1 +|\sigma| < r$, let $$d(\widehat{\delta}) = \frac{-r+g +1+ |\sigma|}{2}+ \widehat{\delta}\ , \ \ \ \text{for} \ \ \widehat{\delta}\geq 0 .$$ For $0 \leq \widehat{\delta} \leq \Delta$, the relation of Proposition \ref{midb} for $g$, $r$, $\sigma$, and $d(\widehat{\delta})$ is $$\sum_{i=0}^{\widehat{\delta}} 4^i \binom{\widehat{\delta}}{i} \cdot \mathsf{E}_{\Delta-\widehat{\delta}+i}(g,r,\sigma) = 0\ .$$ As $\widehat{\delta}$ varies, we therefore obtain all the relations \begin{equation}\label{lxx3} \mathsf{E}_{\delta}(g,r,\sigma) = 0 \ \end{equation} for $0\leq \delta \leq \Delta$. The relations of Proposition \ref{best} are obtained when $\delta=0$ in \eqref{lxx3}. \end{proof} The main advantage of Proposition \ref{best} is the dependence on only the function \begin{equation}\label{njjk} \sum_{k=1}^\infty c_{k,k} z^k = \log\left( \sum_{k=0}^\infty \frac{(6k)!}{(2k)!(3k)!} \left(\frac{z}{72}\right)^k \right)\ . \end{equation} Proposition \ref{best} only provides finitely many relations for fixed $g$ and $r$. In Section \ref{pppp}, we show Proposition \ref{best} is equivalent to the Faber-Zagier conjecture. \subsection{Relations left behind} In our analysis of relations obtained from the virtual geometry of the moduli space of stable quotients, twice we have discarded large sets of relations. 
In Section \ref{exrel}, instead of analyzing all of the geometric possibilities \begin{equation*} \nu_*\left(\prod_{i=1}^n\epsilon_*(s^{a_i} \omega^{b_i}) \cdot 0^c \cap [Q_g(\PP^1,d)]^{vir} \right)= 0\ \ \text{in} \ A^*({\mathcal{M}}_g,\mathbb{Q}) \ , \end{equation*} we restricted ourselves to the case where $a_i=1$ for all $i$. And just now, instead of keeping all the relations \eqref{lxx3}, we restricted ourselves to the ${\delta=0}$ cases. In both instances, the restricted set was chosen to allow further analysis. In spite of the discarding, we will arrive at the Faber-Zagier relations. We expect the discarded relations are all redundant (consistent with Conjecture 2), but we do not have a proof. \section{Equivalence} \label{pppp} \subsection{Notation} The relations in Proposition~\ref{best} have a similar flavor to the Faber-Zagier relations. We start with formal series related to \[ A(z) = \sum_{i=0}^\infty \frac{(6i)!}{(3i)!(2i)!}\left(\frac{z}{72}\right)^i, \] we insert classes $\kappa_r$, we exponentiate, and we extract coefficients to obtain relations among the $\kappa$ classes. In order to make the similarities clearer, we will introduce additional notation. If $F$ is a formal power series in $z$, \[ F = \sum_{r=0}^\infty c_rz^r \] with coefficients in a ring, let \[ \{F\}_\kappa = \sum_{r=0}^\infty c_r\kappa_rz^r \] be the series with $\kappa$-classes inserted. Let $A$ be as above, and let \[ B(z) = \sum_{i=0}^\infty \frac{(6i)!}{(3i)!(2i)!}\frac{6i+1}{6i-1}\left(\frac{z}{72}\right)^i \] be the second power series appearing in the Faber-Zagier relations. Let $$C = \frac{B}{A}\ ,$$ and let $$E = \exp(-\{\log(A)\}_\kappa) = \exp\left(-\sum_{k=1}^\infty c_{k,k}\kappa_kz^k\right).$$ We will rewrite the Faber-Zagier relations and the relations of Proposition~\ref{best} in terms of $C$ and $E$. The equivalence between the two will rely on the principal differential equation satisfied by $C$, \begin{equation}\label{diffeq} 12z^2\frac{dC}{dz} = 1 + 4zC - C^2. 
\end{equation} \subsection{Rewriting the relations} The relations conjectured by Faber and Zagier are straightforward to rewrite using the above notation: \begin{multline}\label{FZ0} \Bigg[E\cdot\exp\Big(-\Big\{\log\big(1+p_3z+p_6z^2+\cdots\\ +C(p_1+p_4z+p_7z^2+\cdots)\big)\Big\}_\kappa\Big)\Bigg]_{z^rp^\sigma} = 0 \end{multline} for $3r \ge g+|\sigma|+1$ and $3r\equiv g+|\sigma|+1$ mod $2$. The above relation \eqref{FZ0} will be denoted $\FZ(r,\sigma)$. The stable quotient relations of Proposition~\ref{best} are more complicated to rewrite in terms of $C$ and $E$. Define a sequence of power series $(C_n)_{n\ge 1}$ by \begin{multline*} 2^{-n}C_n = 2^{n-2}(2n-5)!!z^{n-1} + 4^{n-1}(n-1)!z^n \\ + \sum_{k=1}^\infty (6k)(6k+4)\cdots(6k+4(n-1))c_{k,k}z^{k+n}. \end{multline*} We see $$H_{n,m}(z) = 2^{-n}z^{n-m}\{z^{m-n}C_n\}_\kappa.$$ The series $C_n$ satisfy \begin{equation}\label{Crecur} C_1 = C, \ \ \ \ C_{i+1} = \left(12z^2\frac{d}{dz}-4iz\right)C_i. \end{equation} Using the differential equation \eqref{diffeq}, each $C_n$ can be expressed as a polynomial in $C$ and $z$: \[ C_1 = C, \ \ C_2 = 1-C^2,\ \ C_3 = -8z-2C+2C^3, \ldots, \ . \] Proposition~\ref{best} can then be rewritten as follows (after an appropriate change of variables): \begin{equation}\label{SQ} \left[E\cdot\exp\left(-\sum_{\sigma\ne\emptyset}\{z^{|\sigma|-\ell(\sigma)}C_{\ell(\sigma)}\}_\kappa\frac{p^\sigma}{|\Aut(\sigma)|}\right)\right]_{z^rp^\sigma} = 0 \end{equation} for $3r \ge g+3|\sigma|-2\ell(\sigma)+1$ and $3r \equiv g+3|\sigma|-2\ell(\sigma)+1$ mod $2$. The above relation \eqref{SQ} will be denoted $\SQ(r,\sigma)$. The $\FZ$ and $\SQ$ relations now look much more similar, but the relations in (\ref{FZ0}) are indexed by partitions with no parts of size $2$ mod $3$ and satisfy a slightly different inequality. 
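Before comparing the two sets of relations further, the differential equation \eqref{diffeq} and the first few polynomials $C_n$ can be verified on truncated power series. The sketch below (a verification aside only) computes $C=B/A$ with exact coefficients, checks \eqref{diffeq} coefficientwise, and applies the recursion \eqref{Crecur}:

```python
from fractions import Fraction
from math import factorial

N = 12  # truncation order

def hyp(weight):
    # coefficients of sum_i (6i)!/((3i)!(2i)!) * weight(i) * (z/72)^i
    return [Fraction(factorial(6 * i), factorial(3 * i) * factorial(2 * i) * 72**i)
            * weight(i) for i in range(N + 1)]

A = hyp(lambda i: Fraction(1))
B = hyp(lambda i: Fraction(6 * i + 1, 6 * i - 1))   # note the i = 0 term is -1

C = []                                   # C = B/A as a truncated series
for n in range(N + 1):
    C.append(B[n] - sum(A[k] * C[n - k] for k in range(1, n + 1)))

def mul(p, q):
    return [sum(p[k] * q[n - k] for k in range(n + 1)) for n in range(N + 1)]

def op(p, i):
    # (12 z^2 d/dz - 4 i z) applied to a truncated series
    out = [Fraction(0)] * (N + 1)
    for n in range(N):
        out[n + 1] = (12 * n - 4 * i) * p[n]
    return out

Csq, Ccube = mul(C, C), mul(mul(C, C), C)

# 12 z^2 dC/dz = 1 + 4zC - C^2, checked coefficientwise
lhs = [Fraction(0)] + [12 * n * C[n] for n in range(N)]
rhs = [1 - Csq[0]] + [4 * C[n - 1] - Csq[n] for n in range(1, N + 1)]
assert lhs == rhs

# C_2 = 1 - C^2 and C_3 = -8z - 2C + 2C^3 from the recursion (Crecur)
C2 = op(C, 1)
C3 = op(C2, 2)
assert C2 == [1 - Csq[0]] + [-Csq[n] for n in range(1, N + 1)]
assert C3 == [-8 * (n == 1) - 2 * C[n] + 2 * Ccube[n] for n in range(N + 1)]
```

The series begins $C = -1 + 2z + 10z^2 + \cdots$, so each identity is an exact statement about finitely many rational coefficients at every truncation order.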
The indexing differences can be erased by observing that the variables $p_{3k}$ are actually not necessary in (\ref{FZ0}) if we are just interested in the \emph{ideal} generated by a set of relations (rather than the linear span). This observation follows from the identity \[ -\FZ(r,\sigma \sqcup 3a) = \kappa_a\FZ(r-a,\sigma) + \sum_\tau \FZ(r, \tau), \] where the sum runs over the $\ell(\sigma)$ partitions $\tau$ (possibly repeated) formed by increasing one of the parts of $\sigma$ by $3a$. If we remove the variables $p_{3k}$ and reindex the others by replacing $p_{3k+1}$ with $p_{k+1}$, we obtain the following equivalent form of the $\FZ$ relations: \begin{equation}\label{FZ} \Big[E\cdot\exp\big(-\big\{\log(1+C(p_1+p_2z+p_3z^2+\cdots))\big\}_\kappa\big)\Big]_{z^rp^\sigma} = 0 \end{equation} for $3r \ge g+3|\sigma|-2\ell(\sigma)+1$ and $3r \equiv g+3|\sigma|-2\ell(\sigma)+1$ mod $2$. \subsection{Comparing the relations} We now explain how to write the $\SQ$ relations (\ref{SQ}) as linear combinations of the $\FZ$ relations (\ref{FZ}) with coefficients in $\mathbb{Q}[\kappa_0,\kappa_1,\kappa_2,\ldots]$. In fact, the associated matrix will be triangular with diagonal entries equal to $1$. We start with further notation. For a partition $\sigma$, let \[ \FZ_\sigma = \left[\exp\left(-\left\{\log(1+C(p_1+p_2z+p_3z^2+\cdots))\right\}_\kappa\right)\right]_{p^\sigma} \] and \[ \SQ_\sigma = \left[\exp\left(-\sum_{\sigma\ne\emptyset}\{z^{|\sigma|-\ell(\sigma)}C_{\ell(\sigma)}\}_\kappa\frac{p^\sigma}{|\Aut(\sigma)|}\right)\right]_{p^\sigma} \] be power series in $z$ with coefficients that are polynomials in the $\kappa$ classes. The relations themselves are given by $$\FZ(r,\sigma)=[E\cdot\FZ_\sigma]_{z^r}\ , \ \ \ \SQ(r,\sigma)= [E\cdot\SQ_\sigma]_{z^r}\ .$$ It is straightforward to expand $\FZ_\sigma$ and $\SQ_\sigma$ as linear combinations of products of factors $\{z^a C^b\}$ for $a\ge 0$ and $b\ge 1$, with coefficients that are polynomials in the kappa classes. 
When expanded, $\FZ_\sigma$ always contains exactly one term of the form \begin{equation}\label{g66h} \{z^{a_1}C\}_\kappa\{z^{a_2}C\}_\kappa\cdots\{z^{a_m}C\}_\kappa\ . \end{equation} All the other terms involve higher powers of $C$. If we expand $\SQ_\sigma$, we can look at the terms of the form \eqref{g66h} to determine what the coefficients must be when writing the $\SQ_\sigma$ as linear combinations of the $\FZ_\sigma$. For example, \begin{align*} \SQ_{(111)} &= -\frac{1}{6}\{C_3\}_\kappa + \frac{1}{2}\{C_2\}_\kappa\{C_1\}_\kappa -\frac{1}{6}\{C_1\}_\kappa^3 \\ &= \frac{4}{3}\kappa_1z + \frac{1}{3}\{C\}_\kappa - \frac{1}{3}\{C^3\}_\kappa + \frac{1}{2}(\kappa_0 - \{C^2\}_\kappa)\{C\}_\kappa - \frac{1}{6}\{C\}_\kappa^3 \\ &= \left(\frac{4}{3}\kappa_1z\right) + \left(\left(\frac{1}{3} + \frac{\kappa_0}{2}\right)\{C\}_\kappa\right)\\ & \ \ \ \ \ \ \ \ \ \ \ \ + \left(-\frac{1}{3}\{C^3\}_\kappa -\frac{1}{2}\{C^2\}_\kappa\{C\}_\kappa -\frac{1}{6}\{C\}_\kappa^3\right) \\ &= \frac{4}{3}\kappa_1z\FZ_\emptyset + \left(-\frac{1}{3} - \frac{\kappa_0}{2}\right)\FZ_{(1)} + \FZ_{(111)}. \end{align*} In general we must check that the terms involving higher powers of $C$ also match up. The matching will require an identity between the coefficients of $C_i$ when expressed as polynomials in $C$. Define polynomials $f_{ij}\in\mathbb{Z}[z]$ by \[ C_i = \sum_{j=0}^if_{ij}C^j. \] It will also be convenient to write $f_{ij} = \sum_{k} f_{ijk}z^k$, so \[ C_i = \sum_{\substack{ j,k\ge 0 \\ j + 3k \le i }} f_{ijk}z^kC^j. \] If we define \[ F = 1+\sum_{i,j\ge 1}\frac{(-1)^{j-1}f_{ij}}{i!(j-1)!}x^iy^j \in \mathbb{Q}[z][[x,y]], \] then we will need a single property of the power series $F$. \begin{Lemma}\label{exponential} There exists a power series $G\in\mathbb{Q}[z][[x]]$ such that $F = e^{yG}$. 
\end{Lemma} \begin{proof} The recurrence \eqref{Crecur} for the $C_i$ together with the differential equation \eqref{diffeq} satisfied by $C$ yield a recurrence relation for the polynomials $f_{ij}$: \[ f_{i+1, j} = 12z^2\frac{df_{ij}}{dz} + (j+1)f_{i, j+1} + 4(j-i)zf_{ij} - (j-1)f_{i, j-1}. \] This recurrence relation for the coefficients of $F$ is equivalent to a differential equation: \[ F_x = 12z^2F_z - yF_{yy} + 4zyF_y - 4zxF_x + yF. \] Now, let $G\in\mathbb{Q}[z][[x,y]]$ be $\frac{1}{y}$ times the logarithm of $F$ (as a formal power series). The differential equation for $F$ can be rewritten in terms of $G$: \[ G_x = 12z^2G_z - 2G_y - yG_{yy}-(G + yG_y)^2 + 4z(G + yG_y) - 4zxG_x + 1. \] We now claim that the coefficient of $x^ky^l$ in $G$ is zero for all $k\ge 0, l\ge 1$, as desired. For $k = 0$ this is a consequence of the fact that $F = 1 + O(xy)$ and thus $G = O(x)$, and higher values of $k$ follow by induction using the differential equation above. \end{proof} We can now write the $\SQ_\sigma$ as linear combinations of the $\FZ_\sigma$. \begin{Theorem}\label{combination} Let $\sigma$ be a partition. Then $\SQ_\sigma - \FZ_\sigma$ is a $\mathbb{Q}$-linear combination of terms of the form \[ \kappa_{\mu}z^{|\mu|}\FZ_{\tau}, \] where $\mu$ and $\tau$ are partitions ($\mu$ possibly containing parts of size $0$) satisfying $\ell(\tau) < \ell(\sigma)$, $3|\mu| + 3|\tau| - 2\ell(\tau) \le 3|\sigma| - 2\ell(\sigma)$, and $$3|\mu| + 3|\tau| - 2\ell(\tau) \equiv 3|\sigma| - 2\ell(\sigma)\ \mod 2\ . $$ \end{Theorem} \begin{proof} We will need some additional notation for subpartitions. If $\sigma$ is a partition of length $\ell(\sigma)$ with parts $\sigma_1,\sigma_2,\ldots$ (ordered by size) and $S$ is a subset of $\{1,2,\ldots, \ell(\sigma)\}$, then let $\sigma_S \subset \sigma$ denote the subpartition consisting of the parts $(\sigma_i)_{i\in S}$.
Using this notation, we explicitly expand $\SQ_\sigma$ and $\FZ_\sigma$ as sums over set partitions of $\{1,\ldots, \ell(\sigma)\}$: \[ \SQ_\sigma = \frac{1}{|\Aut(\sigma)|}\sum_{P\vdash \{1,\ldots,\ell(\sigma)\}}\prod_{S\in P} \left(\sum_{j,k}-f_{|S|,j,k}\{z^{|\sigma_S|-|S|+k}C^j\}_\kappa\right), \] \[ \FZ_\sigma = \frac{1}{|\Aut(\sigma)|}\sum_{P\vdash \{1,\ldots,\ell(\sigma)\}}\prod_{S\in P} \left((-1)^{|S|}(|S|-1)!\{z^{|\sigma_S|-|S|}C^{|S|}\}_\kappa\right). \] Matching coefficients for terms of the form \eqref{g66h} tells us what the linear combination must be. We claim \begin{align} \label{dcczz} \SQ_\sigma = &\sum_{\substack{R \vdash \{1,\ldots,\ell(\sigma)\} \\ P \sqcup Q = R \\ k:R\to \mathbb{Z}_{\ge 0}}}\frac{|\Aut(\sigma')|}{|\Aut(\sigma)|}\times \\ \nonumber &\prod_{S\in P}(-f_{|S|,0,k(S)}\kappa_{|\sigma_S|-|S|+k(S)}z^{|\sigma_S|-|S|+k(S)})\prod_{S\in Q}(f_{|S|,1,k(S)})\FZ_{\sigma'}, \end{align} where $\sigma'$ is the partition with parts $|\sigma_S| - |S| + 1 + k(S)$ for $S\in Q$. Using the vanishing $f_{i,j,k} = 0$ unless $j + 3k \le i$ and $j + 3k \equiv i\mod 2$, we easily check the above expression for $\SQ_\sigma$ is of the desired type. Expanding $\SQ_\sigma$ and $\FZ_{\sigma'}$ in \eqref{dcczz} and canceling out the terms involving the $f_{i,0,k}$ coefficients, it remains to prove \begin{align*} &\sum_{\substack{Q\vdash \{1,\ldots,\ell(\sigma)\} \\ k:Q\to\mathbb{Z}_{\ge 0} \\ j:Q\to\mathbb{N}}}\prod_{S\in Q} \left(-f_{|S|,j(S),k(S)}\{z^{|\sigma_S|-|S|+k(S)}C^{j(S)}\}_\kappa\right) \\ &= \sum_{\substack{Q\vdash \{1,\ldots,\ell(\sigma)\} \\ k:Q\to\mathbb{Z}_{\ge 0}}}\prod_{S\in Q}(f_{|S|,1,k(S)})\sum_{P\vdash \{1,\ldots,\ell(\sigma')\}}\prod_{S\in P}\left((-1)^{|S|}(|S|-1)!\{z^{|(\sigma')_S| - |S|}C^{|S|}\}_\kappa\right). 
\end{align*} A single term on the left side of the above equation is determined by choosing a set partition $Q_{\text{left}}$ of $\{1,\ldots,\ell(\sigma)\}$ and then, for each part $S$ of $Q_{\text{left}}$, choosing a positive integer $j(S)$ and a nonnegative integer $k_{\text{left}}(S)$. We claim that this term is the sum of the terms of the right side given by choices $Q_{\text{right}}$, $k_{\text{right}}$, $P$ such that $Q_{\text{right}}$ is a refinement of $Q_{\text{left}}$ that breaks each part $S$ of $Q_{\text{left}}$ into exactly $j(S)$ parts of $Q_{\text{right}}$, $P$ is the associated grouping of the parts of $Q_{\text{right}}$, and the $k_{\text{right}}(S)$ satisfy $$k_{\text{left}}(S) = \sum_{T\subseteq S}k_{\text{right}}(T)\ .$$ These terms are all integer multiples of the same product of $\{z^aC^b\}_\kappa$ factors, so we are left to prove the identity \begin{equation}\label{l399} \frac{(-1)^{j_0-1}}{(j_0-1)!}f_{i_0,j_0,k_0} = \sum_{\substack{P\vdash \{1,\ldots,i_0\} \\ |P| = j_0 \\ k:P\to\mathbb{Z}_{\ge 0} \\ |k| = k_0}}\prod_{S\in P}f_{|S|,1,k(S)}. \end{equation} But by the exponential formula, identity \eqref{l399} is simply a restatement of Lemma~\ref{exponential}. \end{proof} The conditions on the linear combination in Theorem~\ref{combination} are precisely those needed so that multiplying by $E$ and taking the coefficient of $z^r$ allows us to write any $\SQ$ relation as a linear combination of $\FZ$ relations. The associated matrix is triangular with respect to the partial ordering of partitions by size, and the diagonal entries are equal to $1$. Hence, the matrix is invertible. We conclude the $\SQ$ relations are equivalent to the $\FZ$ relations.
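Lemma~\ref{exponential} (equivalently, identity \eqref{l399}) can be verified symbolically to low order by building $F$ from the $C_i$ and checking that $\log F$ is linear in $y$. The differential equation for $C$ is not reprinted in this section; the sketch below uses $12z^2C'=1+4zC-C^2$, which is forced by the printed value $C_2=(12z^2\frac{d}{dz}-4z)C=1-C^2$:

```python
import sympy as sp

z, C, x, y = sp.symbols('z C x y')
dC = (1 + 4*z*C - C**2) / (12*z**2)   # forced by C_2 = 1 - C^2

# C_1, ..., C_5 via the recurrence (Crecur), as polynomials in z and C
N = 5
Cn = {1: C}
for i in range(1, N):
    P = Cn[i]
    Cn[i + 1] = sp.expand(12*z**2*(sp.diff(P, z) + sp.diff(P, C)*dC) - 4*i*z*P)

# F - 1 = sum_{i,j >= 1} (-1)^(j-1) f_{ij} x^i y^j / (i! (j-1)!)
F1 = sp.expand(sum((-1)**(j - 1) * Cn[i].coeff(C, j) * x**i * y**j
                   / (sp.factorial(i) * sp.factorial(j - 1))
                   for i in range(1, N + 1) for j in range(1, i + 1)))

# log F as a formal series in x (exact through order x^N, since F1 = O(x))
logF = sp.expand(sum((-1)**(m - 1) * F1**m / m for m in range(1, N + 1)))
logF = sp.expand(sum(t for t in logF.as_ordered_terms()
                     if sp.degree(t, x) <= N))

# Lemma: log F = y * G(x, z), i.e. every surviving term is linear in y
assert all(sp.degree(t, y) == 1 for t in logF.as_ordered_terms())
print("log F is linear in y through order x^%d" % N)
```

Through order $x^5$ all higher powers of $y$ cancel, as the lemma predicts.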
\section{Introduction} With respect to the strong interaction, almost all hadrons are resonances. In lattice studies, due to the finite lattice volume the smallest nonzero momenta come in units of $2\pi/L$. Moreover, at unphysically heavy pion masses decay channels are often not open, or the resulting phase space is small, leading to energy levels in the vicinity of the resonance energy. This motivates the identification of the low energy levels with masses of the corresponding resonances. Eventually, towards physical pion masses and larger lattices, this interpretation becomes invalid and the observed energy levels show a more intricate pattern, related in the elastic channel to two-hadron states \cite{Luscher:1990ux,Luscher:1991cf}. Recent work studying correlators of only single hadron operators \cite{Bulava:2010yg,Engel:2010my,Engel:2012qp} found no clear signal of possibly coupling two-hadron states (with the possible exception of $s$ wave channels). It was concluded that for a full study one should include such interpolators explicitly. In \cite{Lang:2011mn,Lang:2012sv} it was demonstrated in meson correlation studies that neglecting two-meson interpolators may obscure the obtained energy level picture in some cases. Attempts towards including meson-baryon interpolators are discussed in \cite{Gockeler:2012yj,Hall:2012wz}, and a recent study including $\pi N$ interpolators in the negative parity nucleon sector demonstrated significant effects in the observed energy spectrum \cite{Lang:2012db}. The present work is a continuation of a study of single baryon correlators, with more ensembles and larger statistics compared to \cite{Engel:2010my}. As before, we see no obvious signal of coupling meson-baryon channels (with a few possible exceptions where the meson-baryon system is in $s$ wave, as will be discussed). We therefore identify the lowest energy levels with baryon ground states and excitations.
We use two mass-degenerate light quarks with the Chirally Improved (CI) fermion action \cite{Gattringer:2000js,Gattringer:2000qu,Engel:2010my}. The strange quark is treated as a valence quark; its mass is fixed by setting the $\Omega$ mass to its physical value. The pion masses for the seven ensembles of 200--300 gauge configurations each range from 255 MeV to 596 MeV, with lattice size $16^3\times 32$ and lattice spacing between 0.1324 and 0.1398 fm. For two ensembles with light pion masses, lattices of size $12^3\times 24$ and $24^3\times 48$ were also used to allow extrapolation to infinite volume. Other recent studies aiming at light and strange baryon excitations, some of them with 2+1 dynamical quarks, include \cite{WalkerLoud:2008bp,WalkerLoud:2008pj,Bulava:2009jb,Bulava:2010yg,Bulava:2011xj,Edwards:2011jj,Mahbub:2010rm,Mahbub:2010vu,Menadue:2011pd,Mahbub:2012ri,Alexandrou:2012xk,Arthur:2012yc}. In \cite{Edwards:2012fx} excited spectra for non-strange and strange baryons are derived from anisotropic lattices and standard improved Wilson fermions. See also the recent reviews \cite{Bulava:2011np,Lin:2011ti,Fodor:2012gf} and references therein. In Sect. \ref{setup} we discuss the setup of our simulations and remark on the methods used for the data analysis. Results from the $16^3\times 32$ lattices for light and strange baryons are presented in Sections \ref{sec:baryons:light} and \ref{sec:baryons:strange}, respectively. In Sect. \ref{sec:results_vol} the infinite volume extrapolation and uncertainties with regard to the strange quark mass chosen in our simulations are discussed. We conclude with a summary in Sect. \ref{summary}. \section{\label{setup}Setup of the Simulation and Analysis} The CI fermion action \cite{Gattringer:2000js,Gattringer:2000qu} results from a parametrization of a general fermion action connecting each site along gauge link paths to other sites up to distance three (in lattice units).
This truncated ansatz is used to solve the Ginsparg-Wilson equation algebraically. The action consists of several hundred terms and obeys the Ginsparg-Wilson relation approximately. It was used in quenched \cite{Gattringer:2003qx,Burch:2006cc} and dynamical simulations \cite{Lang:2005jz}. It was found that the small eigenvalues fluctuate predominantly towards the inside of the Ginsparg-Wilson unit circle \cite{Gattringer:2008vj}. Exceptionally small eigenvalues are suppressed, which makes it possible to simulate smaller pion masses on coarse lattices. For further improvement of the fermion action, one level of stout smearing of the gauge fields \cite{Morningstar:2003gk} was included in its definition. The parameters are adjusted such that the value of the plaquette is maximized ($\rho=0.165$ following \cite{Morningstar:2003gk}). For the pure gauge field part of the action we use the tadpole-improved L\"uscher-Weisz gauge action \cite{Luscher:1984xn}. For a given gauge coupling we use the same assumed plaquette value for the different values of the bare quark mass parameter. The lattice spacing $a$ is defined as discussed in \cite{Engel:2011aa}, using the static potential with a Sommer parameter $r_0=0.48$ fm and setting the scale at the physical pion mass for each value of $\beta_{LW}$. This value of the Sommer parameter may be slightly too small for $n_f=2$, as has been argued recently \cite{Fritzsch:2012wq,Bali:2012qs}, where a value near 0.5 fm is preferred. All parameters as well as details of the implementation in the Hybrid Monte Carlo simulation \cite{Duane:1987de,Lang:2005jz} and various quality checks are given in \cite{Engel:2010my,Engel:2011aa}. For reference we summarize the parameters of the gauge field ensembles used in Table \ref{tab:ensembles}. \begin{table*}[t] \begin{ruledtabular} \begin{tabular}{c c c c c c c c c} set& $\beta_{LW}$ &$m_0$ &$m_s$& configs.
&$m_\pi$ [MeV] &$L^3\times T \,[a^4]$ &$m_{\pi}L$ & $a$ [fm] \\ \hline A50& 4.70& -0.050 &-0.020 &200 &596(5) & $16^3\times 32$ &6.40 &0.1324(11) \\ A66& 4.70& -0.066 &-0.012 &200 &255(7) & $16^3\times 32$ &2.72 &0.1324(11) \\ B60& 4.65& -0.060 &-0.015 &300 &516(6) & $16^3\times 32$ &5.72 &0.1366(15) \\ B70& 4.65& -0.070 &-0.011 &200 &305(6) & $16^3\times 32$ &3.38 &0.1366(15) \\ C64& 4.58& -0.064 &-0.020 &200 &588(6) & $16^3\times 32$ &6.67 &0.1398(14) \\ C72& 4.58& -0.072 &-0.019 &200 &451(5) & $16^3\times 32$ &5.11 &0.1398(14) \\ C77& 4.58& -0.077 &-0.022 &300 &330(5) & $16^3\times 32$ &3.74 &0.1398(14) \\ \hline LA66& 4.70& -0.066 &-0.012 &~97 & & $24^3\times 48$ &4.08 &0.1324(11) \\ SC77& 4.58& -0.077 &-0.022 &600 & & $12^3\times 24$ &2.81 &0.1398(14) \\ LC77& 4.58& -0.077 &-0.022 &153 & & $24^3\times 48$ &5.61 &0.1398(14) \\ \end{tabular} \end{ruledtabular} \caption[Parameters of the simulation]{\label{tab:ensembles} \noindent Parameters of the simulation: Ensemble names are given in the first column. We show the gauge couplings $\beta_{LW}$, the light quark mass parameter $m_0$, the strange quark mass parameter $m_s$, the number of configurations analyzed (``configs.''), the pion mass and the volume $L^3\times T$ in lattice units. The dimensionless product of the pion mass and the spatial extent of the lattice, $m_\pi L$, enters finite volume corrections. We also give the lattice spacing $a$ as discussed in \cite{Engel:2011aa}. The three ensembles LA66, SC77 and LC77 are separated from the others by a horizontal line, since they are used only for the discussion of finite volume effects. For these ensembles we use the pion masses of A66 (LA66) and C77 (SC77 and LC77). } \end{table*} In each baryon channel with given quantum numbers the eigenenergy levels are determined with the so-called variational method \cite{Luscher:1990ck,Michael:1985ne}.
One uses interpolators with the correct symmetry properties and computes the cross-correlation matrix $C_{ik}(t)=\langle O_i(t) O_k(0)^\dagger\rangle$. One then solves the generalized eigenvalue problem \begin{equation} C(t) \vec u_n(t)=\lambda_n(t) C(t_0) \vec u_n(t) \end{equation} in order to approximately recover the energy eigenstates $|n\rangle$. The exponential decay of the eigenvalues \begin{equation}\label{eq:eigenvalues} \lambda_n(t)=\mathrm{e}^{-E_n\,(t-t_0)} (1+\mathcal{O}(\mathrm{e}^{-\Delta E_n(t-t_0)} )) \end{equation} allows us to obtain the energy values, where $\Delta E_n$ is the distance to other spectral values. In \cite{Blossier:2009kd} it was shown that for $t_0\leq t \leq 2 t_0$ the value of $\Delta E_n$ is the distance to the first neglected eigenenergy. In an actual computation the statistical fluctuations limit the possible values of $t_0$, and one estimates the fit range by identifying plateaus of the effective energy. The eigenvectors serve as fingerprints of the states, indicating their content in terms of the lattice interpolators. The quality of the results depends on the statistics and on the set of lattice operators. We studied the dependence on $t_0$: larger values of $t_0$ increase the noise and reduce the possible fit range, but the results are consistent. In the final analysis we use $t_0=1$ (with the origin at 0). The statistical error is determined with single-elimination jack-knife. For the fits to the eigenvalues \eq{eq:eigenvalues} we use a single exponential but check the stability with double exponential fits; we take the correlation matrix for the correlated fits from the complete sample \cite{Engel:2011aa}. As an example we show the eigenvalues for the four lowest states in the $\Sigma$ $1/2^+$ channel for three ensembles in Fig.~\ref{fig:sigma_1half_eigvals}. In general, we find very good agreement between the eigenstates of all considered ensembles.
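As an illustration of the variational method (not our analysis code; the energies and overlaps below are invented), note that for a correlation matrix built from exactly as many states as interpolators the generalized eigenvalues reproduce \eqref{eq:eigenvalues} without correction terms:

```python
import numpy as np
from scipy.linalg import eigh

# Synthetic two-state model (illustrative energies and overlaps, lattice units):
# C_ik(t) = sum_n V_in V_kn exp(-E_n t)
E = np.array([0.5, 0.9])
V = np.array([[1.0, 0.4],
              [0.3, 1.0]])

def C(t):
    return V @ np.diag(np.exp(-E * t)) @ V.T

t0 = 1
for t in (2, 3, 4):
    # generalized eigenvalue problem  C(t) u = lambda C(t0) u
    lam = np.sort(eigh(C(t), C(t0), eigvals_only=True))[::-1]
    E_eff = -np.log(lam) / (t - t0)   # exact here: basis size = number of states
    print(t, E_eff)
```

With real data the higher states are only truncated away, the eigenvalues carry the correction terms of \eqref{eq:eigenvalues}, and one fits plateaus of the effective energy instead.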
This supports the interpretation of the signal as having physical origin and in some cases serves to justify a fit relying on only a few points. \begin{figure}[t] \noindent\includegraphics[width=\columnwidth,clip]{A66.C72.A50.suu_b7.evals_positive_110000001100000000000000110000000000.eps} \caption{ Eigenvalues for the four lowest states in the $\Sigma$ $1/2^+$ channel for ensembles A50, C72 and A66 (top to bottom), covering the whole range of pion masses considered. } \label{fig:sigma_1half_eigvals} \end{figure} The set of interpolators used should be capable of approximating the eigenstates. On the other hand, too large a set may add statistical noise. In practice one therefore tries to reduce the number of interpolators to a sufficient subset. We analyze the dependence of the energy levels on the choice of interpolators and on the fit ranges for the eigenvalues. For the final result, we make a reasonable choice of interpolators and fit range and discuss the associated systematic error. For the extrapolation towards the physical pion mass we fit the leading order chiral behavior, i.e., linear in $m_\pi^2$. The Dirac and flavor structure is motivated by the quark model \cite{Isgur:1978xj,Isgur:1978wd}, see also \cite{Glozman:1995fu}. Within the relativistic quark model there have been many determinations of the hadron spectrum, based on confining potentials and different assumptions on the hyperfine interaction (see, e.g., \cite{Capstick:1986bm,Glozman:1997ag,Loring:2001kx}). The singlet, octet and decuplet attribution \cite{Glozman:1995fu} of the states has been evaluated based on such model calculations, e.g., in \cite{Melde:2008yr} (see also the summary in \cite{Beringer:1900zz}). We use sets of up to 24 interpolating fields in each quantum channel, combining quark sources of different smearing widths, different Dirac structures, and octet and decuplet flavor structures.
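The leading-order chiral extrapolation mentioned above is simply a two-parameter fit linear in $m_\pi^2$, evaluated at the physical point. A minimal sketch (all data values are invented for the illustration):

```python
import numpy as np

# Invented data: pion masses [MeV] loosely matching our ensembles,
# with a fake baryon mass that is exactly linear in m_pi^2
m_pi = np.array([255., 305., 330., 451., 516., 588., 596.])
m_B = 1000.0 + 0.001 * m_pi**2

# Leading-order chiral behavior: m_B = c0 + c1 * m_pi^2
c1, c0 = np.polyfit(m_pi**2, m_B, 1)

m_phys = 138.0   # physical pion mass, MeV
print("extrapolated: %.1f MeV" % (c0 + c1 * m_phys**2))
```

In the actual analysis the fit is of course correlated and weighted by the statistical errors of the energy levels; the sketch only shows the functional form.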
In Appendix \ref{sec:app_interpol} (Tables \ref{tab:baryon:interpol:1} to \ref{tab:baryon:interpol:3}) we summarize the structure and numbering of the baryon interpolators used in this study. \section{Results for Light Baryons}\label{sec:baryons:light} \subsection{Nucleon} \begin{figure}[h!] \noindent\includegraphics[width=\columnwidth,clip]{fit_mass_duu_b1_positive.eps}\\ \noindent\includegraphics[width=\columnwidth,clip]{fit_mass_duu_b1_negative.eps} \caption[Energy levels for nucleon spin 1/2]{ Energy levels for nucleon spin 1/2, positive (upper) and negative parity (lower). Black, red and blue (color online) denote a value of $\beta$ equal to 4.70, 4.65 and 4.58, respectively. The solid lines give the mean values of the fits in $m_\pi^2$, the dashed ones indicate the region of one $\sigma$.} \label{fig:nucleon_1half} \end{figure} \begin{figure} \noindent\includegraphics[width=\columnwidth,clip]{A50_duu_b1_vectors_negative_110000110000110000000.eps}\\ \noindent\includegraphics[width=\columnwidth,clip]{B70_duu_b1_vectors_negative_110000110000110000000.eps} \caption[Eigenvectors for nucleon spin 1/2 negative parity]{ Eigenvectors for nucleon spin 1/2 negative parity ground state and first excitation, ensemble A50 (upper) and B70 (lower). Note the different composition of the states at the different pion masses.} \label{fig:nucleon_1halfneg_vectors} \end{figure} \myparagraph{N:\;\;I(J^P)=\ensuremath{\frac{1}{2}}(\ensuremath{\frac{1}{2}}^+)} The nucleon (spin 1/2 and positive parity) ground state is the lightest baryon. We use an $n_{\text{op}}\times n_{\text{op}}$ correlation matrix with $n_{\text{op}}=6$ interpolators covering three Dirac structures and different levels of quark smearing, (1,2,9,10,19,20) (see Appendix A), and extract the four lowest eigenstates. For the ground state, the leading order chiral extrapolation yields a mass value roughly 7\% larger than the experimental $N$ (see Fig.~\ref{fig:nucleon_1half}).
Part of the deviation is caused by finite volume effects, which will be discussed in Section \ref{sec:results_vol}. The remaining small deviation might be caused by systematic errors from the scale setting (using $r_0=0.48$ fm) or by curvature due to higher order terms in the chiral extrapolation (for a discussion of the latter, see, e.g., \cite{Bali:2012qs}). Within the basis used in the variational method, the ground state is dominated by the first Dirac structure, with a contribution of the third one (cf.~Table \ref{tab:baryon:interpol:2}). We stress that all Dirac structures used here generate independent field operators which are not related by Fierz transformations. The first excitation in the nucleon channel should be the ``Roper resonance $N(1440)$'', notorious because it lies below the ground state in the corresponding negative parity channel. This ``reverse level ordering'' differs from the expectations of most simple quark models (see, e.g., \cite{Isgur:1977ef,Isgur:1978wd}). However, in our simulation the first excitation is $\mathcal{O}$(500 MeV) higher than the experimental value, and the levels are ordered conventionally with alternating parity. This is also the case in quenched and dynamical lattice simulations of other groups (e.g., \cite{Cohen:2009zk,Mahbub:2009cf,Bulava:2010yg}). Towards physical pion masses, the first excitation was reported to bend down significantly \cite{Mahbub:2010rm}; however, all lattice results are still closer to the $N(1710)$ than to the Roper resonance $N(1440)$, with large error bars. At present it is unclear to us what the reason for this behavior may be, although there are several suspects. Finite volume effects could shift the energy level up. For the ground state this shift is comparatively small (as discussed in Section \ref{sec:results_vol}), but it could be significantly larger for the excited state, which is generally expected to have a larger physical size.
(E.g., in quark models it is considered a radial excitation.) Unfortunately, the signal of this state is too weak in our study to allow for a reliable analysis of finite volume effects. Another interpretation may be that the interpolators used do not couple strongly enough to the Roper resonance and thus represent the physical content poorly; we might even miss the physical Roper state altogether. We observe a similar problem in the corresponding $\Lambda$ sector \cite{Engel:2012qp}. There, the first observed excitation is dominated by singlet interpolators (first Dirac structure), matching the $\Lambda(1810)$ (a singlet in the quark model). The Roper-like $\Lambda(1600)$ (an octet in the quark model) seems to be missing. Furthermore, the energy levels of the $p$-wave scattering state $\pi N$ could also influence the situation dramatically. Inclusion of such baryon-meson interpolators may be necessary for a better representation of the physical state. The resulting energy spectrum is related to the scattering phase shift in this channel \cite{Gockeler:2012yj,Hall:2012wz}. In small boxes and for broad resonances, the resulting energy levels are shifted significantly with respect to the noninteracting levels, and the resonance mass has to be extracted from the phase shift data. As the experimental Roper state is broad, this shift might be significant. After chiral extrapolation, we obtain two close excitations within roughly 1800-2000 MeV. One of these has a $\chi^2/$d.o.f.~larger than three (see Table \ref{tab:chi2baryons_pospar}), which may suggest a non-linear dependence on $m_\pi^2$. However, an extrapolation using only data with pion masses below 350 MeV misses the experimental Roper resonance as well. In several of our ensembles the excited energy levels overlap with each other within error bars.
At light pion masses, the first excitation is dominated by a combination of interpolators of the second Dirac structure; the second excitation is dominated by the first Dirac structure, with some contribution from the third one. Towards heavier quark masses, this level ordering is interchanged. Finally, we note that the results in the nucleon positive parity channel do not deviate significantly from the corresponding quenched simulations \cite{Burch:2006cc}. \myparagraph{N:\;\;I(J^P)=\ensuremath{\frac{1}{2}}(\ensuremath{\frac{1}{2}}^-)} In general, we find somewhat low energy levels in the negative parity baryon channels compared to experiment. This is also true for the nucleon spin 1/2 negative parity channel. We again use the set of interpolators (1,2,9,10,19,20) and find that the chiral extrapolation of the ground state comes out too low, while that of the first excitation ends up near the experimental ground state mass value (see Fig.~\ref{fig:nucleon_1half}). The two lowest states are usually identified with the $N(1535)$ and $N(1650)$. However, in this channel the $N\pi$ state is in $s$ wave. A naive estimate of its energy (neglecting the interaction energy) at pion masses above 300 MeV puts it close to the observed lowest energy level. Towards small pion masses the $N\pi$ energy level should fall more steeply than the nucleon mass towards the physical point. This suggests an (avoided) level crossing of the (negative parity) nucleon and the $N\pi$ state, with related energy level shifts, when moving from larger to smaller pion masses. Indeed, our results are compatible with such a picture. In \cite{Engel:2010my} we analyzed only a subset of the configurations available in this work. There, we argued that the eigenvectors show no indication of a level crossing in the range of pion masses between roughly 300 and 600 MeV. In the present work, we can monitor the eigenvectors down to pion masses of 250 MeV.
Furthermore, we use a larger basis (at the cost of introducing additional noise). We use the same quark smearing structures for the different Dirac structures, such that the eigenvectors give information about the content of the state without the need for additional normalization of the interpolators. We indeed find a significant change in the eigenvectors towards lighter pion masses. The eigenvectors are shown for ensembles A50 and B70 in Fig.~\ref{fig:nucleon_1halfneg_vectors}. In particular, the ground state is dominated by interpolator 2 ($\chi_1$) around $m_\pi=300$ MeV, and by interpolator 10 ($\chi_2$) above $m_\pi=500$ MeV. For the first excitation, interpolator 10 contributes more strongly at lighter pion masses than at heavier ones. This trend is also observed in the other ensembles and in partially quenched data. However, the picture does not clearly support an (avoided) level crossing scenario; a unique conclusion cannot be drawn. The observed behavior towards smaller quark masses was also discussed in \cite{Mahbub:2012ri} for the 2+1 flavor situation. A recent simulation including (for the first time) also $\pi N$ interpolators \cite{Lang:2012db} demonstrated significant changes in the spectrum. In that light, we may interpret the states obtained in the present study (with only 3-quark interpolators) as effective superpositions of resonance and meson-baryon states. We postpone further discussion of the content of the states to Section \ref{sec:results_vol}, where finite volume effects will be discussed.
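The naive noninteracting estimates invoked above are elementary: the $s$ wave $N\pi$ level sits at $m_N+m_\pi$, while the lowest $p$ wave level carries one unit of lattice momentum $2\pi/L$. A back-of-the-envelope sketch (the hadron masses here are invented, chosen only to lie roughly in the range of our lighter ensembles; $L$ and $a$ from Table~\ref{tab:ensembles}):

```python
import numpy as np

hbarc = 197.327                  # MeV fm
L = 16 * 0.1324                  # spatial extent in fm, 16^3 lattice at a = 0.1324 fm
p1 = 2 * np.pi * hbarc / L       # one unit of lattice momentum, MeV

m_pi, m_N = 255.0, 1100.0        # invented masses, roughly near ensemble A66

E_s = m_N + m_pi                                           # s wave: both at rest
E_p = np.sqrt(m_N**2 + p1**2) + np.sqrt(m_pi**2 + p1**2)   # p wave: back-to-back p1
print(p1, E_s, E_p)
```

On such lattices one unit of momentum is close to 600 MeV, so the noninteracting $s$ and $p$ wave levels are separated by several hundred MeV, which is why only the $s$ wave channels are expected to interfere with the low-lying spectrum here.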
\begin{figure} \noindent\includegraphics[width=\columnwidth,clip]{fit_mass_duu_b4_positive.eps}\\ \noindent\includegraphics[width=\columnwidth,clip]{fit_mass_duu_b4_negative.eps} \caption[Energy levels for nucleon spin 3/2]{ Energy levels for nucleon spin 3/2, positive (upper) and negative parity (lower).} \label{fig:nucleon_3half} \end{figure} \myparagraph{N:\;\;I(J^P)=\ensuremath{\frac{1}{2}}(\frac{3}{2}^+)} In the nucleon spin 3/2 positive parity channel, three states are known experimentally: the $N(1720)$, $N(1900)$ and $N(2040)$, where the latter needs confirmation \cite{Beringer:1900zz}. We use interpolators (1,4,5), and (1,2,3,4) for A66 and B70. The signal is rather noisy and the effective mass plateaus appear to fall towards large time separations. Sizable deviations from the chiral fit are observed in ensembles B70 and C77. Nevertheless, the chiral extrapolation of the ground state agrees well with the experimental $N(1720)$ (see Fig.~\ref{fig:nucleon_3half}). The first excitation overshoots the $N(1900)$ by about 2$\sigma$; the latter thus cannot be confirmed from this study. \myparagraph{N:\;\;I(J^P)=\ensuremath{\frac{1}{2}}(\frac{3}{2}^-)} In this channel, the $N(1520)$, $N(1700)$ and $N(1875)$ are experimentally established. Using interpolators (1,2,3,4), three states can be extracted in our simulation (see Fig.~\ref{fig:nucleon_3half}). The ground state extrapolates to a value between the $N(1520)$ and the $N(1700)$; the first and second excitations come out higher than the $N(1700)$ and $N(1875)$, respectively.
\subsection{Delta} \begin{figure} \noindent\includegraphics[width=\columnwidth,clip]{fit_mass_duu_b5_positive.eps}\\ \noindent\includegraphics[width=\columnwidth,clip]{fit_mass_duu_b5_negative.eps} \caption[Energy levels for $\Delta$ spin 1/2]{ Energy levels for $\Delta$ spin 1/2, positive (upper) and negative parity (lower).} \label{fig:delta_1half} \end{figure} \begin{figure} \noindent\includegraphics[width=\columnwidth,clip]{fit_mass_duu_b2_positive.eps}\\ \noindent\includegraphics[width=\columnwidth,clip]{fit_mass_duu_b2_negative.eps} \caption[Energy levels for $\Delta$ spin 3/2]{ Energy levels for $\Delta$ spin 3/2, positive (upper) and negative parity (lower).} \label{fig:delta_3half} \end{figure} \myparagraph{\Delta :\;\;I(J^P)=\frac{3}{2}(\ensuremath{\frac{1}{2}}^+)} Experimentally, the ground state $\Delta(1750)$ still needs confirmation, while the $\Delta(1910)$ is well established. In our simulation, using interpolators (1,4,5), we find two states, where the second eigenvalue decreases more slowly with the pion mass than the first one. The resulting crossing of the eigenvalues complicates the analysis, and one has to follow the eigenvector composition in order to properly assign the states. However, the plateaus can be fitted and energy levels extracted, albeit with sizable error bars. The chiral extrapolation of the ground state is compatible with both the $\Delta(1750)$ and the $\Delta(1910)$ within the error bars; the first excitation comes out higher (see Fig.~\ref{fig:delta_1half}). \myparagraph{\Delta :\;\;I(J^P)=\frac{3}{2}(\ensuremath{\frac{1}{2}}^-)} In the negative parity channel, the $\Delta(1620)$ is established, while the $\Delta(1900)$ needs confirmation. Using interpolators (1,2,3,4), we extract two states in this channel. The chiral extrapolation of the ground state hits the experimental $\Delta(1620)$ within $1.2 \sigma$ (see Fig.~\ref{fig:delta_1half}). The excitation extrapolates to the $\Delta(1900)$, however, with a large associated error.
\myparagraph{\Delta :\;\;I(J^P)=\frac{3}{2}(\frac{3}{2}^+)} The $\Delta(1232)$ is the lowest resonance of all spin 3/2 baryons. We find a good signal for two states; the chiral extrapolations of both come out too high compared to the experimental $\Delta(1232)$ and $\Delta(1600)$ (see Fig.~\ref{fig:delta_3half}). Finite volume effects are a possible origin of the discrepancy, as will be discussed in Section \ref{sec:results_vol}. A possible $p$-wave energy level of a coupling $N\pi$ state would lie between the two observed levels and is not seen. Note that the partially quenched data of this channel are used to set the strange quark mass parameter \cite{Engel:2011aa}. \myparagraph{\Delta :\;\;I(J^P)=\frac{3}{2}(\frac{3}{2}^-)} We find a good signal in the $J^P=3/2^-$ $\Delta$ channel in all seven ensembles (see Fig.~\ref{fig:delta_3half}). However, as in other negative parity baryon channels, the chiral extrapolation of the ground state comes out rather low compared to experiment. The results for the first excitation are inconclusive; the $\chi^2/$d.o.f.~of the chiral extrapolation fit is larger than three. \section{Results for Strange Baryons}\label{sec:baryons:strange} \subsection{Lambda} $\Lambda$ baryons come as flavor singlets or octets, or as mixtures thereof. Lattice simulations in this channel are of particular interest, as for years no state was observed in the vicinity of the prominent low-lying $\Lambda(1405)$ (see, e.g., \cite{Takahashi:2009ik,Takahashi:2009bu}). Only recent results show a level ordering compatible with experiment \cite{Menadue:2011pd}. Our results for the $\Lambda$ baryons have been discussed elsewhere \cite{Engel:2012qp}; here a few observations are summarized for completeness. We include interpolators of flavor singlet and octet type and three Dirac structures in all four $J^P=\frac{1}{2}^\pm$ and $\frac{3}{2}^\pm$ channels.
In both spin 1/2 channels and in the $\frac{3}{2}^+$ channel we find ground states extrapolating to the experimental values, whereas the $\frac{3}{2}^-$ ground state comes out too high. We confirm the $\Lambda(1405)$ and also find two low-lying excitations in the $\ensuremath{\frac{1}{2}}^-$ channel. Our results suggest that the $\Lambda(1405)$ is dominated by flavor singlet 3-quark content, but at $m_\pi\approx 255$ MeV octet interpolators contribute roughly 15--20\%, which may increase towards physical pion masses. The Roper-like (octet) state $\Lambda(1600)$ may couple too weakly to our 3-quark interpolator basis. We analyze the volume dependence and find that only the spin $\ensuremath{\frac{1}{2}}^+$ ground state shows a clear exponential dependence as expected for bound states. For all other discussed states, the volume dependence is either fairly flat or obscured by the statistical error. \subsection{Sigma} \begin{figure} \noindent\includegraphics[width=\columnwidth,clip]{fit_mass_suu_b7_positive.eps}\\ \noindent\includegraphics[width=\columnwidth,clip]{fit_mass_suu_b7_negative.eps} \caption[Energy levels for $\Sigma$ spin 1/2]{ Energy levels for $\Sigma$ spin 1/2, positive (upper) and negative parity (lower).} \label{fig:sigma_1half} \end{figure} \begin{figure} \noindent\includegraphics[width=\columnwidth,clip]{B70_suu_b7_neg_vectors_110000001100000000000000110000000000_3_4.eps}\\ \noindent\includegraphics[width=\columnwidth,clip]{B70_suu_b7_neg_vectors_110000001100000000000000110000000000_1_2.eps} \caption[Eigenvectors for $\Sigma$ spin 1/2 negative parity]{ Eigenvectors for $\Sigma$ spin 1/2 negative parity ground state and first excitation (upper) and second and third excitation (lower) for ensemble B70. Note the dominance of decuplet interpolators for the second excitation, which is a low lying state (see Fig.~\ref{fig:sigma_1half}). Details are discussed in the text.
} \label{fig:sigma_1halfneg_vectors} \end{figure} \begin{figure} \noindent\includegraphics[width=\columnwidth,clip]{fit_mass_suu_b8_positive.eps}\\ \noindent\includegraphics[width=\columnwidth,clip]{fit_mass_suu_b8_negative.eps} \caption[Energy levels for $\Sigma$ spin 3/2]{ Energy levels for $\Sigma$ spin 3/2, positive (upper) and negative parity (lower).} \label{fig:sigma_3half} \end{figure} \myparagraph{\Sigma: \;\;I(J^P)=1(\ensuremath{\frac{1}{2}}^+)} The $\Sigma(1189)$ ground state marks one of the lowest energy levels of the spin 1/2 baryons. At the $SU(3)$ flavor symmetric point, the octet and decuplet irreducible representations are orthogonal. Towards physical quark masses, $SU(3)_f$ is broken and hence octet and decuplet are allowed to mix. We use the set (1,2,9,10,25,26), which includes octet interpolators with Dirac structures $\chi_1$ and $\chi_2$ and decuplet interpolators in the basis. We use the four lowest levels for our analysis. The eigenvalues for three ensembles are shown in Fig.~\ref{fig:sigma_1half_eigvals}. The ground state signal is fairly good and the chiral extrapolation results in a value close to the experimental $\Sigma(1189)$ (see Fig.~\ref{fig:sigma_1half}). The first excitation comes out too high compared to the experimental $\Sigma(1660)$. Note the poor $\chi^2/$d.o.f.~of the corresponding chiral extrapolation, with a value larger than four (see Table \ref{tab:chi2baryons_pospar}). The energy levels of the second and third excitations appear close to the first excitation in our simulations. Monitoring the eigenvectors, we analyze the octet/decuplet content of the states. Within the finite basis employed, the ground state and the first excitation are strongly dominated by octet $\chi_1$. Of the second and third excitation, one is dominated by decuplet and the other by octet $\chi_2$ interpolators. The mixing of octet and decuplet interpolators is found to be negligible in the range of pion masses considered.
As we will see, this holds for most $\Sigma$ and $\Xi$ channels discussed here. \myparagraph{\Sigma: \;\;I(J^P)=1(\ensuremath{\frac{1}{2}}^-)} In the $\Sigma$ spin $1/2$ negative parity channel, the Particle Data Group \cite{Beringer:1900zz} lists two nearby low lying states, $\Sigma(1620)$ and $\Sigma(1750)$, and one higher lying resonance, the $\Sigma(2000)$. Of those, only $\Sigma(1750)$ is established. Again the set of interpolators (1,2,9,10,25,26) is used to extract the four lowest states from our simulations. We find three nearby low lying states, all of which extrapolate close to the experimental $\Sigma(1620)$ and $\Sigma(1750)$ (see Fig.~\ref{fig:sigma_1half}). Hence, our results confirm the $\Sigma(1620)$ and $\Sigma(1750)$ and might even suggest the existence of a third low lying resonance. However, as discussed for the N$(\ensuremath{\frac{1}{2}}^-)$ (and as in the case of the $\Lambda(\ensuremath{\frac{1}{2}}^-)$) there are several $s$ wave baryon-meson channels ($N\overline K$, $\Lambda \pi$, $\Sigma \pi$), which, for our values of the pion mass, have energies close to the ground state. We cannot exclude such contributions, although we did not include them in the interpolators. The eigenvectors of all four states are shown for ensemble B70 in Fig.~\ref{fig:sigma_1halfneg_vectors}. Within the employed basis, the ground state is dominated by octet $\chi_2$, the first excitation by octet $\chi_1$, the second excitation by decuplet and the third excitation again by octet $\chi_1$ interpolators. We want to emphasize the existence of a low lying state in this channel which is dominated by decuplet interpolators. This result also agrees with a recent quark model calculation \cite{Melde:2008yr}. Again, the mixing of octet and decuplet interpolators appears to be negligible in the range of pion masses considered.
\myparagraph{\Sigma: \;\;I(J^P)=1(\frac{3}{2}^+)} The Particle Data Group lists $\Sigma(1385)$, $\Sigma(1840)$ and $\Sigma(2080)$, where only the lightest is established. We use interpolators (2,3,10,11,12) and extract four energy levels (see Fig.~\ref{fig:sigma_3half}). The chiral extrapolations come out high compared to the experimental values. From the eigenvectors we find that the lowest two states are strongly dominated by decuplet, the second excitation by octet and the third excitation again by decuplet interpolators. \myparagraph{\Sigma: \;\;I(J^P)=1(\frac{3}{2}^-)} In this channel, three states are known experimentally: $\Sigma(1580)$, $\Sigma(1670)$ and $\Sigma(1940)$, where the lightest one needs confirmation. Using interpolators (2,3,10,11,12) we can extract four states. We find two low lying states and two higher excitations (see Fig.~\ref{fig:sigma_3half}). In general, the corresponding energy levels are high compared to experiment, thus not confirming the $\Sigma(1580)$. However, the mixing of octet and decuplet might increase towards light pion masses, complicating the chiral behavior. Analyzing the eigenvectors, we find that of the two low lying states, one is dominated by octet and the other one by decuplet interpolators. Of the third and fourth state, one is dominated by octet and the other by decuplet interpolators. Compared to the other $\Sigma$ channels, there appears a measurable mixing of octet and decuplet interpolators. We remark on the importance of decuplet interpolators for low-lying states in this channel.
\subsection{Xi} \begin{figure} \noindent\includegraphics[width=\columnwidth,clip]{fit_mass_uss_b7_positive.eps}\\ \noindent\includegraphics[width=\columnwidth,clip]{fit_mass_uss_b7_negative.eps}\\ \caption[Energy levels for $\Xi$ spin 1/2]{ Energy levels for $\Xi$ spin 1/2, positive (upper) and negative parity (lower).} \label{fig:xi_1half} \end{figure} \begin{figure} \noindent\includegraphics[width=\columnwidth,clip]{fit_mass_uss_b8_positive.eps}\\ \noindent\includegraphics[width=\columnwidth,clip]{fit_mass_uss_b8_negative.eps}\\ \caption[Energy levels for $\Xi$ spin 3/2]{ Energy levels for $\Xi$ spin 3/2, positive (upper) and negative parity (lower).} \label{fig:xi_3half} \end{figure} \myparagraph{\Xi: \;\; I(J^P)=\ensuremath{\frac{1}{2}}(\ensuremath{\frac{1}{2}}^+)} Experimentally, only one resonance, the $\Xi(1322)$, is known in the $\Xi$ spin 1/2 positive parity channel. We use interpolators (1,2,9,10,25,26) and extract the four lowest states. The ground state shows a fairly clean signal and its chiral extrapolation agrees nicely with the $\Xi(1322)$ (see Fig.~\ref{fig:xi_1half}). The three excitations come out much higher and the results at the lightest pion mass may suggest a significant chiral curvature towards physical pion masses. This is also expressed in the poor $\chi^2/$d.o.f., which is above five for the first excitation (see Table \ref{tab:chi2baryons_pospar}). Analyzing the eigenvectors, we find that -- within the finite basis used -- the ground state and the first excitation are strongly dominated by octet $\chi_1$. Of the second and third excitation, one is dominated by decuplet and the other one by octet $\chi_2$ interpolators. The mixing of octet and decuplet interpolators is found to be negligible in the range of simulated pion masses.
\myparagraph{\Xi: \;\; I(J^P)=\ensuremath{\frac{1}{2}}(\ensuremath{\frac{1}{2}}^-)} Experimentally, no state is known in the $\Xi$ spin $1/2^-$ channel, and no low-lying state is found in quark model calculations such as \cite{Melde:2008yr}. Nevertheless, using interpolators (1,2,9,10,25,26), we identify four states in our simulations (see Fig.~\ref{fig:xi_1half}). Of those, three are low lying and extrapolate to 1.7--1.9 GeV. Note that the $\chi^2/$d.o.f.~of the corresponding three chiral extrapolations is poor, larger than three in each case. The fourth state appears rather high at 2.7--2.9 GeV, but its extrapolation shows a nice $\chi^2/$d.o.f.~of order one. From the eigenvectors we find that the ground state is dominated by octet $\chi_2$, the first excitation by octet $\chi_1$, the second excitation by decuplet and the third excitation again by octet $\chi_1$ interpolators. We emphasize the existence of a low lying state in this channel which is dominated by decuplet interpolators, analogous to the $\Sigma$ spin $1/2$ negative parity channel. \myparagraph{\Xi: \;\;I(J^P)=\ensuremath{\frac{1}{2}}(\frac{3}{2}^+)} In this channel, one state, $\Xi(1530)$, is experimentally known and well established. We use interpolators (2,3,10,11,12) to extract four states from our simulation. All four states show a stable signal and the ground state energy level nicely extrapolates to the experimental $\Xi(1530)$ (see Fig.~\ref{fig:xi_3half}). The second and third energy levels appear rather close to each other and are compatible with a level crossing picture within pion masses of 300--500 MeV. Within the finite basis used, the ground state is dominated by decuplet interpolators, which agrees with quark model calculations. At light pion masses, the first excitation is dominated by octet and the second by decuplet interpolators. The third excitation is again dominated by decuplet interpolators.
\myparagraph{\Xi: \;\;I(J^P)=\ensuremath{\frac{1}{2}}(\frac{3}{2}^-)} The Particle Data Group \cite{Beringer:1900zz} lists one (established) state, $\Xi(1820)$, which is expected to be dominated by octet interpolators according to quark model calculations \cite{Beringer:1900zz}. Using interpolators (2,3,10,11,12), we extract four energy levels in this channel. We find two low lying states, the energy levels of which extrapolate close to the experimental $\Xi(1820)$ (see Fig.~\ref{fig:xi_3half}). Analyzing the eigenvectors, we find that of the two low lying states, one is dominated by octet and the other one by decuplet interpolators. The third state is dominated by octet and the fourth state by decuplet interpolators. Compared to the other $\Xi$ channels, there is a small but measurable mixing of octet and decuplet interpolators. \subsection{Omega} \begin{figure} \noindent\includegraphics[width=\columnwidth,clip]{fit_mass_duu_b5_positive_omega.eps}\\ \noindent\includegraphics[width=\columnwidth,clip]{fit_mass_duu_b5_negative_omega.eps}\\ \caption[Energy levels for $\Omega$ spin 1/2]{ Energy levels for $\Omega$ spin 1/2, positive (upper) and negative parity (lower).} \label{fig:omega_1half} \end{figure} \begin{figure} \noindent\includegraphics[width=\columnwidth,clip]{fit_mass_duu_b2_positive_omega.eps}\\ \noindent\includegraphics[width=\columnwidth,clip]{fit_mass_duu_b2_negative_omega.eps}\\ \caption[Energy levels for $\Omega$ spin 3/2]{ Energy levels for $\Omega$ spin 3/2, positive (upper) and negative parity (lower).} \label{fig:omega_3half} \end{figure} \myparagraph{\Omega: \;\;I(J^P)=0(\ensuremath{\frac{1}{2}}^+)} Experimentally, little is known about the $\Omega$ baryons. No state is known in the $J^P=1/2^+$ channel. Using the same interpolators as in the corresponding $\Delta$ channel, we find two states, whose energy levels are close for all simulated pion masses (see Fig.~\ref{fig:omega_1half}).
Both predicted resonances lie between 2.3 and 2.6 GeV. \myparagraph{\Omega: \;\;I(J^P)=0(\ensuremath{\frac{1}{2}}^-)} Again, there is no experimental information on the $J^P=\ensuremath{\frac{1}{2}}^-$ channel of the $\Omega$ baryons. We extract two states, the excitation being somewhat noisy. The chiral extrapolation of the ground state predicts a resonance around 2 GeV (see Fig.~\ref{fig:omega_1half}). Note the corresponding poor $\chi^2/$d.o.f. larger than four (see Table \ref{tab:chi2baryons_negpar}); its main contribution comes from the light energy level of one ensemble (C72). Since this behavior is not systematically observed in other channels, we attribute the deviation to statistical fluctuations. \myparagraph{\Omega: \;\;I(J^P)=0(\frac{3}{2}^+)} The $\Omega(1672)$ in the $J^P=3/2^+$ channel is known experimentally to very high accuracy. This is one of the reasons why this state is often used to define the strange quark mass parameters. This approach is also pursued in our setup. The determination of the parameters has been performed with a different scheme of scale setting compared to \cite{Engel:2011aa}. The Sommer parameter was identified with the experimental value for each ensemble, without extrapolation to physical pion masses. In that scheme the lowest energy level in the $\Omega$ $J^P=3/2^+$ channel was identified with the experimental $\Omega(1672)$ for each ensemble. This identification used preliminary data on the $16^3\times32$ lattices only. Here we present results relying on another scheme of scale setting \cite{Engel:2011aa}. Thus, the results shown here for the ground state serve as an additional cross check for the final setup of the simulation. The ground state energy level extrapolates close to the experimental $\Omega(1672)$, undershooting it slightly (see Figure \ref{fig:omega_3half}). The corresponding $\chi^2/$d.o.f.~is around two (see Table \ref{tab:chi2baryons_pospar}), half of it contributed by ensemble A66.
Using our final dataset and revisiting the tuning, we find that the strange quark mass of ensemble A66 is slightly too light while the mass from ensemble C64 is slightly too heavy. This creates a slope in the chiral extrapolation which causes the $\Omega(1672)$ (and to a lesser extent all baryons involving one or more strange quarks) to be lighter than a proper tuning would imply. A thorough discussion is difficult since also other systematics enter. We will provide some further discussion, also considering finite volume effects, in Section \ref{sec:results_vol}. \myparagraph{\Omega: \;\;I(J^P)=0(\frac{3}{2}^-)} In the $J^P=3/2^-$ channel of the $\Omega$ baryons there is no experimental evidence. We find two states, both with a fairly good signal, in our simulations. The chiral extrapolation of the ground state energy level predicts a resonance slightly above 2 GeV (see Figure \ref{fig:omega_3half}). \section{Volume Dependence of Baryon Energy Levels}\label{sec:results_vol} For resonance states in large volumes, there are two leading mechanisms of finite volume effects. For one, the spectral density of scattering states depends on the volume and distorts the energy spectrum through avoided level crossings. This mechanism is very important for the determination of resonance properties \cite{Luscher:1986pf,Luscher:1991cf}. The expected distortion from this effect is of $\mathcal O(\Gamma)$, where $\Gamma$ is the width of the resonance. Notice that the resonance width is expected to be quite a bit smaller than the physical one at unphysical pion masses. This justifies identifying the pattern of energy levels qualitatively with the spectrum of resonances. Therefore, this kind of finite volume effect is discussed only qualitatively for particular observables. A second volume effect comes from virtual pion exchange with the mirror image. 
The so-called ``pion wrapping around the universe'' causes an exponential correction to the energy level of the hadron \cite{Luscher:1985dn}. This mechanism can be discussed to higher orders in Chiral Perturbation Theory \cite{Colangelo:2005gd,Colangelo:2005cg,Meissner:2005ba}. Here we follow a fit form successfully applied in \cite{Durr:2008zz}, \begin{equation} E_h(L) = E_h(L=\infty) + c_h(m_\pi) \mathrm{e}^{-m_\pi L} (m_\pi L)^{-3/2} \;, \label{eq:vol} \end{equation} where $E_h$ is the energy level of the hadron at linear size $L$ of the lattice. It was suggested that $c_h(m_\pi)=c_{h,0} m_\pi^2$, which implies two fit parameters for each observable: $E_h(L=\infty)$ and $c_{h,0}$. The parameter $c_{h,0}$ is shared among different ensembles, which we exploit to make combined fits. We remark that the fit form used is a fairly simple one; however, considering the small number of different volumes, we have to rely on a method with few parameters. Due to the exponential behavior, finite volume effects are expected to become non-negligible for $m_\pi L\lesssim 4$. This region is entered in particular for the ensembles with small pion masses. Eq.~\eq{eq:vol} is valid only for asymptotically large volumes; power-like corrections are expected for $m_\pi L\lesssim 3$, and already earlier for higher excitations. For ensemble C77 ($m_\pi=330$ MeV) we generated data on lattices of size $12^3\times 24$, $16^3\times 32$, and $24^3\times 48$; for ensemble A66 ($m_\pi=255$ MeV) we have data for sizes $16^3\times 32$ and $24^3\times 48$. All these ensembles lie in the range $2.7<m_\pi L<6$, where the pion exchange should have a measurable effect described by Eq.~\eq{eq:vol}. We apply Eq.~\eq{eq:vol} separately to each observable. The data of sets A66 and C77 enter a combined fit, and the resulting parameters are used to extrapolate the data of all ensembles (for that observable) to infinite volume. Finally, the results are extrapolated to the physical light-quark mass.
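Schematically, such a combined fit amounts to a least-squares problem with one $E_h(L=\infty)$ per ensemble and a shared $c_{h,0}$. The following sketch uses made-up, dimensionless numbers; the function names and data are hypothetical and serve only to illustrate the structure of the fit, not our analysis code.

```python
import numpy as np
from scipy.optimize import least_squares

def E_of_L(L, m_pi, E_inf, c0):
    """Finite volume form: E(L) = E_inf + c0 * m_pi^2 * exp(-m_pi L) * (m_pi L)^(-3/2)."""
    x = m_pi * L
    return E_inf + c0 * m_pi**2 * np.exp(-x) * x**-1.5

def residuals(p, data):
    """Combined residuals: one E_inf per ensemble, c0 shared among ensembles."""
    E_A, E_C, c0 = p
    return [E_of_L(L, m, E_A if tag == "A" else E_C, c0) - E
            for (m, L, E, tag) in data]

# hypothetical exact data for two "ensembles" with different volumes
data = [(0.9, L, E_of_L(L, 0.9, 0.55, 2.0), "A") for L in (3.0, 4.0)]
data += [(1.1, L, E_of_L(L, 1.1, 0.60, 2.0), "C") for L in (3.0, 4.0, 5.0)]
fit = least_squares(residuals, x0=[0.5, 0.5, 1.0], args=(data,))
# fit.x recovers the two infinite volume energies and the shared c0
```

The shared parameter is what couples the ensembles; with only two or three volumes per ensemble, a fit with independent coefficients would be underdetermined.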
We focus on narrow or stable states with a good signal where clear finite volume effects can be expected. This is the case in particular for the ground states of the positive parity baryon channels. As mentioned in the previous section, the results for strange baryons are affected by our imperfect strange quark tuning. The tuning is of acceptable quality for 5 out of the 7 ensembles of size $16^3\times32$. We therefore omit the data from C64 and A66 for our final chiral fits for baryons with strangeness. As our tuning was done in finite volume, the resulting value for the $\Omega(1672)$ will still deviate from the physical value. Assuming a simple dependence on the number of strange quarks, this deviation can be translated to other states, and we provide this simple estimate as a second uncertainty when citing final values for the baryon masses. These values are also listed in Table \ref{tab:chi2_vol}. \subsection{Nucleon}\label{sec:results_vol:baryons:nucleon} \myparagraph{N:\;\;I(J^P)=\ensuremath{\frac{1}{2}}(\ensuremath{\frac{1}{2}}^+)} \begin{figure}[tb] \centering \includegraphics[width=\mylenC,clip]{systematic_error_duu_b1_positive.AC.eps} \caption[Systematic error of the nucleon mass] {Systematic error of the nucleon ground state energy level. The levels are shown for different choices of interpolators and fit ranges, labeled on the horizontal axis. E.g., ``A4'' denotes the set of interpolators ``A'' and a fit range for the eigenvalues from $t=4a$ to the onset of noise. ``A'' denotes the set of interpolators (1,2,9,10,19,20), ``B'' denotes (3,4,11,12,19,20). For each set of interpolators and fit range, results for small to large lattices (spatial size 16 and 24 for ensemble A66, and 12, 16, and 24 for C77) are shown from left to right, the corresponding infinite volume limit rightmost.
} \label{fig:nucleon_1half_pospar_vol_syserr} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=\mylenC,clip]{volume_duu_b1_positive.A.eps}\\ \vspace{5mm} \includegraphics[width=\mylenC,clip]{volume_duu_b1_positive.C.eps} \caption[Volume dependence of the nucleon mass] {Volume dependence of the nucleon mass for the set of interpolators (1,2,9,10,19,20) and $t_{\text{min}}=5a$ ((A,5) of Fig.~\ref{fig:nucleon_1half_pospar_vol_syserr}). } \label{fig:nucleon_1half_pospar_vol} \end{figure} \begin{figure}[htb] \includegraphics[width=\mylenC,clip]{fit_mass_duu_b1_positive.infvol.eps}\\ \includegraphics[width=\mylenC,clip]{fit_mass_duu_b2_positive.infvol.eps} \caption[Energy levels for the $N\,\ensuremath{\frac{1}{2}}^+$ and $\Delta\,\frac{3}{2}^+$ in the infinite volume limit] {Energy levels for the nucleon spin $\ensuremath{\frac{1}{2}}^+$ (upper) and $\Delta$ spin $\frac{3}{2}^+$ (lower) in the infinite volume limit. After infinite volume extrapolation ((A,5) of Fig.~\ref{fig:nucleon_1half_pospar_vol_syserr} and Fig.~\ref{fig:delta_3half_pospar_vol_syserr}, respectively), we extrapolate to physical pion masses. We obtain $m_N$=954(16) MeV and $m_\Delta$=1268(32) MeV, which both match the experimental values within roughly 1$\sigma$. } \label{fig:nucleondelta_pospar_infvol} \end{figure} The nucleon spin $1/2^+$ ground state shows a very clean signal. Our result for the finite box of roughly 2.2~fm deviates significantly from experiment (see Fig.~\ref{fig:nucleon_1half}). In order to estimate the systematic error, we compare two sets of interpolators, A=(1,2,9,10,19,20) and B=(3,4,10,11,19,20). Furthermore, we consider different starting values for the fit range for the eigenvalues. The results for the different ensembles and the corresponding infinite volume extrapolations are shown in Fig.~\ref{fig:nucleon_1half_pospar_vol_syserr}. Note that the result for (B,7) of ensemble A66 lies outside the plotted region.
We conclude that, for small volumes, late starting times of the fit have to be avoided. We find a clear dependence of the nucleon energy level on the lattice volume. For definiteness, we choose the set of interpolators A and $t_{\text{min}}=5a$ and the corresponding infinite volume extrapolation, which is shown in Fig.~\ref{fig:nucleon_1half_pospar_vol}. After infinite volume extrapolation of all ensembles with the extrapolation parameters determined from A66 and C77, we extrapolate to the physical pion mass, shown in Fig.~\ref{fig:nucleondelta_pospar_infvol} (upper). Our final result is $m_N=954(16)$ MeV (error is statistical only), which agrees with the experimental $N(939)$ within 1$\sigma$. \myparagraph{N:\;\;I(J^P)=\ensuremath{\frac{1}{2}}(\ensuremath{\frac{1}{2}}^-)} \begin{figure}[tb] \centering \includegraphics[width=\mylenC,clip]{systematic_error_duu_b1_negative.AC.eps} \caption[Systematic error of the nucleon $1/2^-$ ground state mass] {Systematic error of the nucleon spin $1/2^-$ ground state mass, analogous to Fig.~\ref{fig:nucleon_1half_pospar_vol_syserr}. ``A'' denotes the set of interpolators (5,11,17), ``B'' denotes (1,2,9,10,17,18). For each set of interpolators and fit range, results for small to large lattices are shown from left to right, the corresponding infinite volume limit rightmost. } \label{fig:nucleon_1half_negpar_vol_syserr} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=\mylenC,clip]{systematic_error_duu_b1_negative.AC.1E.eps} \caption[Systematic error of the nucleon $1/2^-$ first excited mass] {Systematic error of the nucleon spin $1/2^-$ first excited energy level, analogous to Fig.~\ref{fig:nucleon_1half_pospar_vol_syserr}. ``A'' denotes the set of interpolators (5,11,17), ``B'' denotes (1,2,9,10,17,18). For each set of interpolators and fit range, results for small to large lattices are shown from left to right, the corresponding infinite volume limit rightmost.
} \label{fig:nucleon_1half_negpar_1E_vol_syserr} \end{figure} In the nucleon spin $1/2^-$ channel we analyze the finite volume effects of the two lowest energy levels. Our results for the finite box of roughly 2.2 fm are a bit low compared to experiment (see Fig.~\ref{fig:nucleon_1half}). We show results for different volumes and infinite volume extrapolations for the ground state in Fig.~\ref{fig:nucleon_1half_negpar_vol_syserr} and for the first excitation in Fig.~\ref{fig:nucleon_1half_negpar_1E_vol_syserr}. Note that in some cases the data suggest negative finite volume corrections to the energy level. These are compatible with an attractive $s$ wave $\pi N$ scattering state. However, the pattern is not systematically observed in A66 and C77, neither with nor without assuming a level crossing (with changing pion mass). Hence the finite volume analysis does not provide clear information on the particle content of the two lowest energy levels in the nucleon spin $1/2^-$ channel. In fact, as has been shown recently in a study which includes meson-baryon interpolators \cite{Lang:2012db}, the spectrum should exhibit a sub-threshold energy level in addition to two levels close to the resonance position. Comparison of these results with the energy levels obtained here leads one to interpret the present eigenstates as superpositions of those states. \subsection{Delta Baryons}\label{sec:results_vol:baryons:delta} \begin{figure}[htb] \centering \includegraphics[width=\mylenC,clip]{systematic_error_duu_b2_positive.AC.eps} \caption[Systematic error of the $\Delta\,3/2^+$ mass] {Systematic error of the $\Delta$ spin $3/2^+$ mass, analogous to Fig.~\ref{fig:nucleon_1half_pospar_vol_syserr}. ``A'' denotes the set of interpolators (1,4,5), ``B'' denotes (1,5,8). For each set of interpolators and fit range, results for small to large lattices are shown from left to right, the corresponding infinite volume limit rightmost.
} \label{fig:delta_3half_pospar_vol_syserr} \end{figure} We show results and infinite volume extrapolations for different sets of interpolators and different fit ranges for the $\Delta$ spin $3/2^+$ ground state in Fig.~\ref{fig:delta_3half_pospar_vol_syserr}. Compared to the nucleon, the fit ranges of the eigenvalues are correspondingly short, and the results tend to fluctuate a bit more. The volume dependence appears to be the strongest of all observables considered. For definiteness, we choose the set of interpolators A and $t_{\text{min}}=5\,a$ and the corresponding infinite volume extrapolation, and note that the systematic error is of the order of the statistical error, or slightly larger. After infinite volume extrapolation of all ensembles, we extrapolate to the physical pion mass as shown in Fig.~\ref{fig:nucleondelta_pospar_infvol}. Our final result is $m_\Delta=1268(32)$ MeV, which agrees with the experimental $\Delta(1232)$ within roughly 1$\sigma$. We remark that the energy level in ensemble A66 appears low compared to other ensembles. This degrades the $\chi^2/$d.o.f.~of the chiral fit (see Table \ref{tab:chi2_vol}), but improves the comparison with experiment. \subsection{Omega Baryons}\label{sec:results_vol:baryons:omega} \begin{figure}[htb] \centering \includegraphics[width=\mylenC,clip]{systematic_error_duu_b2_pos.omega.AC.eps} \caption[Systematic error of the $\Omega$ $3/2^+$ mass] {Systematic error of the $\Omega$ spin $3/2^+$ mass, analogous to Fig.~\ref{fig:nucleon_1half_pospar_vol_syserr}. ``A'' denotes the set of interpolators (1,5,8), ``B'' denotes (1,3,4). For each set of interpolators and fit range, results for small to large lattices are shown from left to right, the corresponding infinite volume limit rightmost. For definiteness we choose (B,4).
} \label{fig:omega_3half_pospar_vol_syserr} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=\mylenC,clip]{fit_mass_duu_b2_positive.omega.infvol.eps} \caption[Energy levels for $\Omega$ in the infinite volume limit] {Energy levels for $\Omega$ spin $3/2^+$ in the infinite volume limit. After infinite volume extrapolation we extrapolate to physical pion masses, obtaining $m_\Omega$=1650(20) MeV. For discussion please refer to the text. } \label{fig:omega_3half_pospar_infvol} \end{figure} The $\Omega$ mass was used in the first place to define the strange quark mass parameter. We consider different sets of interpolators and fit ranges of the eigenvalues in order to estimate the corresponding systematic error. Figure \ref{fig:omega_3half_pospar_vol_syserr} shows some of the corresponding results. Here, we choose for definiteness interpolators (1,3,4) and a fit range starting from $t_{\text{min}}=4a$ for the ensembles with letter C and $t_{\text{min}}=6a$ for the ensembles with letter A; we note that the corresponding systematic error appears to be somewhat smaller than the statistical one. We extrapolate the energy levels of all ensembles to infinite volume. In the final extrapolation to physical light-quark masses (see Fig.~\ref{fig:omega_3half_pospar_infvol}), we omit ensembles A66 and C64, because they show a slight mistuning in the strange quark mass. This strategy is also pursued for other strange baryons in the infinite volume limit. We obtain $m_\Omega=1650(20)$ MeV, which agrees with the experimental $\Omega(1672)$ within 1.1 $\sigma$. The slight deviation originates from the quark mass tuning in finite volume using only partial statistics.
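The chiral extrapolations quoted throughout are linear in $m_\pi^2$ (leading order). As a minimal illustration with hypothetical numbers (the masses below are invented for demonstration, not our data), such an extrapolation amounts to:

```python
import numpy as np

def chiral_extrapolate(m_pi, E, m_pi_phys=0.138):
    """Fit E = a + b*m_pi^2 and evaluate at the physical pion mass (GeV units assumed)."""
    b, a = np.polyfit(np.asarray(m_pi)**2, np.asarray(E), 1)
    return a + b * m_pi_phys**2

# hypothetical energy levels at three simulated pion masses (GeV)
m_pi = [0.255, 0.330, 0.596]
E = [0.9 + 0.8 * m**2 for m in m_pi]
E_phys = chiral_extrapolate(m_pi, E)  # evaluates the linear fit at m_pi = 138 MeV
```

In practice the fit is of course weighted by the statistical errors of the energy levels and the quality is monitored via the $\chi^2/$d.o.f.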
\begin{figure}[htb] \centering \includegraphics[width=\mylenC,clip]{fit_mass_suu_uss_b7_positive.infvol.eps} \caption[Energy levels for $\Sigma$ $\ensuremath{\frac{1}{2}}^+$ (upper pane) and $\Xi$ $\ensuremath{\frac{1}{2}}^+$ (lower pane) in the infinite volume limit] {Energy levels for the $\Sigma$ spin $\ensuremath{\frac{1}{2}}^+$ (upper pane) and the $\Xi$ spin $\ensuremath{\frac{1}{2}}^+$ (lower pane) ground states in the infinite volume limit. After infinite volume extrapolation we extrapolate to physical pion masses. We obtain $m_\Sigma$=1176(19)(+07)~MeV and $m_\Xi$=1299(16)(+15)~MeV. } \label{fig:sigmaxi_1half_pospar_infvol} \end{figure} \subsection{Sigma Baryons}\label{sec:results_vol:baryons:sigma} \begin{figure}[htb] \centering \includegraphics[width=\mylenC,clip]{systematic_error_suu_b7_positive.AC.eps} \caption[Systematic error of the $\Sigma$ $1/2^+$ mass] {Systematic error of the $\Sigma$ spin $\ensuremath{\frac{1}{2}}^+$ mass, analogous to Fig.~\ref{fig:nucleon_1half_pospar_vol_syserr}. ``A'' denotes the set of interpolators (1,2,9,10,25,26), ``B'' denotes (2,3,10,11,19,20,26,27). For each set of interpolators and fit range, results for small to large lattices are shown from left to right, the corresponding infinite volume limit rightmost. } \label{fig:sigma_1half_pospar_vol_syserr} \end{figure} In the $\Sigma$ spin $1/2^+$ channel we apply the sets of interpolators A=(1,2,9,10,25,26) and B=(2,3,10,11,19,20,26,27) and different fit ranges to discuss the volume dependence of the ground state (see Fig.~\ref{fig:sigma_1half_pospar_vol_syserr}). The volume dependence is found to be comparable in size to the one of the nucleon ground state energy level. Towards larger fit ranges the results start to scatter; nevertheless, they are conclusive and the systematic error is of the order of the statistical one.
We choose interpolators A and $t_{\text{min}}=6a$, and show the results in the infinite volume limit in the upper pane of Fig.~\ref{fig:sigmaxi_1half_pospar_infvol}. Our final result is $m_\Sigma=1176(19)(+07)$ MeV (the second error is a correction estimate based on the slight mistuning of the strange quark mass), which is compatible with the experimental $\Sigma$ around 1193 MeV. In the $\Sigma$ spin $3/2^+$ channel we again use interpolators (2,3,10,11,12). The results are shown in the upper pane of Fig.~\ref{fig:sigmaxi_3half_pospar_infvol}. Here our final result is $m_{\Sigma}=1431(25)(+07)$ MeV, which is somewhat larger than the experimental value of 1384 MeV. \subsection{Xi Baryons}\label{sec:results_vol:baryons:xi} \begin{figure}[htb] \centering \includegraphics[width=\mylenC,clip]{systematic_error_uss_b7_positive.AC.eps} \caption[Systematic error of the $\Xi$ $1/2^+$ mass] {Systematic error of the $\Xi$ spin $1/2^+$ mass, analogous to Fig.~\ref{fig:nucleon_1half_pospar_vol_syserr}. ``A'' denotes the set of interpolators (1,2,9,10,25,26), ``B'' denotes (2,3,10,11,19,20,26,27). For each set of interpolators and fit range, results for small to large lattices are shown from left to right, the corresponding infinite volume limit rightmost. } \label{fig:xi_1half_pospar_vol_syserr} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=\mylenC,clip]{fit_mass_suu_uss_b8_positive.infvol.eps} \caption[Energy levels for $\Sigma$ $3/2^+$ (upper pane) and $\Xi$ $3/2^+$ (lower pane) in the infinite volume limit] {Energy levels for $\Sigma$ spin $3/2^+$ (upper pane) and $\Xi$ spin $3/2^+$ (lower pane) ground states in the infinite volume limit. After infinite volume extrapolation we extrapolate to physical pion masses. We obtain $m_{\Sigma}$=1431(25)(+07) MeV and $m_{\Xi}$=1540(22)(+15) MeV.
} \label{fig:sigmaxi_3half_pospar_infvol} \end{figure} We consider the sets of interpolators A=(1,2,9,10,25,26) and B=(2,3,10,11,19,20,26,27) and different fit ranges to discuss the volume dependence of the $\Xi$ spin $1/2^+$ ground state (see Fig.~\ref{fig:xi_1half_pospar_vol_syserr}). Again, the results are conclusive, and the systematic error is well bounded. We choose interpolators A and $t_{\text{min}}=6a$, and show the results for infinite volume in the lower pane of Fig.~\ref{fig:sigmaxi_1half_pospar_infvol}. Our final result is $m_\Xi=1299(16)(+15)$ MeV, which is again slightly lower than the experimental $\Xi$ around 1317 MeV. For the $\Xi$ spin $3/2^+$ ground state we use interpolators (2,3,10,11,12). The infinite volume results are shown in the lower pane of Fig.~\ref{fig:sigmaxi_3half_pospar_infvol}. Our result is $m_{\Xi}=1540(22)(+15)$~MeV, which is slightly larger than the experimental value of 1532~MeV. \section{Summary\label{summary}} \begin{figure} \noindent\includegraphics[width=\columnwidth,clip]{collection_baryons_pospar.eps}\\ \noindent\includegraphics[width=\columnwidth,clip]{collection_baryons_negpar.eps} \caption[Energy levels for baryons: Summary]{ Energy levels for positive parity (top) and negative parity baryons (bottom). All values are obtained by chiral extrapolation linear in the pion mass squared. Horizontal lines or boxes represent experimentally known states, dashed lines indicate poor evidence, according to \cite{Beringer:1900zz}. The statistical uncertainty of our results is indicated by bands of 1$\sigma$, that of the experimental values by boxes of 1$\sigma$. The strange quarks are implemented in the valence approximation. Grey symbols denote a poor $\chi^2$/d.o.f.~of the chiral fits (see Tables \ref{tab:chi2baryons_pospar} and \ref{tab:chi2baryons_negpar}).
} \label{fig:baryons_summary} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=\columnwidth,clip]{collection_baryons.infvol.eps} \caption[Energy levels in infinite volume limit: Summary] {Energy levels of baryons in the infinite volume limit at physical pion mass. Horizontal lines and boxes represent experimentally known states \cite{Beringer:1900zz}. The statistical uncertainty of our results is indicated by bands of 1$\sigma$. } \label{fig:infvol_summary} \end{figure} We have derived results for the low lying energy levels in all baryon channels (spin 1/2 and 3/2, both parities) for baryons with light and strange valence quark content. The light quarks were included as dynamical quarks in the generation of gauge configurations by the Hybrid Monte Carlo method. The quarks were implemented as Chirally Improved quarks, the pion masses range from 255 to 596 MeV. Figure \ref{fig:baryons_summary} shows our results for the extrapolation (leading order Chiral Perturbation Theory linear in $m_\pi^2$) of the finite volume energy levels to physical pion mass. We find good agreement of the ground state energy levels with the experimental values, where available. In some cases (e.g., in the $\Omega$ and the $\Xi$ sectors) our results suggest the existence of yet unobserved resonance states. We use 3-quark interpolators for the baryons throughout and find no signal for a coupling to dynamically generated meson-baryon states in $p$- and $d$-wave channels. This is not so clear for the $s$ wave channels. These show several energy levels close to ground states in the $\ensuremath{\frac{1}{2}}^-$ channels. In these cases there could be mixing with the $s$ wave meson-baryon sectors. We want to mention that for all our ensembles (i.e., over the whole pion mass range) the Gell-Mann--Okubo formula \cite{GellMann:1961ky,Okubo:1961jc} is fulfilled with high precision. 
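The precision of this relation is easy to check numerically. As a minimal sketch (plain Python, using only the central values of the spin 1/2 positive parity ground-state masses at the physical pion mass from Table~\ref{tab:chi2baryons_pospar}; errors are ignored here for illustration), the octet mass combination entering the Gell-Mann--Okubo relation comes out well below the 3\% level:

```python
# Gell-Mann--Okubo check: |2 M_N + 2 M_Xi - M_Sigma - 3 M_Lambda| /
# (2 M_N + 2 M_Xi + M_Sigma + 3 M_Lambda), with central values (MeV)
# of the spin 1/2 positive parity ground states at the physical pion
# mass taken from Table tab:chi2baryons_pospar.
m_N, m_Lambda, m_Sigma, m_Xi = 1000.0, 1149.0, 1216.0, 1303.0

num = abs(2 * m_N + 2 * m_Xi - m_Sigma - 3 * m_Lambda)
den = 2 * m_N + 2 * m_Xi + m_Sigma + 3 * m_Lambda
gmo = num / den
print(f"GMO violation: {gmo:.4f}")  # ~ 0.006, comfortably below 0.03
```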
The values of the combination of the spin 1/2 positive parity octet ground state masses obey \begin{equation} \left| \frac{2 M_N+2 M_\Xi - M_\Sigma-3 M_\Lambda }{2 M_N+2 M_\Xi + M_\Sigma+3 M_\Lambda} \right|<0.03 \end{equation} for all pion masses studied here. We analyze the flavor symmetry content by identifying the singlet/octet/decuplet contributions. For the ground states agreement with the expectations from the quark model is found. In the $\ensuremath{\frac{1}{2}}^+$ nucleon channel the first excitation is considerably higher than the Roper resonance and one possible interpretation is that the physical state couples very weakly to our interpolators. This may also be the case in the $\Lambda\ensuremath{\frac{1}{2}}^{+}$ channel, where the first excitation is dominated by singlet interpolators matching the $\Lambda(1810)$ (singlet in the quark model) and the Roper-like $\Lambda(1600)$ (octet in the quark model) seems to be missing. We study the systematic errors due to the final choice of interpolator sets and fit ranges and we also perform infinite volume extrapolations for the lowest energy levels. Because a slight mistuning of the strange quark mass is identified in two of the ensembles, we omit them in the final extrapolation to the physical pion mass. Remaining small deviations are expected to stem from systematic effects which cannot be identified uniquely given our limited dataset at a single lattice spacing with 2 dynamical quark flavors. In general, however, our results in the infinite volume limit compare favorably with experiment, as shown in Fig.~\ref{fig:infvol_summary}. \acknowledgments We would like to thank Elvira Gamiz, Christof Gattringer, Leonid Y.~Glozman, Markus Limmer, Willibald Plessas, Helios Sanchis-Alepuz, Mario Schr\"ock and Valentina Verduci for valuable discussions. The calculations have been performed on the SGI Altix 4700 of the Leibniz-Rechenzentrum Munich and on local clusters at UNI-IT at the University of Graz.
We thank these institutions for providing support. G.P.E.~was partially supported by the MIUR--PRIN contract 20093BM-NPR. D.M.~acknowledges support by the Natural Sciences and Engineering Research Council of Canada (NSERC) and G.P.E.~and A.S.~acknowledge support by the DFG project SFB/TR-55. Fermilab is operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy. \begin{appendix} \section{Tables of Baryon Interpolators} \label{sec:app_interpol} All interpolators are projected to definite parity using the projector \begin{equation} P^\pm=\frac{1}{2}(\mathds{1} \pm \gamma_t) \;. \label{eq:parproj} \end{equation} The resulting correlation matrices of positive and negative parity ($\pm$), \begin{equation} C^\pm_{ij}(t) = \pm Z_{ij}^\pm \mathrm{e}^{-tE^\pm} \pm Z_{ij}^\mp \mathrm{e}^{-(T-t)E^\mp} , \end{equation} are combined into the correlation matrices \begin{equation} C(t) = \frac{1}{2} \left( C^+(t) - C^-(T-t) \right) \; , \end{equation} which are then used in the variational method. All Rarita-Schwinger fields (spin 3/2 interpolators of Table \ref{tab:baryon:interpol:1}) are projected to definite spin 3/2 using the continuum formulation of the Rarita-Schwinger projector \cite{Lurie:1968} \begin{equation} P^{3/2}_{\mu \nu} (\vec{p}) = \delta_{\mu \nu} - \frac{1}{3} \gamma_{\mu} \gamma_{\nu} - \frac{1}{3p^2} ( \gamma \cdot p \, \gamma_{\mu} p_{\nu} + p_{\mu}\gamma_{\nu} \gamma \cdot p) \;. \label{eq:RaritaSchwinger} \end{equation} The baryon interpolators used in this work are detailed in Tables \ref{tab:baryon:interpol:1}, \ref{tab:baryon:interpol:2} and \ref{tab:baryon:interpol:3}. Table \ref{tab:baryon:interpol:1} shows the flavor structure for all interpolators. For the spin 1/2 channels of the nucleon, $\Sigma$, $\Xi$ and $\Lambda$, we use the three different Dirac structures $\chi^{(i)}=(\Gamma_1^{(i)},\Gamma_2^{(i)}),(i=1,2,3)$, listed in Table \ref{tab:baryon:interpol:2}.
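As a small numerical illustration of Eq.~(\ref{eq:parproj}) (a sketch assuming NumPy and, as an illustrative basis choice, the Dirac representation with $\gamma_t=\mathrm{diag}(1,1,-1,-1)$; the identities themselves are representation independent), the parity projectors satisfy the usual projector algebra:

```python
import numpy as np

# Parity projectors P^{pm} = (1 +- gamma_t)/2 in the Dirac representation,
# where gamma_t is diagonal; the checks below hold in any representation
# with gamma_t^2 = 1.
gamma_t = np.diag([1.0, 1.0, -1.0, -1.0])
I4 = np.eye(4)

P_plus = 0.5 * (I4 + gamma_t)
P_minus = 0.5 * (I4 - gamma_t)

assert np.allclose(P_plus @ P_plus, P_plus)    # idempotent
assert np.allclose(P_minus @ P_minus, P_minus)
assert np.allclose(P_plus @ P_minus, 0.0)      # mutually orthogonal
assert np.allclose(P_plus + P_minus, I4)       # complete
print(np.trace(P_plus), np.trace(P_minus))     # 2.0 2.0: two spin components each
```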
Details about the quark smearings in the interpolators are found in Table \ref{tab:baryon:interpol:3}. The name convention of all baryon interpolators is determined by Tables \ref{tab:baryon:interpol:2} and \ref{tab:baryon:interpol:3}. In the $\Lambda$ channels, singlet and octet interpolators are collected in one set. We assign to the first octet interpolator the number after the last singlet interpolator, and continue to count for the remaining octet interpolators. In the $\Sigma$ and $\Xi$ channels, the same holds for octet and decuplet interpolators. In the continuum, the actual number of independent fields is reduced by Fierz identities. In particular, there are no non-vanishing point-like interpolators for $\Delta(\ensuremath{\frac{1}{2}})$ and singlet $\Lambda(\frac{3}{2})$. However, using differently smeared quarks in the construction of interpolators, we do access independent information and find good signals for the singlet $\Lambda(\frac{3}{2})$ propagation. \begin{table*}[!] \begin{ruledtabular} \begin{tabular}{cccc} Spin & Flavor channel & Name & Interpolator \\ \hline $\frac{1}{2}$ & Nucleon & $N_{1/2}^{(i)}$ & $\epsilon_{abc}\, \Gamma_1^{(i)}\, u_a\, \big( u_b^T\, \Gamma_2^{(i)}\, d_c - d_b^T\, \Gamma_2^{(i)}\, u_c \big) $ \\ $\frac{1}{2}$ & Delta & $\Delta_{1/2}$ & $\epsilon_{abc}\, \gamma_i \gamma_5 u_a\, \big(u_b^T\, C\, \gamma_i\, u_c \big) $ \\ $\frac{1}{2}$ & Sigma octet & $\Sigma_{1/2}^{(8,i)}$ & $\epsilon_{abc}\, \Gamma_1^{(i)}\, u_a\, \big( u_b^T\, \Gamma_2^{(i)}\, s_c - s_b^T\, \Gamma_2^{(i)}\, u_c \big) $ \\ $\frac{1}{2}$ & Sigma decuplet & $\Sigma_{1/2}^{(10,i)}$ & $\epsilon_{abc}\, \gamma_i \gamma_5 u_a\, \big(u_b^T\, C\, \gamma_i\, s_c - s_b^T\, C\, \gamma_i\, u_c \big) $ \\ $\frac{1}{2}$ & Xi octet & $\Xi_{1/2}^{(8,i)}$ & $\epsilon_{abc}\, \Gamma_1^{(i)}\, s_a\, \big( s_b^T\, \Gamma_2^{(i)}\, u_c - u_b^T\, \Gamma_2^{(i)}\, s_c \big) $ \\ $\frac{1}{2}$ & Xi decuplet & $\Xi_{1/2}^{(10,i)}$ & $\epsilon_{abc}\, \gamma_i \gamma_5 s_a\, 
\big(s_b^T\, C\, \gamma_i\, u_c - u_b^T\, C\, \gamma_i\, s_c \big) $ \\ $\frac{1}{2}$ & Lambda singlet & $\Lambda_{1/2}^{(1,i)}$ & $\epsilon_{abc} \Gamma^{(i)}_1 u_a ( d_b^T \Gamma^{(i)}_2 s_c - s_b^T \Gamma^{(i)}_2 d_c) $ \\ & & & $\,+ \, \mbox{cyclic permutations of}\; u, d, s $ \\ $\frac{1}{2}$ & Lambda octet & $\Lambda_{1/2}^{(8,i)}$ & $\epsilon_{abc} \Big[ \Gamma^{(i)}_1 s_a ( u_b^T \Gamma^{(i)}_2 d_c - d_b^T \Gamma^{(i)}_2 u_c ) $ \\ & & & $\, + \; \Gamma^{(i)}_1 u_a ( s_b^T \Gamma^{(i)}_2 d_c) - \Gamma^{(i)}_1 d_a ( s_b^T \Gamma^{(i)}_2 u_c) \Big] $ \\ $\frac{1}{2}$ & Omega & $\Omega_{1/2}$ & $\epsilon_{abc}\, \gamma_i \gamma_5 s_a\, \big(s_b^T\, C\, \gamma_i\, s_c \big) $ \\ \hline $\frac{3}{2}$ & Nucleon & $N_{3/2}^{(i)}$ & $\epsilon_{abc}\, \gamma_5 \, u_a\, \big( u_b^T\, C \gamma_5 \gamma_i\, d_c - d_b^T\, C \gamma_5 \gamma_i\, u_c \big) $ \\ $\frac{3}{2}$ & Delta & $\Delta_{3/2}^{(i)}$ & $\epsilon_{abc}\, u_a\, \big(u_b^T\, C\, \gamma_i\, u_c \big) $ \\ $\frac{3}{2}$ & Sigma octet & $\Sigma_{3/2}^{(8,i)}$ & $\epsilon_{abc}\, \gamma_5 \, u_a\, \big( u_b^T\, C \gamma_5 \gamma_i\, s_c - s_b^T\, C \gamma_5 \gamma_i\, u_c \big) $ \\ $\frac{3}{2}$ & Sigma decuplet & $\Sigma_{3/2}^{(10,i)}$ & $\epsilon_{abc}\, u_a\, \big( u_b^T\, C \gamma_i\, s_c - s_b^T\, C \gamma_i\, u_c \big) $ \\ $\frac{3}{2}$ & Xi octet & $\Xi_{3/2}^{(8,i)}$ & $\epsilon_{abc}\, \gamma_5 \, s_a\, \big( s_b^T\, C \gamma_5 \gamma_i\, u_c - u_b^T\, C \gamma_5 \gamma_i\, s_c \big) $ \\ $\frac{3}{2}$ & Xi decuplet & $\Xi_{3/2}^{(10,i)}$ & $\epsilon_{abc}\, s_a\, \big( s_b^T\, C \gamma_i\, u_c - u_b^T\, C \gamma_i\, s_c \big) $ \\ $\frac{3}{2}$ & Lambda singlet & $\Lambda_{3/2}^{(1,i)}$ & $\epsilon_{abc} \gamma_5 u_a ( d_b^T C \gamma_5 \gamma_i s_c - s_b^T C \gamma_5 \gamma_i d_c) $ \\ & & & $\,+ \, \mbox{cyclic permutations of}\; u, d, s $ \\ $\frac{3}{2}$ & Lambda octet & $\Lambda_{3/2}^{(8,i)}$ & $\epsilon_{abc} \Big[ \gamma_5 s_a ( u_b^T C \gamma_5 \gamma_i d_c - d_b^T C \gamma_5 \gamma_i
u_c ) $ \\ & & & $\, + \; \gamma_5 u_a ( s_b^T C \gamma_5 \gamma_i d_c) - \gamma_5 d_a ( s_b^T C \gamma_5 \gamma_i u_c)\Big] $ \\ $\frac{3}{2}$ & Omega & $\Omega_{3/2}^{(i)}$ & $\epsilon_{abc}\, s_a\, \big(s_b^T\, C\, \gamma_i\, s_c \big) $ \\ \end{tabular} \end{ruledtabular} \caption[Baryon interpolators: Flavor structure]{ Baryon interpolators: Flavor structure. The possible choices for the Dirac matrices $\Gamma_{1,2}^{(i)}$ in the spin 1/2 channels are listed in Table \ref{tab:baryon:interpol:2}. All interpolators are projected to definite parity according to Eq.~\eq{eq:parproj}. All spin 3/2 interpolators include the Rarita-Schwinger projector, according to Eq.~\eq{eq:RaritaSchwinger}, which is suppressed for clarity in the table. $C$ denotes the charge conjugation matrix, $\gamma_i$ the spatial Dirac matrices and $\gamma_t$ the Dirac matrix in time direction. Spin 1/2 and spin 3/2 channels are separated by a solid line. Summation convention applies for repeated indices, and in the case of spin 3/2 observables, the open Lorentz index (after spin projection) is summed after taking the expectation value of correlation functions. } \label{tab:baryon:interpol:1} \end{table*} \begin{table}[!] \begin{ruledtabular} \begin{tabular}{ccccc} i & $\Gamma^{(i)}_1$ & $\Gamma^{(i)}_2$ & \multicolumn{2}{c}{Numbering of associated interpolators} \\ & & & $N_{1/2},\Lambda_{1/2}^1,\Sigma_{1/2}^8,\Xi_{1/2}^8$ & $\Lambda_{1/2}^8,\Sigma_{1/2}^{10},\Xi_{1/2}^{10}$ \\ \hline $1$ & $\mathds{1}$ & $C\gamma_5$ & 1-8 & 25-32 \\ $2$ & $\gamma_5$ & $C$ & 9-16 & 33-40 \\ $3$ & $i\mathds{1}$ & $C\gamma_t\gamma_5$ & 17-24 & 41-48 \\ \end{tabular} \end{ruledtabular} \caption[Baryon interpolators: Dirac structure]{ Baryon interpolators: Dirac structures used for the spin 1/2 nucleon, $\Lambda$, $\Sigma$ and $\Xi$ interpolators, according to Table \ref{tab:baryon:interpol:1}. The naming convention (numbering) for associated interpolators in the different channels is given as well.
The subscripts denote the spin, the superscripts the flavor irreducible representation. } \label{tab:baryon:interpol:2} \end{table} \begin{table}[!] \begin{ruledtabular} \begin{tabular}{c|cccc} quark & \multicolumn{4}{c}{Numbering of associated interpolators} \\ smearing & ~$\Delta_{1/2},\Delta_{3/2}$~ &$\Lambda_{3/2}^8,$ & ~$N_{1/2},\Lambda_{1/2}^1,$~ & $\Lambda_{1/2}^8,$ \\ & ~$\Omega_{1/2},\Omega_{3/2},$~ &$\Sigma_{3/2}^{10}$, & $\Sigma_{1/2}^8,\Xi_{1/2}^8$ & ~$\Sigma_{1/2}^{10},\Xi_{1/2}^{10}$ \\ & $N_{3/2},\Lambda_{3/2}^1$ &$\Xi_{3/2}^{10}$ & \\ & $\Sigma_{3/2}^8,\Xi_{3/2}^8$ &&& \\ \hline (nn)n & 1 & ~9 & 1,9,17 & 25,33,41 \\ (nn)w & 2 & 10 & 2,10,18 & 26,34,42 \\ (nw)n & 3 & 11 & 3,11,19 & 27,35,43 \\ (nw)w & 4 & 12 & 4,12,20 & 28,36,44 \\ (wn)n & 5 & 13 & 5,13,21 & 29,37,45 \\ (wn)w & 6 & 14 & 6,14,22 & 30,38,46 \\ (ww)n & 7 & 15 & 7,15,23 & 31,39,47 \\ (ww)w & 8 & 16 & 8,16,24 & 32,40,48 \\ \end{tabular} \end{ruledtabular} \caption[Baryon interpolators: Quark smearing types]{ Baryon interpolators: Quark smearing types and naming convention for the interpolators in the different channels. The subscripts denote the spin, the superscripts the flavor irreducible representation. The brackets in the first row symbolize the diquark part. Due to Fierz identities, some of the interpolators may be linearly dependent. } \label{tab:baryon:interpol:3} \end{table} \section{Tables of Energy Levels and $\chi^2$}\label{sec:app_chisq} We give the results of our extrapolation (linear in $m_\pi^2$) to the physical pion mass together with the associated value of $\chi^2$/d.o.f.~in Tables~\ref{tab:chi2baryons_pospar} to \ref{tab:chi2_vol}. \begin{table} \begin{center} \begin{tabular}{lcc} \hline \hline Baryon: $I(J^P)$ & Energy level [MeV] & $\chi^2$/d.o.f.
\\ \hline $N:\,1/2(1/2^+)$ & 1000(18) & 2.16/5 \\ $N:\,1/2(1/2^+)$ & 1848(120) & 3.61/5 \\ $N:\,1/2(1/2^+)$ & 1998(59) & 18.31/5 \\ $N:\,1/2(1/2^+)$ & 2543(280) & 1.96/3 \\ $\Delta:\,3/2(1/2^+)$ & 1751(190) & 1.58/5 \\ $\Delta:\,3/2(1/2^+)$ & 2211(126) & 1.15/5 \\ $\Lambda:\,0(1/2^+)$ & 1149(18) & 1.89/3 \\ $\Lambda:\,0(1/2^+)$ & 1807(94) & 4.63/5 \\ $\Lambda:\,0(1/2^+)$ & 2112(54) & 20.27/5 \\ $\Lambda:\,0(1/2^+)$ & 2137(68) & 1.50/5 \\ $\Sigma:\,1(1/2^+)$ & 1216(15) & 6.94/5 \\ $\Sigma:\,1(1/2^+)$ & 2069(74) & 3.41/5 \\ $\Sigma:\,1(1/2^+)$ & 2149(66) & 20.37/5 \\ $\Sigma:\,1(1/2^+)$ & 2335(63) & 2.09/5 \\ $\Xi:\,1/2(1/2^+)$ & 1303(13) & 8.31/5 \\ $\Xi:\,1/2(1/2^+)$ & 2178(48) & 7.51/5 \\ $\Xi:\,1/2(1/2^+)$ & 2231(44) & 26.53/5 \\ $\Xi:\,1/2(1/2^+)$ & 2408(45) & 10.37/5 \\ $\Omega:\,0(1/2^+)$ & 2350(63) & 4.14/5 \\ $\Omega:\,0(1/2^+)$ & 2481(51) & 4.35/5 \\ \hline $N:\,1/2(3/2^+)$ & 1773(91) & 8.35/5 \\ $N:\,1/2(3/2^+)$ & 2298(191) & 3.79/5 \\ $\Delta:\,3/2(3/2^+)$ & 1344(27) & 6.13/5 \\ $\Delta:\,3/2(3/2^+)$ & 2204(82) & 6.23/5 \\ $\Lambda:\,0(3/2^+)$ & 1991(103) & 3.56/3 \\ $\Lambda:\,0(3/2^+)$ & 2058(139) & 23.04/5 \\ $\Lambda:\,0(3/2^+)$ & 2481(111) & 4.26/5 \\ $\Sigma:\,1(3/2^+)$ & 1471(23) & 2.52/5 \\ $\Sigma:\,1(3/2^+)$ & 2194(81) & 4.78/5 \\ $\Sigma:\,1(3/2^+)$ & 2250(79) & 7.05/5 \\ $\Sigma:\,1(3/2^+)$ & 2468(67) & 4.22/5 \\ $\Xi:\,1/2(3/2^+)$ & 1553(18) & 3.78/5 \\ $\Xi:\,1/2(3/2^+)$ & 2228(40) & 6.99/5 \\ $\Xi:\,1/2(3/2^+)$ & 2398(52) & 7.03/5 \\ $\Xi:\,1/2(3/2^+)$ & 2574(52) & 4.26/5 \\ $\Omega:\,0(3/2^+)$ & 1642(17) & 10.86/5 \\ $\Omega:\,0(3/2^+)$ & 2470(49) & 8.14/5 \\ \hline \hline \end{tabular} \end{center} \caption{ Energy levels at the physical pion mass and corresponding $\chi^2$/d.o.f.~for the chiral fits of the positive baryon energy levels reported in this work. Sources of large $\chi^2$/d.o.f.~($\geq 3$) are discussed in the text. Spin 1/2 and spin 3/2 baryons are separated by a line. Given errors are statistical only. 
} \label{tab:chi2baryons_pospar} \end{table} \begin{table} \begin{center} \begin{tabular}{lcc} \hline \hline Baryon: $I(J^P)$ & Energy level [MeV] & $\chi^2$/d.o.f. \\ \hline $N:\,1/2(1/2^-)$ & 1406(49) & 6.51/5 \\ $N:\,1/2(1/2^-)$ & 1539(69) & 8.72/5 \\ $N:\,1/2(1/2^-)$ & 1895(128) & 6.35/5 \\ $N:\,1/2(1/2^-)$ & 1918(211) & 5.94/5 \\ $\Delta:\,3/2(1/2^-)$ & 1454(140) & 11.16/5 \\ $\Delta:\,3/2(1/2^-)$ & 1914(322) & 3.24/5 \\ $\Lambda:\,0(1/2^-)$ & 1416(81) & 1.25/3 \\ $\Lambda:\,0(1/2^-)$ & 1546(110) & 0.57/3 \\ $\Lambda:\,0(1/2^-)$ & 1713(116) & 3.49/3 \\ $\Lambda:\,0(1/2^-)$ & 2075(249) & 13.56/5 \\ $\Sigma:\,1(1/2^-)$ & 1603(38) & 7.45/5 \\ $\Sigma:\,1(1/2^-)$ & 1718(58) & 12.78/5 \\ $\Sigma:\,1(1/2^-)$ & 1730(34) & 10.79/5 \\ $\Sigma:\,1(1/2^-)$ & 2478(104) & 11.94/5 \\ $\Xi:\,1/2(1/2^-)$ & 1716(43) & 19.10/5 \\ $\Xi:\,1/2(1/2^-)$ & 1837(28) & 20.25/5 \\ $\Xi:\,1/2(1/2^-)$ & 1844(43) & 15.75/5 \\ $\Xi:\,1/2(1/2^-)$ & 2758(78) & 5.61/5 \\ $\Omega:\,0(1/2^-)$ & 1944(56) & 20.48/5 \\ $\Omega:\,0(1/2^-)$ & 2716(118) & 8.58/5 \\ \hline $N:\,1/2(3/2^-)$ & 1634(44) & 14.75/5 \\ $N:\,1/2(3/2^-)$ & 1982(128) & 7.40/5 \\ $N:\,1/2(3/2^-)$ & 2296(129) & 9.59/5 \\ $\Delta:\,3/2(3/2^-)$ & 1570(67) & 4.01/5 \\ $\Delta:\,3/2(3/2^-)$ & 2373(140) & 17.97/5 \\ $\Lambda:\,0(3/2^-)$ & 1751(41) & 1.42/3 \\ $\Lambda:\,0(3/2^-)$ & 2203(106) & 3.97/5 \\ $\Lambda:\,0(3/2^-)$ & 2381(87) & 6.48/5 \\ $\Sigma:\,1(3/2^-)$ & 1861(26) & 6.33/5 \\ $\Sigma:\,1(3/2^-)$ & 1736(40) & 2.25/5 \\ $\Sigma:\,1(3/2^-)$ & 2394(74) & 9.73/5 \\ $\Sigma:\,1(3/2^-)$ & 2297(122) & 3.90/5 \\ $\Xi:\,1/2(3/2^-)$ & 1906(29) & 3.12/5 \\ $\Xi:\,1/2(3/2^-)$ & 1894(38) & 3.19/5 \\ $\Xi:\,1/2(3/2^-)$ & 2497(61) & 8.53/5 \\ $\Xi:\,1/2(3/2^-)$ & 2426(73) & 7.60/5 \\ $\Omega:\,0(3/2^-)$ & 2049(32) & 7.32/5 \\ $\Omega:\,0(3/2^-)$ & 2755(67) & 5.68/5 \\ \hline \hline \end{tabular} \end{center} \caption{ Same as Table \ref{tab:chi2baryons_pospar}, but for negative parity baryons. 
Spin 1/2 and spin 3/2 baryons are separated by a line. } \label{tab:chi2baryons_negpar} \end{table} \begin{table} \begin{center} \begin{tabular}{lccc} \hline \hline Hadron & $I(J^P)$ & Energy level [MeV] & $\chi^2$/d.o.f. \\ \hline $N$ &$1/2(1/2^+)$ & 954(16) & 2.26/5 \\ $\Lambda$ &$0(1/2^+)$ & 1126(17)(+07) & 2.74/3 \\ $\Sigma$ &$1(1/2^+)$ & 1176(19)(+07) & 6.67/3 \\ $\Xi$ &$1/2(1/2^+)$ & 1299(16)(+15) & 5.05/3 \\ \hline $\Delta$ &$3/2(3/2^+)$ & 1268(32) & 8.67/5 \\ $\Lambda$ &$0(3/2^+)$ & 1880(116)(+07) & 2.38/3 \\ $\Sigma$ &$1(3/2^+)$ & 1431(25)(+07) & 2.29/3 \\ $\Xi$ &$1/2(3/2^+)$ & 1540(22)(+15) & 2.05/3 \\ $\Omega$ &$0(3/2^+)$ & 1650(20)(+22) & 3.10/3 \\ \hline $\Lambda $ &$0(1/2^-)$ & 1436(84)(+07) & 1.25/3 \\ $\Lambda $ &$0(1/2^-)$ & 1635(70)(+07) & 4.93/3 \\ $\Lambda $ &$0(1/2^-)$ & 1664(66)(+07) & 3.49/3 \\ \hline $\Lambda $ &$0(3/2^-)$ & 1712(51)(+07) & 2.92/3 \\ \hline \hline \end{tabular} \end{center} \caption{ Same as Table \ref{tab:chi2baryons_pospar}, but for hadrons after the infinite volume extrapolation. The horizontal line separates different parity and spin. Notice that the $\Omega$ mass is not a prediction of our calculation. The second errors given are na\"ive estimates for the systematic error from a mistuning of the strange quark mass.} \label{tab:chi2_vol} \end{table} \end{appendix} \clearpage
\makeatletter \renewcommand\section{\@startsection {section}{1}{\z@}% {-3.5ex \@plus -1ex \@minus -.2ex}% {2.3ex \@plus.2ex}% {\normalfont\large\bfseries}} \renewcommand\subsection{\@startsection{subsection}{2}{\z@}% {-3.25ex\@plus -1ex \@minus -.2ex}% {1.5ex \@plus .2ex}% {\normalfont\bfseries}} \makeatother \renewcommand{\Re}{\operatorname{Re}} \renewcommand{\Im}{\operatorname{Im}} \addtolength{\topmargin}{-0.5pc} \addtolength{\textheight}{1.pc} \begin{document} \begin{titlepage} \begin{flushright} \phantom{arXiv:yymm.nnnn} \end{flushright} \vspace{0cm} \begin{center} {\LARGE\bf Klein-Gordonization:\vspace{1mm}}\\ {\large\bf mapping superintegrable quantum mechanics\vspace{1.5mm}\\ to resonant spacetimes} \\ \vskip 10mm {\large Oleg Evnin,$^{a,b}$ Hovhannes Demirchian$^c$ and Armen Nersessian$^{d,e}$} \vskip 7mm {\em $^a$ Department of Physics, Faculty of Science, Chulalongkorn University,\\ Bangkok 10330, Thailand} \vskip 3mm {\em $^b$ Theoretische Natuurkunde, Vrije Universiteit Brussel (VUB) and\\ The International Solvay Institutes, Brussels 1050, Belgium} \vskip 3mm {\em $^c$ Ambartsumian Byurakan Astrophysical Observatory, Byurakan 0213, Armenia} \vskip 3mm {\em $^d$ Yerevan Physics Institute, 2 Alikhanyan Brothers St., 0036 Yerevan, Armenia} \vskip 3mm {\em $^e$ Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research,\\ Dubna 141980, Russia} \vskip 7mm {\small\noindent {\tt oleg.evnin@gmail.com, demhov@yahoo.com, arnerses@ysu.am}} \end{center} \vspace{1cm} \begin{center}
{\bf ABSTRACT}\vspace{3mm} \end{center} We describe a procedure naturally associating relativistic Klein-Gordon equations in static curved spacetimes to non-relativistic quantum motion on curved spaces in the presence of a potential. Our procedure is particularly attractive in application to (typically, superintegrable) problems whose energy spectrum is given by a quadratic function of the energy level number, since for such systems the spacetimes one obtains possess evenly spaced, resonant spectra of frequencies for scalar fields of a certain mass. This construction emerges as a generalization of the previously studied correspondence between the Higgs oscillator and Anti-de Sitter spacetime, which has been useful for understanding both weakly nonlinear dynamics in Anti-de Sitter spacetime and algebras of conserved quantities of the Higgs oscillator. Our conversion procedure (``Klein-Gordonization'') reduces to a nonlinear elliptic equation closely reminiscent of the one emerging in relation to the celebrated Yamabe problem of differential geometry. As an illustration, we explicitly demonstrate how to apply this procedure to superintegrable Rosochatius systems, resulting in a large family of spacetimes with resonant spectra for massless wave equations. \vfill \end{titlepage} \section{Introduction} Geometrization of dynamics is a recurrent theme in theoretical physics. While it has underlain such fundamental developments as the creation of General Relativity and the search for unified theories of interactions, it also has a more modest (but often fruitful) aspect of reformulating conventional, well-established theories in more geometrical terms, in hope of elucidating their structure. One particular approach of the latter type is the Jacobi metric (for a contemporary treatment, see \cite{jacobi1,jacobi2,jacobi3}).
This energy-dependent metric simply encodes as its geodesics the classical orbits of a nonrelativistic mechanical particle on a manifold moving in a potential. The geometrization program we propose here can be seen as a quantum counterpart of the Jacobi metric. To a nonrelativistic quantum particle on a manifold moving in a potential, we shall associate a relativistic Klein-Gordon equation in a static spacetime of one dimension higher. Since the Klein-Gordon equation can be seen as a sort of quantization of geodesics (and reduces to the geodesic equation in the eikonal approximation), this provides a quantized version of the correspondence between particle motion on a manifold in the presence of a potential and purely geometric geodesic motion in the corresponding spacetime. Executing our geometrization algorithm in general reduces to a nonlinear elliptic equation closely reminiscent of the one emerging in relation to the Yamabe problem and its generalizations known as prescribed scalar curvature problems \cite{yamabe1,yamabe2,prescr}, and thus connects to extensive literature and interesting questions in differential geometry. (The Yamabe problem refers to constructing a conformal transformation of the given metric on a manifold that makes the Ricci scalar of the conformally transformed metric constant.) While the correspondence we build may in principle operate on any system, we are primarily motivated by its application to a very special class of quantum systems whose energy is a quadratic function of the energy level number. Such systems are exemplified by the one-dimensional P\"oschl-Teller potential, and in higher dimensions they are typically superintegrable. 
In fact, our construction has been developed precisely as a generalization of the correspondence \cite{EK,EN,EN2} between the Higgs oscillator \cite{Higgs,Leemon}, a particularly simple superintegrable system with a quadratic spectrum, and Klein-Gordon equations on the Anti-de Sitter (AdS) spacetime, the maximally symmetric spacetime of constant negative curvature. This correspondence has emerged in the context of studying selection rules \cite{CEV1,CEV2,Yang} in the nonlinear perturbation theory targeting the AdS stability problem \cite{BR,review}. The correspondence has been useful both for elucidating the structure of AdS perturbation theory \cite{EN} and for resolving the old problem of constructing explicit hidden symmetry generators for the Higgs oscillator \cite{EN2}. The reason for our emphasis on systems with quadratic spectra is that, in application to such systems, our geometrization program generates Klein-Gordon equations whose frequency spectra are linear in the frequency level number, and hence the spectrum is highly resonant (the difference of any two frequencies is integer in appropriate units). It is well known that in the context of weakly nonlinear dynamics, highly resonant spectra have a dramatic impact, as they allow arbitrarily small nonlinear perturbations to produce arbitrarily large effects over long times. This feature has been crucial in the extensive investigations of the AdS stability problem in the literature (for a brief review and references, see \cite{review}).
The main practical target of our geometrization program thus appears twofold: \begin{itemize} \item to provide geometric counterparts for quantum systems with quadratic spectra (the resulting Klein-Gordon equation is set up on a highly special spacetime with a resonant spectrum of frequencies and the geometric properties of this spacetime are likely to yield insights into the algebraic properties of the original quantum system, including its high degree of degeneracy and hidden symmetries), \item to generate, starting from known quantum systems with quadratic spectra, highly resonant spacetimes (weakly nonlinear dynamics in such spacetimes is likely to be very sophisticated, sharing the features of the extensively explored weakly nonlinear dynamics of AdS). \end{itemize} The plan of the paper is as follows. In section 2, we formulate our general geometrization procedure and describe how it simplifies for the case of zero mass in the Klein-Gordon equation one is aiming to construct. In section 3, we describe how the previously known correspondence \cite{EK,EN,EN2} between the Higgs oscillator and AdS fits in our general framework. In section 4, we analyze the superintegrable Rosochatius system, which generalizes the Higgs oscillator, and generate a large family of spacetimes perfectly resonant with respect to the massless wave equation. We conclude with a review of the current state of our formalism and open problems. \section{Klein-Gordon} \subsection{General formulation of the Klein-Gordonization procedure} Consider a quantum system with the Hamiltonian \begin{equation} H=-\Delta_\gamma + V(x), \label{Hamlt} \end{equation} where $\Delta_\gamma\equiv \gamma^{-1/2}\partial_i(\gamma^{1/2}\gamma^{ij}\partial_j)$ is the Laplacian on a $d$-dimensional manifold parametrized with $x^i$ and endowed with the metric $\gamma_{ij}$. 
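As a concrete (and deliberately elementary) instance of (\ref{Hamlt}), consider a flat one-dimensional $\gamma_{ij}$ with $V=0$ on an interval with Dirichlet walls, i.e., the infinite rectangular well. A minimal numerical sketch (assuming NumPy) diagonalizes the discretized Laplacian and recovers a spectrum quadratic in the level number, $E_N=(N+1)^2$ on the interval $(0,\pi)$:

```python
import numpy as np

# Discretized -d^2/dx^2 on (0, pi) with Dirichlet walls: a particle in an
# infinite rectangular well. The exact spectrum E_N = (N + 1)^2 is a
# quadratic function of the level number N = 0, 1, ...
n = 800                      # number of interior grid points
h = np.pi / (n + 1)          # grid spacing
main = np.full(n, 2.0) / h**2
off = np.full(n - 1, -1.0) / h**2
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)    # eigenvalues in ascending order
print(np.round(E[:4], 3))    # ~ [1, 4, 9, 16]
```

The finite-difference eigenvalues approach $(N+1)^2$ as the grid is refined, illustrating the kind of spectrum for which the construction below yields resonant spacetimes.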
We shall be particularly interested in systems whose energy spectrum consists of (in general, degenerate) energy levels labelled by the level number $N=0$, 1, ..., and the energy is a quadratic function of the level number: \begin{equation} E_N= A(N+B)^2-C. \label{quadrenergy} \end{equation} Such spectra are indeed observed in a number of interesting systems, typically involving superintegrability, for example: \begin{itemize} \item The Higgs oscillator \cite{Higgs,Leemon}, which is a particle on a $d$-sphere moving in a potential varying as the inverse cosine-squared of the polar angle. \item The superintegrable version \cite{rossup1,rossup2} of the Rosochatius system on a $d$-sphere \cite{rosochatius,encycl}, which is the most direct generalization of the Higgs oscillator. \item The quantum angular Calogero-Moser model \cite{FLP}. \item The (spherical) Calogero-Higgs system \cite{HLN,CHLN}. \end{itemize} We additionally mention the following two completely elementary systems which give a particularly simple realization of the quadratic spectrum (\ref{quadrenergy}): \begin{itemize} \item A particle in one dimension in an infinite rectangular potential well. \item The trigonometric P\"oschl-Teller system \cite{PT}. \end{itemize} We would like to associate to any system of the form (\ref{Hamlt}) a Klein-Gordon equation in a certain static $(d+1)$-dimensional space-time. We introduce a scalar field $\tilde\phi(t,x)$ satisfying \begin{equation}\label{secondorder} -\partial_t^2\tilde\phi=(-\Delta_\gamma+V(x)+C)\tilde\phi. \end{equation} In the above expression, $C$ can in principle be an arbitrary constant, but our main focus will be on systems with energy spectrum of the form (\ref{quadrenergy}) and $C$ read off from (\ref{quadrenergy}). One can equivalently write (\ref{secondorder}) as \begin{equation}\label{KGwannabe} \Box_{\tilde g} \tilde\phi -(C+V(x))\tilde\phi=0. 
\end{equation} where $\Box_{\tilde g}$ is the D'Alembertian of the metric \begin{equation}\label{tildeg} \tilde g_{\mu\nu}dx^\mu dx^\nu=-dt^2+\gamma_{ij}dx^i dx^j, \end{equation} with $x^\mu=(t,x^i)$. By construction, if one implements separation of variables in (\ref{KGwannabe}) in the form \begin{equation} \tilde\phi=e^{i w t}\Psi(x), \end{equation} one recovers the original Schr\"odinger equation as $H\Psi=(w^2-C)\Psi$. This guarantees that the mode functions of (\ref{KGwannabe}) are directly related to the energy eigenstates of the original quantum-mechanical problem. Note that, if one focuses on systems with energy spectra of the form (\ref{quadrenergy}), by construction, separation of variables in (\ref{secondorder}) will lead to eigenmodes with linearly spaced frequencies: \begin{equation}\label{omegaN} w_N=\sqrt{A}(N+B). \end{equation} In this case, after conversion to the Klein-Gordon form, which we shall undertake below, the resulting spacetime will possess a resonant spectrum of frequencies. Equation (\ref{secondorder}) is not of a Klein-Gordon form, but we can try to put it in this form by applying a conformal rescaling to $\tilde g$ and $\tilde\phi$: \begin{equation} \tilde g_{\mu\nu}=\Omega^2 g_{\mu\nu},\qquad \tilde\phi=\Omega^{\frac{1-d}2}\phi. \end{equation} One thus gets (relevant conformal transformation formulas can be retrieved, e.g., from \cite{BD}) \begin{equation} \Box_g\phi - \left[(C+V(x))\Omega^2+\frac{d-1}2\frac{\Box_g\Omega}{\Omega}+\frac{(d-1)(d-3)}4\frac{g^{\mu\nu}\partial_\mu\Omega\partial_\nu\Omega}{\Omega^2}\right]\phi=0. \end{equation} If the expression in the square brackets can be made constant by a suitable choice of $\Omega$, we get a Klein-Gordon equation in a spacetime with the metric $g_{\mu\nu}$. We thus need to solve the equation \begin{equation}\label{conformal} (C+V(x))\Omega^2+\frac{d-1}2\frac{\Box_g\Omega}{\Omega}+\frac{(d-1)(d-3)}4\frac{g^{\mu\nu}\partial_\mu\Omega\partial_\nu\Omega}{\Omega^2}=m^2.
\end{equation} It is more convenient to rewrite this equation in terms of the metric $\tilde g$, which is already known and given by (\ref{tildeg}): \begin{equation} \frac{d-1}2\Omega{\Box_{\tilde g}\Omega}-\frac{d^2-1}4\tilde g^{\mu\nu}\partial_\mu\Omega\partial_\nu\Omega+(C+V(x))\Omega^2=m^2. \end{equation} Since neither $V(x)$ nor $\tilde g_{\mu\nu}$ depend on $t$, one can assume that $\Omega$ is a function of the $x^i$ only. Hence, \begin{equation}\label{confgamma} \frac{d-1}2\Omega{\Delta_{\gamma}\Omega}-\frac{d^2-1}4\gamma^{ij}\partial_i\Omega\partial_j\Omega+(C+V(x))\Omega^2=m^2. \end{equation} Note that (\ref{conformal}) is closely reminiscent of the equation emerging from the following purely geometrical problem: Consider a metric $g_{\mu\nu}$ whose Ricci scalar is $R(x)$. Is it possible to find $\Omega$ such that the Ricci scalar corresponding to $\tilde g_{\mu\nu}=\Omega^2 g_{\mu\nu}$ has a given form $\tilde R(x)$? Indeed, from the standard formulae for the change of the Ricci scalar under conformal transformations, see, e.g., (3.4) of \cite{BD}, one gets \begin{equation}\label{rrtilde} \Omega^2 \tilde R(x)= R(x)+2d\frac{\Box_g\Omega}{\Omega}+d(d-3)\frac{g^{\mu\nu}\partial_\mu\Omega\partial_\nu\Omega}{\Omega^2}. \end{equation} Algebraically, this has the same structure as (\ref{conformal}). Equations of the form (\ref{rrtilde}) for simple specific choices of $g$ and $\tilde R$ have been studied in the mathematical literature as various realizations of the `prescribed scalar curvature' problem \cite{prescr}. The substitution \begin{equation} \Omega=\omega^{-\frac2{d-1}} \end{equation} reduces (\ref{confgamma}) to the following compact form: \begin{equation} -\Delta_\gamma \omega+(C+V(x))\omega=m^2\omega^\frac{d+3}{d-1}, \label{yma} \end{equation} closely reminiscent of the equation arising in relation to the Yamabe problem \cite{yamabe1,yamabe2,prescr}.
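As a cross-check (our verification sketch, not part of the original derivation), one can restrict to a conformal factor depending on a single Cartesian coordinate of a flat $\gamma$ and confirm symbolically, for several values of $d$, that the substitution indeed maps (\ref{confgamma}) into (\ref{yma}):

```python
# Symbolic check that Omega = omega^(-2/(d-1)) maps (confgamma) into (yma),
# for a conformal factor depending on one Cartesian coordinate of a flat gamma.
import sympy as sp

x, C, m = sp.symbols('x C m')
w = sp.Function('omega', positive=True)(x)
V = sp.Function('V')(x)

ok = []
for d in [2, 3, 4, 5]:
    Omega = w**sp.Rational(-2, d - 1)
    confgamma = (sp.Rational(d - 1, 2)*Omega*sp.diff(Omega, x, 2)
                 - sp.Rational(d**2 - 1, 4)*sp.diff(Omega, x)**2
                 + (C + V)*Omega**2 - m**2)
    yamabe = -sp.diff(w, x, 2) + (C + V)*w - m**2*w**sp.Rational(d + 3, d - 1)
    # multiplying (confgamma) by omega^((d+3)/(d-1)) must reproduce (yma)
    ok.append(sp.simplify(sp.expand(
        confgamma*w**sp.Rational(d + 3, d - 1) - yamabe)) == 0)
```

The gradient terms cancel identically, which is what fixes the exponent $-2/(d-1)$ in the substitution.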
(Note that the specific power of $\omega$ appearing on the right-hand side of this equation is different from the standard Yamabe problem. This is because we are performing a conformal transformation in a spacetime of one dimension higher, rather than in the original space.) Once (\ref{yma}) has been solved, the spacetime providing geometrization of the original problem (\ref{Hamlt}) can be written explicitly as \begin{equation} g_{\mu\nu}dx^\mu dx^\nu=\omega^{\frac{4}{d-1}}\left(-dt^2+\gamma_{ij}dx^i dx^j\right). \label{gsol} \end{equation} Equation (\ref{confgamma}) dramatically simplifies in one spatial dimension ($d=1$), where all the derivative terms drop out, leaving $\Omega\sqrt{C+V(x)}=m$. Thus, for the particle in an infinite rectangular potential well, Klein-Gordonization gives a massless wave equation on a slice of Minkowski space between two mirrors, while for the P\"oschl-Teller system, one immediately obtains a two-dimensional spacetime metric reminiscent of Anti-de Sitter spacetime AdS$_2$. This latter result displays some parallels to the considerations of \cite{CJP} (focusing on the hyperbolic P\"oschl-Teller system). As we already briefly remarked, the above geometrization procedure can be applied to any Hamiltonian of the form (\ref{Hamlt}) and any $C$, irrespective of the form of the spectrum. However, it is precisely for the spectrum and $C$ given by (\ref{quadrenergy}) that the resulting spacetime possesses the remarkable property of being highly resonant (and one may expect that its geometric properties will give a more transparent underlying picture of the algebraic structures of the original quantum-mechanical problem, as happens for the Higgs oscillator). We shall therefore focus on the application of our geometrization procedure to such systems with quadratic energy spectra. \subsection{The massless case}\label{massless} Equation (\ref{yma}) is a nonlinear elliptic equation and in general difficult to solve.
Extensive existence results have been established for an algebraically similar equation arising in relation to the Yamabe problem, hence one may hope that some level of understanding of solutions to (\ref{yma}) in full generality may also be attained in the future. We shall not pursue such systematic analysis here, however. Driven by practical goals of constructing resonant spacetimes and geometrizing concrete superintegrable systems, we would like to point out that (\ref{yma}) becomes linear and dramatically simplifies if one assumes $m^2=0$. Hence, converting a given quantum-mechanical problem to a massless wave equation is considerably simpler than for general values of the mass. We note that, if $m^2=0$, equation (\ref{yma}) looks identical to the Schr\"odinger equation corresponding to the Hamiltonian (\ref{Hamlt}), with energy eigenvalue $-C$: \begin{equation} -\Delta_\gamma \omega+V(x)\omega=-C\omega. \label{yma0} \end{equation} (Normalizable eigenstates of this energy do not generically exist, but $\omega$ does not have to satisfy the same normalizability conditions as standard wave functions, hence this should not be a problem.) Since quadratic spectra (\ref{quadrenergy}) are seen to arise from highly structured, typically superintegrable, systems, one may naturally expect that (\ref{yma0}) is amenable to analytic treatment. There is one further assumption one might make that immediately yields solutions of (\ref{yma0}) from known solutions of the original quantum-mechanical problem (\ref{Hamlt}). Namely, imagine one has a $K$-parameter family of Hamiltonians (\ref{Hamlt}) with quadratic spectra (\ref{quadrenergy}). In this case, $A$, $B$, and $C$ are functions of the $K$ parameters defining our family of Hamiltonians. One may impose \begin{equation} B=0, \end{equation} which generically yields a $(K-1)$-parameter subfamily of quantum systems with quadratic spectra. Within this subfamily, the ground state $\Psi_0$ has the energy $-C$, i.e., $H\Psi_0=-C\Psi_0$.
Hence, $\omega$ satisfying (\ref{yma0}) can be chosen as the vacuum state of $H$: \begin{equation} \omega=\Psi_0. \label{omegaPsi} \end{equation} We shall make use of this construction below, as it allows for a straightforward application of our methodology to known exactly solvable systems. (In some cases, it is geometrically advantageous to use the non-normalizable counterpart of $\Psi_0$ with the same energy eigenvalue to define $\omega$. Such non-normalizable states should also be easy to construct for exactly solvable systems with quadratic spectra. We shall see an explicit realization of this scenario in our subsequent treatment of the superintegrable Rosochatius problem.) As a variation of the above special case, one could force $B$ of (\ref{quadrenergy}) to be equal to a negative integer and $\omega$ to be equal to an excited state wavefunction. This, however, introduces singularities in the conformally rescaled spacetime (\ref{gsol}) at the location of zeros of the excited state wavefunctions. While one could still try to pursue this scenario by imposing appropriate constraints on the wave equation solution at the singular locus, we shall concentrate below on the most straightforward formulation (\ref{omegaPsi}) utilizing the ground state wavefunction, where the conformal factor is non-vanishing and no such subtleties arise. \section{Higgs} Before proceeding with novel derivations, we would like to demonstrate how the case of the Higgs oscillator, which has motivated our general construction, fits into our present framework. We are essentially just reviewing the derivations in \cite{EK,EN,EN2}. The Higgs oscillator is a particle on a sphere moving in a specific centrally symmetric potential (which we shall specify below). It is remarkable for being one of only three centrally symmetric maximally superintegrable systems on a sphere (together with free motion and the spherical Coulomb potential).
A practical manifestation of superintegrability is that all of its classical trajectories are closed. The quantum version of this system has attracted considerable attention after it was reintroduced in a different guise and solved in \cite{ES}. The observed high degeneracy of energy levels of this system prompted investigation of its hidden symmetries in \cite{Higgs,Leemon}, which resulted in the identification of the hidden $SU(d)$ group of symmetries for a system on a $d$-sphere, and spawned an extensive literature on algebras of conserved quantities of the Higgs oscillator. The energy spectrum of the Higgs oscillator is of the form (\ref{quadrenergy}). We shall now define, with some geometric preliminaries, the Higgs oscillator Hamiltonian. Consider a unit $d$-sphere embedded in a $(d+1)$-dimensional flat space as \begin{equation} x_0^2+x_1^2+\cdots+x_{d}^2=1 \end{equation} and parametrized by the angles $\theta_1$, ..., $\theta_{d}$ as \begin{align}\label{xsphere} &x_{d}=\cos\theta_{d},\qquad x_{d-1}=\sin\theta_{d}\cos\theta_{d-1},\\ &x_1=\sin\theta_{d}\ldots\sin\theta_2\cos\theta_1,\qquad x_0=\sin\theta_{d}\ldots\sin\theta_2\sin\theta_1.\nonumber \end{align} The sphere is endowed with the standard round metric, defined recursively in $d$: \begin{equation} ds^2_{S^d}=d\theta_{d}^2+\sin^2\theta_{d}ds^2_{S^{d-1}},\qquad ds^2_{S^1}=d\theta_{1}^2. \end{equation} Similarly, the corresponding Laplacian is defined recursively: \begin{equation}\label{Deltasphere} \Delta_{S^d}=\frac1{\sin^{d-1}\theta_{d}}\partial_{\theta_{d}}\left(\sin^{d-1}\theta_{d}\,\partial_{\theta_{d}}\right)+\frac1{\sin^2\theta_{d}}\Delta_{S^{d-1}},\qquad \Delta_{S^1}=\partial^2_{\theta_1}. \end{equation} The Higgs oscillator is a particle on a $d$-sphere moving in a potential varying as the inverse cosine-squared of the polar angle: \begin{equation} H=-\Delta_{S^d}+\frac{\alpha(\alpha-1)}{\cos^2\theta_d}.
\label{higgsH} \end{equation} The energy spectrum is given by \begin{equation}\label{Higgsenrg} E_N=\left(N+\alpha+\frac{d-1}2\right)^2-\frac{(d-1)^2}4, \end{equation} where $N$ is the energy level number. This expression is manifestly of the form (\ref{quadrenergy}). To implement our geometrization program for the Higgs oscillator, one can work directly with (\ref{confgamma}), which takes the form \begin{equation} \frac{d-1}2\frac{\Omega}{\sin^{d-1}\theta_d}\partial_{\theta_d}(\sin^{d-1}\theta_d\,\partial_{\theta_d} \Omega)-\frac{d^2-1}4(\partial_{\theta_d}\Omega)^2+\left(C+\frac{\alpha(\alpha-1)}{\cos^2\theta_d}\right)\Omega^2=m^2. \end{equation} Substituting $\Omega=\cos\theta_d$ produces only two constraints on the parameters to ensure that the equation is satisfied: \begin{equation} C=\frac{(d-1)^2}4,\qquad m^2=\alpha(\alpha-1)-\frac{d^2-1}4. \end{equation} The value of $C$ above agrees with the one in (\ref{Higgsenrg}), while the relation between the Klein-Gordon mass and the Higgs potential strength is the same as found in \cite{EK}. The output of our construction is thus a family of Klein-Gordon equations on the spacetime \begin{equation}\label{AdSHiggs} ds^2=\frac{-dt^2+ds^2_{S^d}}{\cos^2\theta_d}, \end{equation} which is precisely the (global) Anti-de Sitter spacetime AdS$_{d+1}$. We note that rational values of $\alpha$ in (\ref{higgsH}) correspond to Klein-Gordon masses in AdS for which the frequency spectrum (\ref{omegaN}) is perfectly resonant (all frequencies are integer in appropriate units) rather than merely strongly resonant (differences of any two frequencies are integer in appropriate units). A remarkable property of the Higgs oscillator is that the metric (\ref{AdSHiggs}) does not depend on the Higgs potential strength (which only affects the value of the Klein-Gordon mass). This feature is not replicated for more complicated potentials.
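The statement that $\Omega=\cos\theta_d$ works can be verified mechanically; the following small symbolic check (ours, with the spherical Laplacian restricted to functions of the polar angle) confirms that the left-hand side becomes $\theta_d$-independent precisely for $C=(d-1)^2/4$, and extracts the resulting mass:

```python
# Check that Omega = cos(theta_d) solves the conformal-factor equation for the
# Higgs oscillator, and extract the resulting Klein-Gordon mass squared.
import sympy as sp

th, alpha, d, C = sp.symbols('theta alpha d C')

Omega = sp.cos(th)
V = alpha*(alpha - 1)/sp.cos(th)**2
# Laplacian on S^d acting on a function of the polar angle only
lap = sp.diff(sp.sin(th)**(d - 1)*sp.diff(Omega, th), th)/sp.sin(th)**(d - 1)
lhs = (d - 1)/2*Omega*lap - (d**2 - 1)/4*sp.diff(Omega, th)**2 + (C + V)*Omega**2

# with C = (d-1)^2/4 the theta-dependence cancels, leaving the constant m^2
m2 = sp.simplify(lhs.subs(C, (d - 1)**2/sp.Integer(4)))
```

The $\theta_d$-independence obtained here is what singles out this value of $C$, with the constant remainder identified as $m^2$.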
Conversely, this implies that the AdS spacetime possesses a resonant spectrum of frequencies for fields of all masses (this statement can in fact be extended to fields of higher spins), rather than for fields of one specific mass. It is tempting to conjecture that AdS (being a maximally symmetric spacetime) is the only spacetime with this property, though we do not know a proof. Relations between Klein-Gordon equations of different masses have recently surfaced in the literature on ``mass ladder operators'' \cite{mass1,mass2,mass3,mass4}. \section{Rosochatius} \subsection{The superintegrable Rosochatius system} The superintegrable Rosochatius system is the most direct generalization of the Higgs oscillator on a $d$-sphere preserving its superintegrability. General Rosochatius systems \cite{rosochatius} were among the first Liouville-integrable systems discovered. A restriction on the potential makes these systems maximally superintegrable. The Higgs oscillator can be recovered by a further restriction of the potential as a particularly simple special case. Such systems are thus an ideal testing ground for applying our machinery, which has already been shown to work for the Higgs oscillator. The superintegrable Rosochatius systems we shall deal with here are defined by the following family of Hamiltonians: \begin{equation} H^{(R)}_d=-\Delta_{S^d}+\sum_{k=0}^{d}\frac{\alpha_k(\alpha_k-1)}{x_k^2}. \label{rosH} \end{equation} The explicit form of the Laplacian and coordinates on the unit $d$-sphere can be read off from (\ref{xsphere}-\ref{Deltasphere}). The standard more general definition of the Rosochatius system \cite{rosochatius,encycl} additionally includes a harmonic potential with respect to the $x_k$ variables, $\sum_k \gamma_kx_k^2$, which gives an integrable system. If this harmonic potential is omitted, as we did above, the system becomes maximally superintegrable, as mentioned, for instance, in \cite{rossup1,rossup2}. 
In order to find the spectrum of the above Hamiltonian, we shall recursively apply the solution of the famed one-dimensional P\"oschl-Teller problem \cite{PT}. While this material is completely standard and occasionally covered in textbooks, we find the summary given in \cite{IH} concise and convenient. The energy eigenvalues of the P\"oschl-Teller Hamiltonian \begin{equation} H_{PT}=-\partial_x^2+\frac{\mu(\mu-1)}{\cos^2x}+\frac{\nu(\nu-1)}{\sin^2x} \label{HPT} \end{equation} are given by \begin{equation} \varepsilon_n=(\mu+\nu+2n)^2,\qquad n=0,1,2,\cdots \label{PTenergy} \end{equation} We shall not need the explicit form of the eigenfunctions satisfying $H_{PT}\Psi_n=\varepsilon_n\Psi_n$ (though they are known). Because of the recursion relations on $d$-spheres outlined above, the Rosochatius Hamiltonian (\ref{rosH}) can likewise be defined recursively: \begin{align} &H^{(R)}_d=-\frac1{\sin^{d-1}\theta_{d}}\partial_{\theta_{d}}\left(\sin^{d-1}\theta_{d}\,\partial_{\theta_{d}}\right)+\frac{\alpha_d(\alpha_d-1)}{\cos^2\theta_d}+\frac{1}{\sin^2\theta_d}H^{(R)}_{(d-1)},\\ &H^{(R)}_1=-\partial^2_{\theta_1}+\frac{\alpha_1(\alpha_1-1)}{\cos^2\theta_1}+\frac{\alpha_0(\alpha_0-1)}{\sin^2\theta_1}. \end{align} The variables separate, and if one substitutes the wave function in the form \begin{equation} \Psi(\theta_1,\cdots,\theta_d)=\prod_{p=1}^{d}\frac{\chi_p(\theta_p)}{\sin^{(p-1)/2}\theta_p}, \label{sepvar} \end{equation} one obtains a recursive family of one-dimensional eigenvalue problems, all of which are of the P\"oschl-Teller form: \begin{align}\label{varsep} &\left[-\partial_{\theta_d}^2+\frac{\alpha_d(\alpha_d-1)}{\cos^2\theta_d}+\left(\frac{(d-2)^2-1}4+E_{d-1}\right)\frac1{\sin^2\theta_d}-\frac{(d-1)^2}4\right]\chi_d=E_d\chi_d,\\ &\left[-\partial^2_{\theta_1}+\frac{\alpha_1(\alpha_1-1)}{\cos^2\theta_1}+\frac{\alpha_0(\alpha_0-1)}{\sin^2\theta_1}\right]\chi_1=E_1\chi_1,\nonumber \end{align} where $E_d$ are the eigenvalues of $H^{(R)}_d$.
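Before proceeding, the P\"oschl-Teller spectrum (\ref{PTenergy}) itself is easy to confirm numerically (our illustration, not needed for the derivation): a finite-difference discretization of (\ref{HPT}) on $(0,\pi/2)$ with Dirichlet walls reproduces $(\mu+\nu+2n)^2$ to good accuracy.

```python
# Finite-difference check of the Poschl-Teller spectrum eps_n = (mu+nu+2n)^2.
import numpy as np
from scipy.linalg import eigh_tridiagonal

def poschl_teller_levels(mu, nu, npts=4000, nlevels=3):
    h = (np.pi/2)/(npts + 1)
    x = h*np.arange(1, npts + 1)          # interior grid points, Dirichlet walls
    V = mu*(mu - 1)/np.cos(x)**2 + nu*(nu - 1)/np.sin(x)**2
    diag = 2.0/h**2 + V                   # standard 3-point Laplacian stencil
    off = -np.ones(npts - 1)/h**2
    return eigh_tridiagonal(diag, off, select='i',
                            select_range=(0, nlevels - 1))[0]

levels = poschl_teller_levels(mu=2.0, nu=2.0)
exact = np.array([(2.0 + 2.0 + 2*n)**2 for n in range(3)])   # 16, 36, 64
```

The singular $1/\sin^2$ and $1/\cos^2$ walls are handled adequately by the interior grid for these values of $\mu$, $\nu$.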
Each subsequent equation introduces one new quantum number which we shall denote $n_d$. The recursive solution of (\ref{varsep}) proceeds as follows. First, the solution at $d=1$ is given by (\ref{PTenergy}) as \begin{equation} E_1(n_1)=(\alpha_0+\alpha_1+2n_1)^2. \end{equation} At $d=2$, one gets \begin{equation} \left[-\partial_{\theta_2}^2+\frac{\alpha_2(\alpha_2-1)}{\cos^2\theta_2}+\frac{(\alpha_0+\alpha_1+2n_1+\frac12)(\alpha_0+\alpha_1+2n_1-\frac12)}{\sin^2\theta_2}-\frac{1}4\right]\chi_2=E_2\chi_2. \end{equation} Hence, \begin{equation} E_2(n_1,n_2)=\left(\alpha_0+\alpha_1+\alpha_2+2n_1+2n_2+\frac12\right)^2-\frac14. \end{equation} The general pattern can now be guessed as \begin{equation} E_d(n_1,\cdots,n_d)=\left(\alpha_0+\cdots+\alpha_d+2n_1+\cdots+2n_d+\frac{d-1}2\right)^2-\frac{(d-1)^2}4. \label{rosenergy} \end{equation} It is straightforward to prove inductively that this expression persists under the recursion given by (\ref{varsep}). Note that (\ref{rosenergy}) is manifestly of the form (\ref{quadrenergy}). A classical version of the same construction, recursively expressing the superintegrable Rosochatius Hamiltonian through the action-angle variables has been given in \cite{rossup2}. \subsection{Klein-Gordonization of the superintegrable Rosochatius system} To demonstrate how the geometrization procedure we have proposed above operates, we shall now apply it to the superintegrable Rosochatius system. For the purposes of demonstration, we shall use the simplest formulation outlined in section \ref{massless}, which allows one to utilize known explicit solutions for ground state wavefunctions to construct the relevant massless Klein-Gordon (wave) equation. The only technical input we shall need is the form of the ground state wavefunction of the P\"oschl-Teller Hamiltonian (\ref{HPT}) given by \begin{equation} \psi_0=\cos^\mu x\sin^\nu x. 
\label{psi0} \end{equation} (This form satisfies the standard boundary conditions for physical wavefunctions only for $\mu\ge0$ and $\nu\ge0$. If not, $\mu$ must be replaced by $1-\mu$, and correspondingly for $\nu$. This is, however, completely irrelevant for our application of $\psi_0$ to construct geometrical conformal factors, and the above form, without any modifications, is perfectly suitable for our purposes.) From (\ref{psi0}) and the recursive construction (\ref{sepvar}-\ref{rosenergy}), one gets for the ground state wavefunction of the superintegrable Rosochatius Hamiltonian (\ref{rosH}) \begin{equation} \Psi_0(\theta_1,\cdots,\theta_d)=\prod_{p=1}^d\left[\left(\cos\theta_p\right)^{\alpha_p}\left(\sin\theta_p\right)^{\alpha_0+\alpha_1+\cdots+\alpha_{p-1}}\right]. \end{equation} On the other hand, $B$ defined by (\ref{quadrenergy}) can be read off from (\ref{rosenergy}) as \begin{equation} B=\alpha_0+\alpha_1+\cdots+\alpha_d+\frac{d-1}2. \end{equation} We can hence directly apply the algorithm of section \ref{massless} by introducing \begin{equation} \omega=\prod_{p=1}^d\left[\left(\cos\theta_p\right)^{\alpha_p}\left(\sin\theta_p\right)^{\alpha_0+\alpha_1+\cdots+\alpha_{p-1}}\right] \end{equation} under the assumption that \begin{equation} \alpha_0+\alpha_1+\cdots+\alpha_d+\frac{d-1}2=0. \end{equation} This yields a $d$-parameter family of spacetimes given by (\ref{gsol}) whose massless wave equations possess perfectly resonant spectra and geometrize the superintegrable Rosochatius problem: \begin{equation} ds^2=\omega^{\frac{4}{d-1}}\left(-dt^2+ds^2_{S^d}\right). \label{rosmetr} \end{equation} (Note that setting $\alpha_d=-(d-1)/2$ and the rest of $\alpha_p$ to 0 returns the case of the Higgs oscillator with the coupling strength corresponding to zero mass in the Klein-Gordon equation, while (\ref{rosmetr}) becomes the AdS metric.)
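For $d=2$, one can confirm directly that this $\omega$ solves the massless equation (\ref{yma0}) with $C=(d-1)^2/4=1/4$; below is a small symbolic spot-check of ours, with $\alpha_0$, $\alpha_1$ chosen arbitrarily and $\alpha_2$ fixed by the constraint:

```python
# Verify that omega = Psi_0 solves  -Delta omega + V omega = -C omega  on S^2
# for the superintegrable Rosochatius system with alpha_0+alpha_1+alpha_2 = -1/2.
import sympy as sp

t1, t2 = sp.symbols('theta_1 theta_2')
a0, a1 = sp.Rational(3, 10), sp.Rational(1, 5)
a2 = -sp.Rational(1, 2) - a0 - a1                  # B = 0 constraint for d = 2

# embedding coordinates and the Rosochatius potential
x0, x1, x2 = sp.sin(t2)*sp.sin(t1), sp.sin(t2)*sp.cos(t1), sp.cos(t2)
V = a0*(a0 - 1)/x0**2 + a1*(a1 - 1)/x1**2 + a2*(a2 - 1)/x2**2

# ground-state wavefunction used as the conformal factor
omega = sp.cos(t1)**a1*sp.sin(t1)**a0*sp.cos(t2)**a2*sp.sin(t2)**(a0 + a1)

lap = (sp.diff(sp.sin(t2)*sp.diff(omega, t2), t2)/sp.sin(t2)
       + sp.diff(omega, t1, 2)/sp.sin(t2)**2)
residual = -lap + V*omega + sp.Rational(1, 4)*omega
val = residual.subs({t1: sp.Rational(7, 10), t2: sp.Rational(11, 10)}).evalf()
```

The residual vanishes identically; evaluating at a sample point merely sidesteps heavy symbolic simplification.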
For a final statement of our result, it is convenient to reparametrize $\alpha_p$ as \begin{equation} \alpha_p=-\frac{d-1}2\beta_p\quad\mbox{for}\quad p\ge 1,\qquad \alpha_0=-\frac{d-1}2\left(1-\beta_1-\cdots-\beta_d\right). \end{equation} In terms of $\beta_p$, (\ref{rosmetr}) becomes \begin{equation} ds^2= \frac{-dt^2+ds^2_{S^d}}{\displaystyle\prod_{p=1}^d\left[\left(\cos\theta_p\right)^{2\beta_p}\left(\sin\theta_p\right)^{2(1-\beta_p-\cdots-\beta_{d})}\right]}. \label{rosmetr_final} \end{equation} This evidently agrees with (\ref{AdSHiggs}) when $\beta_d=1$ and the rest of $\beta_p$ are zero. \section{Outlook} We have presented a procedure (``Klein-Gordonization'') associating to quantum systems of the form (\ref{Hamlt}) a Klein-Gordon equation on a static spacetime given by (\ref{gsol}). For systems with the quadratic energy spectrum (\ref{quadrenergy}), our procedure results in spacetimes with a resonant spectrum of evenly spaced frequencies (\ref{omegaN}). This correspondence generalizes the previously known relation between the Higgs oscillator (\ref{higgsH}) and (global) Anti-de Sitter spacetime (\ref{AdSHiggs}). Implementing our procedure in practice requires solving a nonlinear elliptic equation, which can be written as (\ref{confgamma}) or (\ref{yma}). The latter form is closely reminiscent of elliptic equations extensively studied in relation to classic `prescribed scalar curvature' problems of differential geometry (though the exact power appearing in the power-law nonlinearity is different). If one aims at constructing a massless Klein-Gordon (i.e., wave) equation corresponding to the original quantum-mechanical system, the nonlinearity drops out, resulting in a much simpler problem. In this case, known ground state wavefunctions for the original quantum system can be utilized for the conversion procedure, as described in section \ref{massless}.
We have demonstrated how this approach works for superintegrable Rosochatius systems (\ref{rosH}), resulting in a family of spacetimes (\ref{rosmetr_final}) resonant with respect to the massless wave equation. We conclude with a list of open questions relevant for our formalism: \begin{itemize} \item A general theory of the existence of solutions of (\ref{yma}) would contribute appreciably to clarifying the operation of our formalism. Similar equations arising in differential geometry \cite{prescr} have been thoroughly analyzed, hence one may expect that the situation for our equation can likewise be elucidated. \item In practical applications of our formalism, we have focused on the case of zero Klein-Gordon mass, where (\ref{yma}) greatly simplifies. Are there any general techniques for solving this equation (rather than analyzing the existence of solutions) for non-zero masses (at least for solvable potentials in the original quantum-mechanical system)? \item Equation (\ref{yma}) may in principle admit multiple solutions, given that there is freedom in choosing boundary conditions, depending on which conformal transformation one allows. Singular conformal transformations may also be allowed (and they may push boundaries at finite distance off to infinity). This is in fact the case for the AdS construction starting from the Higgs oscillator. It would be good to quantify this freedom in choosing solutions of (\ref{yma}) and understand which prescriptions result in spacetimes interesting from a physical perspective. \item Systems with quadratic spectra exist in extensions of the class of Hamiltonians we have considered here, given by (\ref{Hamlt}). For example, it is possible to include effects of monopole fields without distorting the spectrum \cite{monopole}. Klein-Gordonization is likely to generalize to such systems, resulting in Klein-Gordon equations with background gauge fields.
\item It would be interesting to understand how the spacetimes resulting from our construction, such as (\ref{rosmetr_final}), function in the context of dynamical theories of gravity. For instance, Anti-de Sitter spacetime solves Einstein's equations with a negative cosmological constant. More complicated spacetimes may require matter fields in order to be supported as solutions. In the context of dynamical theories, the resonant linear spectra of our spacetimes will guarantee that the weakly nonlinear dynamics of their perturbations is highly nontrivial. (Nonlinear instability of AdS, which is precisely a manifestation of such phenomena, is a broad and currently active research area.) \item What are the symmetry properties of spacetimes generated by ``Klein-Gordonization''? How do they connect to the symmetries of the original quantum-mechanical problem (and in particular hidden symmetries)? Again, for the case of the Higgs oscillator, this perspective has turned out to be fruitful, and it would be good to see how it works in more general cases. \end{itemize} \section{Acknowledgments} The work of O.E.\ is funded under CUniverse research promotion project by Chulalongkorn University (grant reference CUAASC). O.E.\ furthermore thanks the Marian Smoluchowski Institute of Physics in Krakow, with support from Polish National Science Centre grant no.\ DEC-2012/06/A/ST2/00397, and the Instituto de Fisica Teorica (IFT UAM-CSIC) in Madrid, with support via the Centro de Excelencia Severo Ochoa Program under Grant SEV-2016-0597, for hospitality during collaboration visits while this work was in progress, and specifically Piotr Bizo\'n and Antonio Gonz\'alez-Arroyo for hospitality and discussions. The work of H.D. and A.N. was partially supported by the Armenian State Committee of Science Grant No. 15T-1C367 and was done within the ICTP programs NT04 and AF04. We furthermore thank the anonymous referee for providing an extremely detailed report with a number of stimulating suggestions.
\section{Introduction} A $k$-coloring of a graph $G=(V,E)$ is a surjective function from the vertex set $V$ onto a color set of cardinality $k$, usually denoted by $\{0,1,\ldots,k-1\}$. This coloring is called perfect if for any $i,j$ the number of vertices of color $j$ in the neighbourhood of a vertex $x$ of color $i$ depends only on $i$ and $j$, but not on the choice of $x$. An equivalent concept is an equitable $k$-partition, which is a partition of the vertex set $V$ into cells $V_0,\ldots,V_{k-1}$, where these cells are the preimages of the colors of some perfect $k$-coloring. Perfect colorings are also particular cases of perfect structures; see, e.g., \cite{Tar:perfstruct}. In this paper, we consider perfect colorings in Hamming graphs $H(n,q)$ (mainly focusing on the case $q=2,3,4$) and Doob graphs $D(m,n)$. Recall that the Hamming graph $H(n,q)$ is the Cartesian product of $n$ copies of the complete graph $K_q$ on $q$ vertices, and the Doob graph $D(m,n)$, where $m>0$, is the Cartesian product of $m$ copies of the Shrikhande graph and $n$ copies of $K_4$. These graphs are distance-regular; moreover, the Doob graph $D(m,n)$ has the same intersection array as $H(2m+n,4)$. Many combinatorial objects can be defined as perfect colorings with corresponding parameters, for example, MDS codes with distance $2$; latin squares and latin hypercubes; unbalanced boolean functions attaining the correlation-immunity bound \cite{FDF:CorrImmBound}; orthogonal arrays attaining the Bierbrauer--Friedman bound \cite{Bierbrauer:95,Friedman:92}; boolean-valued functions on Hamming graphs and orthogonal arrays that attain some other bounds \cite{Pot:2012:color,Pot:2010:correng,Kro:OA1536}; some binary codes attaining the linear-programming bound that are cells of equitable partitions into 4, 5, or 6 cells \cite{Kro:2m-3,Kro:2m-4}. One important class of objects corresponding to perfect colorings is that of $1$-perfect codes.
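As a toy illustration of the definition (our example, not from the literature cited above): coloring the vertices of $H(4,2)$ by the parity of their weight yields a perfect $2$-coloring with quotient-matrix rows $(0,4)$ and $(4,0)$, which is easy to confirm by brute force:

```python
# Brute-force check that the weight-parity coloring of H(4,2) is perfect.
import itertools
from collections import Counter

n = 4
verts = list(itertools.product(range(2), repeat=n))
color = {v: sum(v) % 2 for v in verts}

def neighbors(v):
    # neighbors in H(n,2): flip exactly one coordinate
    for i in range(n):
        yield v[:i] + (1 - v[i],) + v[i + 1:]

rows = {0: set(), 1: set()}
for v in verts:
    cnt = Counter(color[u] for u in neighbors(v))
    rows[color[v]].add((cnt[0], cnt[1]))

perfect = all(len(r) == 1 for r in rows.values())      # one profile per color
quotient = {i: next(iter(r)) for i, r in rows.items()}
```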
It is generally known \cite[Ch.~6, Th.~37]{MWS} that if $q=p^m$ is a prime power, then there is a $1$-perfect code in $H(n,q)$ if and only if $n=(q^l-1)/(q-1)$ for some positive integer $l$. In the case when $q$ is not a prime power, \emph{very little} is known about the existence of $1$-perfect codes. It is known that there are no $1$-perfect codes in $H(7,6)$ \cite[Theorem~6]{GolombPosner64} (since there is no pair of orthogonal latin squares of order $6$). Heden and Roos obtained a necessary condition \cite{HedRoos:2011} for the existence of certain $1$-perfect codes, which in particular implies the non-existence of $1$-perfect codes in $H(19,6)$. We also mention a result of Lenstra \cite{Lenstra72}, which generalizes Lloyd's condition (see \cite{Lloyd,MWS}) to a non-prime-power $q$. This result implies that if there is a $1$-perfect code in $H(n,q)$, then $n=kq+1$. Krotov \cite{Kro:pfdoob} completely solved the problem of the existence of $1$-perfect codes in Doob graphs. Namely, he proved that there is a $1$-perfect code in $D(m,n)$ if and only if $2m+n=(4^l-1)/3$ for some positive integer $l$. Note that the existence of a $1$-perfect code in $D(m,n)$ does not always imply the existence of linear or additive $1$-perfect codes in this graph (the set of admissible parameters of unrestricted $1$-perfect codes in Doob graphs is substantially wider than that of linear \cite{Kro:perfect-doob} or additive \cite{SHK:addperfdoob} $1$-perfect codes). Another important class of codes corresponding to perfect colorings is that of completely regular codes. A code $C$ is completely regular if the distance coloring with respect to $C$ (a vertex $v$ gets the color equal to the distance from $v$ to $C$) is perfect. These codes were originally defined by Delsarte \cite{Delsarte:1973}, but here we use a different, equivalent definition from \cite{Neu:crg}.
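The counting constraints behind these existence statements are elementary to check mechanically (our illustration; "packing" below is the standard sphere-packing condition that $|B_1(x)|=1+n(q-1)$ must divide $q^n$):

```python
# Sphere-packing and length constraints for 1-perfect codes in H(n,q).
def ball1(n, q):
    return 1 + n*(q - 1)                  # size of a radius-1 ball in H(n,q)

def packing_ok(n, q):
    return q**n % ball1(n, q) == 0        # necessary for a 1-perfect code

def prime_power_lengths(q, lmax=5):
    return [(q**l - 1)//(q - 1) for l in range(2, lmax + 1)]

# the classical lengths pass the packing condition, and all satisfy n = kq + 1
checks = all(packing_ok(n, q) and n % q == 1
             for q in (2, 3, 4) for n in prime_power_lengths(q))

# the condition is not sufficient: it holds for H(7,6), yet no 1-perfect
# code exists there (Golomb--Posner)
h76 = packing_ok(7, 6)
```

The last check illustrates that divisibility is necessary but not sufficient.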
For more information about completely regular codes and the problem of their existence, we refer to the survey \cite{BRZ:crg}, papers \cite{FDF:PerfCol, BKMTV:perfcolinham} (for codes with covering radius $\rho=1$), and the small-value tables of parameters \cite{KKM:smallval}. In the current paper, we focus on the class of completely regular codes that correspond to extended $1$-perfect codes. An extended $1$-perfect code is a code with code distance $4$ obtained by appending an additional symbol (we call this operation an extension) to the codewords of some $1$-perfect code (the rigorous definition will be given in the next section). It is known that there is an extended $1$-perfect code in $H(2^m,2)$ and in $H(2^m+2,2^m)$ for any positive integer $m$ (see~\cite{MWS, BRZ:crg}). It was mentioned in \cite[Section~4]{AhlAydKha} that a result from \cite{Hill:Caps} implies the non-existence of extended $1$-perfect codes in $H(n,q)$ obtained from the Hamming codes, except in the cases $(n,q)=(2^m,2)$ and $(n,q)=(2^m+2,2^m)$. An extended $1$-perfect code in $H(q+2,q)$ (or in $D(m,n)$, where $2m+n=6$) is also an MDS code with distance $4$. There is a characterization of all extended $1$-perfect codes in $H(6,4)$ \cite{Alderson:MDS4} and in Doob graphs $D(m,n)$ \cite{BesKro:mdsdoob}, where $2m+n=6$, $m>0$. Ball showed \cite{Ball:2012:1} that if $q$ is an odd prime, then there are no linear extended $1$-perfect codes in $H(q+2,q)$. The non-existence of extended $1$-perfect codes in $H(7,5)$ and $H(9,7)$ was proved in \cite{KKO:smallMDS}. In \cite{KokOst:further} it was shown that any extended $1$-perfect code in $H(10,8)$ is equivalent to a linear code. The non-existence of extended $1$-perfect codes in $H(14,3)$ follows from the bound established in \cite{GST:newupbounds}. For completeness, note that, formally, codes consisting of a single vertex in $H(2,q)$ are also extended $1$-perfect codes. Such codes are called trivial.
In this paper, we obtain a necessary condition for the existence of perfect colorings in Hamming graphs $H(n,q)$, where $q=2,3,4$, and Doob graphs. We apply it to extended $1$-perfect codes and prove that there are no such codes in $H(n,q)$, $q=3,4$, $n>q+2$, and in $D(m,n)$, $2m+n>6$. This completes the characterization of such codes in these graphs (see Theorem~\ref{t:parameters}). In addition, we prove that extended $1$-perfect codes can exist in $H(n,q)$ only if $n$ is even, which in particular implies the non-existence of some MDS codes with distance $4$. We hope that this method can be applied to prove the non-existence of some other perfect $k$-colorings (but for perfect $2$-colorings it does not add anything new to the results from \cite{BKMTV:perfcolinham}). The paper is organized as follows. In Section~\ref{s:prelim}, we give the main definitions and some simple observations. In Section~\ref{s:necessary}, we obtain a necessary condition (Theorem~\ref{t:nessesary}) for the existence of perfect colorings in Doob graphs and Hamming graphs $H(n,q)$, where $q=2,3,4$. In Section~\ref{s:extarecrg}, we prove that any extended $1$-perfect code in $H(n,q)$ is a completely regular code with intersection array $(n(q-1),(n-1)(q-1);1,n)$, and vice versa; similar results are shown for Doob graphs. This allows us to apply Theorem~\ref{t:nessesary} to prove the non-existence of some extended $1$-perfect codes in Section~\ref{s:nonexist}. Finally, we describe all parameters for which there is an extended $1$-perfect code in $D(m,n)$ and $H(n,3)$ in Theorem~\ref{t:parameters}. \section{Preliminaries}\label{s:prelim} Given a graph $G$, we denote by ${\scriptscriptstyle\mathrm{V}}{G}$ its vertex set. A surjective function $f: {\scriptscriptstyle\mathrm{V}}{G} \to \{0,1,\ldots,k-1\}$ on the vertex set of $G$ is called a \emph{$k$-coloring} of a graph $G$ in the colors $0,1,\ldots,k-1$.
If for all $i,j$ every vertex $x$ of color $i$ has exactly $s_{i,j}$ neighbours of color $j$, where $s_{i,j}$ does not depend on the choice of $x$, then the coloring $f$ is called a \emph{perfect $k$-coloring} with \emph{quotient matrix} $S=(s_{i,j})$. Let $G$ be a connected graph. A \emph{code} $C$ in $G$ is an arbitrary nonempty subset of ${\scriptscriptstyle\mathrm{V}}{G}$. The \emph{distance} $d(x,y)$ between two vertices $x$ and $y$ is the length of the shortest path between $x$ and $y$. The \emph{code distance} $d$ of a code $C$ is the minimum distance between two different vertices of $C$. The distance $d(A,B)$ between two sets of vertices $A$ and $B$ equals $\min\{d(x,y):x \in A, y \in B \}$. The \emph{covering radius} of a code $C$ is $\rho=\max\limits_{v \in {\scriptscriptstyle\mathrm{V}}{G}}\{d(\{v\}, C)\}$. Let $C$ be a code in a graph $G$. The \emph{distance coloring} with respect to $C$ is the coloring $f$ defined in the following way: $f(x)$ is equal to the distance between $\{x\}$ and $C$. If $f$ is a perfect coloring with quotient matrix $S$, then $C$ is called a \emph{completely regular code} with quotient matrix $S$. In this case, the matrix $S$ is tridiagonal. A connected regular graph $G$ is called \emph{distance-regular} if for any vertex $x$ of $G$ the set $\{x\}$ is a completely regular code with quotient matrix $S$ that does not depend on the choice of $x$. The sequence $(b_0,\ldots,b_{\rho-1};c_1,\ldots,c_{\rho})=(s_{0,1},\ldots,s_{\rho-1,\rho};s_{1,0},\ldots,s_{\rho,\rho-1})$ is called the \emph{intersection array}. The \emph{Shrikhande graph} $Sh$ is a Cayley graph with the vertex set $\mathbb Z^2_4$, where two vertices $x$ and $y$ are adjacent if and only if their difference $(x-y)$ belongs to the connecting set $\{01,10,03,30,11,33\}$. 
The complete graph $K_q$ on $q$ vertices can be represented as a Cayley graph, where the vertex set is $\mathbb Z_q$, and two vertices $x$ and $y$ are adjacent if and only if their difference $(x-y)$ belongs to the connecting set $\{1,2,\ldots,q-1\}$. The Hamming graph $H(n,q)$ is the direct product $K^n_q$ of $n$ copies of $K_q$. The vertex set of $H(n,q)$ can be represented as $\mathbb Z^n_q=\{(x_1,\ldots,x_n):x_i \in \mathbb Z_q\}$. Denote by $D(m,n)=Sh^m \times K^n_4$ the direct product of $m$ copies of the Shrikhande graph $Sh$ and $n$ copies of the complete graph $K_4$. If $m>0$, then this graph is called a \emph{Doob graph}. The vertex set of $D(m,n)$ can be represented as $(\mathbb Z^2_4)^m \times \mathbb Z^n_4=\{(x_1,\ldots,x_m;y_1,\ldots,y_n): x_i \in \mathbb Z^2_4, y_j \in \mathbb Z_4\}$. The Hamming graph $H(n,q)$ is distance-regular with intersection array $(n(q-1),(n-1)(q-1),\ldots,q-1;1,2,\ldots,n)$. The Doob graph $D(m,n)$ is distance-regular with the same intersection array as $H(2m+n,4)$. For a vertex $v=(x_1,\ldots,x_{n-1})$ of $H(n-1,q)$ and $a \in \mathbb Z_q$, denote by $v^a_i=(x_1,\ldots,x_{i-1},a,x_{i},\ldots,x_{n-1})$ the vertex of $H(n,q)$ obtained from $v$ by inserting the symbol $a$ in the $i$-th position. Analogously, for a vertex $v=(x_1,\ldots,x_m;y_1,\ldots,y_{n-1})$ of $D(m,n-1)$ and $a \in \mathbb Z_4$, denote by \linebreak $v^a_{;i}=(x_1,\ldots,x_m;y_1,\ldots,y_{i-1},a,y_{i},\ldots,y_{n-1})$ the vertex of $D(m,n)$. The \emph{projection} (also known as puncturing) $C_i$ of a code $C$ in $H(n,q)$ is the code in $H(n-1,q)$ defined as follows: $$C_i=\{v \in {\scriptscriptstyle\mathrm{V}}{H(n-1,q)}: v^a_i \in C \text{ for some } a \in \mathbb Z_q\}.$$ Similarly, the projection $C_{;i}$ of a code $C$ in the Doob graph $D(m,n)$, $n>0$, is the code in $D(m,n-1)$ defined as follows: $$C_{;i}=\{v \in {\scriptscriptstyle\mathrm{V}}{D(m,n-1)}: v^a_{;i} \in C \text{ for some } a \in \mathbb Z_4\}.$$ Denote by $B_e(x)=\{y:d(x,y) \le e\}$ the ball of radius $e$ centered at $x$.
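As a quick sanity check of the Cayley definition above (an illustrative script, not part of the paper's argument), one can verify that $Sh$ is $6$-regular and strongly regular with parameters $(16,6,2,2)$, a fact used later in the proofs:

```python
from itertools import product

# Vertices of Sh are pairs over Z_4; x ~ y iff x - y lies in the connecting set.
S = {(0, 1), (1, 0), (0, 3), (3, 0), (1, 1), (3, 3)}
V = list(product(range(4), repeat=2))

def adjacent(x, y):
    return ((x[0] - y[0]) % 4, (x[1] - y[1]) % 4) in S

def common(x, y):
    # Number of common neighbours of two distinct vertices.
    return sum(1 for z in V if z not in (x, y) and adjacent(x, z) and adjacent(y, z))

degrees = {sum(1 for y in V if adjacent(x, y)) for x in V}
lam = {common(x, y) for x in V for y in V if adjacent(x, y)}                # adjacent pairs
mu = {common(x, y) for x in V for y in V if x != y and not adjacent(x, y)}  # non-adjacent
print(degrees, lam, mu)  # {6} {2} {2}
```

Since the connecting set is closed under negation, the adjacency relation is indeed symmetric.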
A code $C$ in a graph $G$ is called \emph{$e$-perfect} if $|C \cap B_e(x)|=1$ for any $x \in {\scriptscriptstyle\mathrm{V}}{G}$. Equivalently, an $e$-perfect code is a code with code distance $d=2e+1$ whose cardinality achieves the sphere-packing bound. It is known that if $q=p^m$ is a prime power, then a $1$-perfect code in $H(n,q)$ exists if and only if $n=(q^l-1)/(q-1)$ for some positive integer $l$ \cite{MWS}. The cardinality of this code is equal to $q^{n-l}$. It is also known that a $1$-perfect code in $D(m,n)$ exists if and only if $2m+n=(4^l-1)/3$ for some positive integer $l$ \cite{Kro:pfdoob}. The cardinality of this code is equal to $4^{2m+n-l}$. A code $C$ in $H(n,q)$ is called an \emph{extended $1$-perfect code} if its code distance is equal to $4$ and the projection $C_i$ in some position $i$ is a $1$-perfect code. If a code $C$ has distance $d>1$, then its projection has distance at least $d-1$ and the same cardinality. Therefore, if $C$ is an extended $1$-perfect code, then the projection $C_i$ is a $1$-perfect code for any $i=1,\ldots,n$. So, if $q=p^m$ is a prime power, then an extended $1$-perfect code in $H(n,q)$ can exist only for $n=(q^l+q-2)/(q-1)$, $l \in \mathbb N$. The cardinality of such a code equals $q^{n-l-1}$. Similarly, a code $C$ in $D(m,n)$ is called an \emph{extended $1$-perfect code} if its code distance equals $4$ and the projection $C_{;i}$ for some position $i$ is a $1$-perfect code. So an extended $1$-perfect code in $D(m,n)$ can exist only if $2m+n=(4^l+2)/3$, $l \in \mathbb N$. The cardinality of such a code equals $4^{2m+n-l-1}$. If $n=0$, then a code $C$ in $D(m,0)$ is called an \emph{extended $1$-perfect code} if it has the same parameters as an extended $1$-perfect code in a Doob graph of the same diameter, i.e., $2m=(4^l+2)/3$, the code distance is equal to $4$, and $|C|=4^{2m-l-1}$.
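For a concrete illustration of these definitions (a toy check, not taken from the paper), the smallest nontrivial binary case is the extended Hamming code $\{0000,1111\}$ in $H(4,2)$: its code distance is $4$, and its projection in any position is the $1$-perfect repetition code $\{000,111\}$ in $H(3,2)$. A short script verifying both properties:

```python
from itertools import product

V3 = list(product(range(2), repeat=3))
dist = lambda x, y: sum(a != b for a, b in zip(x, y))

# The extended Hamming code of length 4.
C = [(0, 0, 0, 0), (1, 1, 1, 1)]

# Code distance 4.
d = min(dist(x, y) for x in C for y in C if x != y)

# Projection (puncturing) in the last coordinate gives the repetition code
# {000, 111}; check that it is 1-perfect in H(3,2): every vertex lies in
# exactly one radius-1 ball around a codeword.
C4 = {x[:3] for x in C}
perfect = all(sum(dist(v, c) <= 1 for c in C4) == 1 for v in V3)
print(d, perfect)  # 4 True
```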
\section{A necessary condition for the existence of perfect colorings}\label{s:necessary} Given a graph $G$, let us consider the set of complex-valued functions $f: {\scriptscriptstyle\mathrm{V}}{G} \to \mathbb C$ on the vertex set. These functions form a vector space $U(G)$ with the inner product $(f,g)=\sum\limits_{x \in {\scriptscriptstyle\mathrm{V}}{G}} f(x)\overline{g(x)}$. A function $f:{\scriptscriptstyle\mathrm{V}}{G} \to \mathbb C$ is called an \emph{eigenfunction} of $G$ if $Mf=\lambda f$, $f \not\equiv 0$, where $M$ is the adjacency matrix of $G$ and $\lambda$ is some scalar, which is called an \emph{eigenvalue} of $G$. Denote by $U_{\lambda}=\{f:Mf=\lambda f\}$ the eigensubspace corresponding to $\lambda$. Let $G$ be a Hamming graph $H(n,q)$ or a Doob graph $D(m,n)$. Then it is convenient to use characters to form a basis of each eigensubspace. Let $\xi$ be the primitive $q$-th root of unity $\xi=e^{\frac{2\pi \sqrt{-1}}{q}}$. If $G$ is $H(n,q)$, then for an arbitrary $z \in \mathbb Z^n_q$ define the function $\varphi_z(t)=\frac{\xi^{\langle z,t \rangle}}{q^{n/2}}$, where $\langle z,t \rangle =z_1t_1+\ldots+z_nt_n \mod q$. If $G$ is $D(m,n)$, then for an arbitrary $z \in (\mathbb Z^2_4)^m \times \mathbb Z^n_4$ define the function $\varphi_z(t)=\frac{\xi^{\langle z,t \rangle}}{4^{(2m+n)/2}}$, where $\langle z,t \rangle =(x_{1}v_{1}+y_{1}u_{1})+\ldots+(x_{m}v_{m}+y_{m}u_{m})+r_1s_1+\ldots+r_ns_n \mod 4$ for $z=([x_{1},y_{1}],\ldots,[x_{m},y_{m}];r_1,\ldots,r_n)$ and $t=([v_{1},u_{1}],\ldots,[v_{m},u_{m}];s_1,\ldots,s_n)$ (we denote by $[a,b]$ an element of $\mathbb Z^2_4$). It is known that the functions $\varphi_z$, where $z \in {\scriptscriptstyle\mathrm{V}}{G}$, are eigenfunctions of $G$, and they form an orthonormal basis of the vector space $U(G)$. \begin{lemman}\label{l:mffs} Let $f$ be a perfect $k$-coloring of a graph $G$ with quotient matrix $S$.
Let $f_j=\chi_{f^{-1}(j)}$ be the characteristic function of the set of vertices of color $j$. Then for any $t \in \mathbb N$ $$(M^tf_j,f_j)=s^t_{j,j} \cdot |f^{-1}(j)|,$$ where $M$ is the adjacency matrix of $G$, and $s^t_{j,j}$ is the $(j,j)$-th element of the matrix $S^t$. \end{lemman} \begin{proof} Let $F=(f_0,\ldots,f_{k-1})$ be the $|{\scriptscriptstyle\mathrm{V}}{G}| \times k$ matrix whose $i$-th column $f_i=\chi_{f^{-1}(i)}$ is the characteristic function of the set of vertices of color $i$. It is known that $MF=FS$ (see, for example, \cite[Section~5.2]{Godsil93}), and consequently $M^tF=FS^t$ for any $t$. Hence, $(M^tf_j)(x)=(M^tF)_{x,j}=(FS^t)_{x,j}=s^t_{f(x),j}$ for any vertex $x \in {\scriptscriptstyle\mathrm{V}}{G}$. Since $f_j(x)=0$ if $f(x) \ne j$, we have $(M^tf_j,f_j)=s^t_{j,j} \cdot |f^{-1}(j)|$. \end{proof} \begin{lemman}[{\cite[Section~5.2]{Godsil93}}]\label{l:eigenvalues} Let $f$ be a perfect coloring of a graph $G$ with quotient matrix $S$. If $\lambda$ is an eigenvalue of $S$, then $\lambda$ is an eigenvalue of $G$. \end{lemman} \begin{theoreman}\label{t:nessesary} Let $G$ be the Hamming graph $H(n,q)$, where $q \in \{2,3,4\}$, or the Doob graph $D(m,n)$. Let $f$ be a perfect $k$-coloring of $G$ with quotient matrix $S$ that has eigenvalues $\lambda_0 > \lambda_1 > \ldots > \lambda_{l}$. Let $i$ be a color of $f$, and let $s^t_{i,i}$ be the $(i,i)$-th element of $S^t$, $t=1,\ldots,l-1$.
Then the linear system of equations \[\displaystyle{\begin{pmatrix} 1 & 1 & \ldots & 1 \\ \lambda_1 & \lambda_2 & \ldots & \lambda_l \\ \lambda^2_1 & \lambda^2_2 & \ldots & \lambda^2_l \\ \ldots & \ldots & \ldots & \ldots \\ \lambda^{l-1}_1 & \lambda^{l-1}_2 & \ldots & \lambda^{l-1}_l \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \ldots \\ x_{l} \end{pmatrix} = |f^{-1}(i)| \begin{pmatrix} 1\\ s^1_{i,i} \\ s^2_{i,i} \\ \ldots \\ s^{l-1}_{i,i} \end{pmatrix} -\frac{|f^{-1}(i)|^2}{|{\scriptscriptstyle\mathrm{V}}{G}|} \begin{pmatrix} 1 \\ \lambda_0 \\ \lambda^2_0 \\ \ldots \\ \lambda^{l-1}_0 \end{pmatrix}} \] has a unique solution $(a_1,\ldots,a_l)$. Moreover, $a_j \cdot |{\scriptscriptstyle\mathrm{V}}{G}|$ is a non-negative integer for $j=1,\ldots,l$. \end{theoreman} \begin{proof} The matrix of the system is the transpose of a Vandermonde matrix with pairwise distinct nodes $\lambda_1,\ldots,\lambda_l$, so its determinant is nonzero. Hence the system has a unique solution. Let $f_i=\chi_{f^{-1}(i)}$ be the characteristic function of color $i$. By Lemma~\ref{l:eigenvalues}, the eigenvalues $\lambda_0,\ldots, \lambda_l$ of $S$ are eigenvalues of $G$. It is known that $f_i$ belongs to the direct sum of the eigensubspaces corresponding to the eigenvalues of $S$, i.e., $f_i \in U_{\lambda_0} \oplus \ldots \oplus U_{\lambda_l}$ (see, for example, \cite[Property~9]{Tar:perfstruct}). So, $$f_i=\sum\limits_{z: \varphi_z \in U_{\lambda_{1}}}\alpha_z\varphi_z+ \ldots+ \sum\limits_{z: \varphi_z \in U_{\lambda_{l}}}\alpha_z\varphi_z+\alpha_{\overline{0}} \varphi_{\overline{0}},$$ where $\alpha_z$, $z \in {\scriptscriptstyle\mathrm{V}}{G}$, are complex coefficients.
Therefore, $$M^tf_i=\sum\limits_{z: \varphi_z \in U_{\lambda_{1}}}\lambda^t_1\alpha_z\varphi_z+ \ldots+ \sum\limits_{z: \varphi_z \in U_{\lambda_{l}}}\lambda^t_l\alpha_z\varphi_z+\lambda^t_0\alpha_{\overline{0}} \varphi_{\overline{0}}.$$ This representation implies the following relation for $t=0,1,\ldots$: $$(M^tf_i,f_i)=\lambda^t_1\sum\limits_{z: \varphi_z \in U_{\lambda_{1}}}|\alpha_z|^2+ \ldots+ \lambda^t_{l}\sum\limits_{z: \varphi_z \in U_{\lambda_{l}}}|\alpha_z|^2+ \lambda^{t}_0|\alpha_{\overline{0}}|^2.$$ Since the basis is orthonormal, we have $\alpha_{\overline{0}}=(f_i,\varphi_{\overline{0}})=\frac{|f^{-1}(i)|}{|{\scriptscriptstyle\mathrm{V}}{G}|^{1/2}}$, and hence $|\alpha_{\overline{0}}|^2=\frac{|f^{-1}(i)|^2}{|{\scriptscriptstyle\mathrm{V}}{G}|}$. Since $(M^tf_i,f_i)=|f^{-1}(i)| \cdot s^t_{i,i}$ by Lemma~\ref{l:mffs}, it is straightforward that $(a_1,\ldots,a_l)$, where $a_j=\sum\limits_{z: \varphi_z \in U_{\lambda_{j}}}|\alpha_z|^2$, is the solution of the system. On the other hand, as the basis $\{\varphi_z:z \in {\scriptscriptstyle\mathrm{V}}{G}\}$ is orthonormal, we have $\alpha_z=(f_i,\varphi_z)$ for any $z \in {\scriptscriptstyle\mathrm{V}}{G}$. Let us consider the possible cases. If $q=2$, then for any $z \in \mathbb Z^n_2$ the function $\varphi_z$ takes two distinct values: $\frac{\pm 1}{2^{n/2}}$. In this case, $\alpha_z=(f_i,\varphi_z)=\frac{r}{2^{n/2}}$ and $|\alpha_z|^2=\frac{r^2}{2^n}$ for some integer $r$. Hence $a_j 2^n=\sum\limits_{z: \varphi_z \in U_{\lambda_{j}}}2^n|\alpha_z|^2$ is a non-negative integer. If $q=3$, then $\varphi_z$ takes three distinct values: $\frac{-1+\sqrt{3}\sqrt{-1}}{2 \cdot 3^{n/2}},\frac{-1-\sqrt{3}\sqrt{-1}}{2 \cdot 3^{n/2}},\frac{1}{3^{n/2}}$. In this case, $\alpha_z=\frac{a+b\sqrt{3}\sqrt{-1}}{2 \cdot 3^{n/2}}$, where $a$ and $b$ are integers of the same parity. So $|\alpha_z|^2=\frac{a^2+3b^2}{4\cdot3^{n}}=\frac{r}{3^n}$ for some integer $r$.
So $a_j 3^n=\sum\limits_{z: \varphi_z \in U_{\lambda_{j}}}3^n|\alpha_z|^2$ is a non-negative integer. If $G$ is $D(m,n)$ (including the case $D(0,n)=H(n,4)$), then $\varphi_z$ takes four distinct values: $\frac{\pm 1}{4^{(2m+n)/2}},\frac{\pm \sqrt{-1}}{4^{(2m+n)/2}}$. So $\alpha_z=(f_i,\varphi_z)=\frac{a+b\sqrt{-1}}{4^{(2m+n)/2}}$ for some integers $a$ and $b$. Hence $|\alpha_z|^2=\frac{r}{4^{2m+n}}$ for some integer $r$, and $a_j 4^{2m+n}=\sum\limits_{z: \varphi_z \in U_{\lambda_{j}}}4^{2m+n}|\alpha_z|^2$ is a non-negative integer. \end{proof} \section{Extended perfect codes are completely regular}\label{s:extarecrg} \begin{theoreman}\label{t:extiscomp} \begin{enumerate} \item A code $C$ in $H(n,q)$ is extended $1$-perfect if and only if $C$ is completely regular with quotient matrix \[\displaystyle{\begin{pmatrix} 0 \ \ \ & n(q-1) & 0 \\ 1 \ \ \ & q-2 & (n-1)(q-1) \\ 0 \ \ \ & n & n(q-2) \end{pmatrix}}.\] \item A code $C$ in $D(m,n)$ is extended $1$-perfect if and only if $C$ is completely regular with quotient matrix \[\displaystyle{\begin{pmatrix} 0 \ \ \ & 6m+3n & 0 \\ 1 \ \ \ & 2 & 6m+3n-3 \\ 0 \ \ \ & 2m+n & 4m+2n \end{pmatrix}}.\] \end{enumerate} \end{theoreman} \begin{proof} For the most part, the proof for $D(m,n)$ is similar to the proof for $H(2m+n,4)$. So we mainly focus on Hamming graphs and consider Doob graphs only in the cases where the proof differs. Let $C$ be an extended $1$-perfect code in $H(n,q)$ (respectively, $D(m,n)$). Let $f$ be the distance coloring of $H(n,q)$ with respect to $C$, i.e., $f(x)=\min\limits_{y \in C} \{d(x,y)\}$, $x \in {\scriptscriptstyle\mathrm{V}}{H(n,q)}$. Since the projection of $C$ in any position is a $1$-perfect code, which has covering radius $1$, the covering radius of $C$ equals $2$; hence the set of colors is $\{0,1,2\}$. Define the functions $s^i_j: f^{-1}(i) \to \mathbb Z$, where $s^i_j(x)$ is the number of vertices of color $j$ in the neighbourhood of $x$ whenever $f(x)=i$.
So, $f$ is a perfect coloring if and only if $s^i_j$ is constant for all $i,j \in \{0,1,2\}$. Obviously, $s^0_0 \equiv 0$ (as the code distance is $4$), and $s^0_2 \equiv 0$, $s^2_0 \equiv 0$ (by definition). Let $y$ be an arbitrary vertex of color $1$. Let us count the values $s^1_0(y)$ and $s^1_1(y)$. On the one hand, $s^1_0(y) \ge 1$ by definition. On the other hand, $s^1_0(y) \le 1$ (otherwise we get a contradiction with the code distance). Hence $s^1_0 \equiv 1$. Therefore, for any vertex $x$ of color $1$ we can denote by $o(x)$ the unique neighbour of $x$ that has color $0$. Any vertex $y'$ of color $1$ that is adjacent to $y$ belongs to the neighbourhood of $o(y)$ (indeed, if $o(y) \ne o(y')$, then $d(o(y),o(y')) \le 3$, which contradicts the code distance). Therefore, all neighbours of $y$ that have color $1$ belong to the neighbourhood of $o(y)$. The number of common neighbours of two adjacent vertices in a distance-regular graph is uniquely determined by the intersection array; for $H(n,q)$ it is equal to $q-2$, and for $D(m,n)$ it equals $2$. Hence $s^1_0 \equiv 1$ and $s^1_1 \equiv q-2$ ($s^1_1 \equiv 2$ for a Doob graph). For each vertex $x \in {\scriptscriptstyle\mathrm{V}}{H(n,q)}$, we have $s^i_0(x)+s^i_1(x)+s^i_2(x)=n(q-1)$, where $i$ is the color of $x$. Therefore, $s^0_1 \equiv n(q-1)$ and $s^1_2 \equiv (n-1)(q-1)$. It remains to prove that $s^2_1 \equiv n$ ($s^2_1 \equiv 2m+n$ for $D(m,n)$). An edge $\{v,u\}$ is called an \emph{$(i,j)$-edge} if $v$ has color $i$ and $u$ has color $j$, or vice versa. Denote by $\displaystyle{\alpha=\sum\limits_{x \in f^{-1}(2)} s^2_1(x)}$ the number of $(1,2)$-edges. Let us calculate the values $|f^{-1}(0)|$, $|f^{-1}(1)|$ and $|f^{-1}(2)|$. The first value is equal to the cardinality of a $1$-perfect code in $H(n-1,q)$, i.e., $\displaystyle{\frac{q^{n-1}}{(n-1)(q-1)+1}}$.
Counting the number of $(0,1)$-edges, we have $|f^{-1}(1)|=|f^{-1}(0)| n(q-1)=\displaystyle{\frac{n(q-1)q^{n-1}}{(n-1)(q-1)+1}}$. Counting the number of $(1,2)$-edges, we find $\alpha = (n-1)(q-1)|f^{-1}(1)|$. On the other hand, \begin{multline*} |f^{-1}(2)|= q^n-|f^{-1}(0)|-|f^{-1}(1)|=\\ q^{n-1}\frac{q((n-1)(q-1)+1)-n(q-1)-1}{(n-1)(q-1)+1}= \\ q^{n-1}\frac{(n-1)(q-1)^2}{(n-1)(q-1)+1}. \end{multline*} Hence the average value of $s^2_1$ equals $n$, i.e., $\displaystyle{\frac{\alpha}{|f^{-1}(2)|}=n}$ (or $2m+n$ for $D(m,n)$). Let $v$ be a vertex of color $2$ in $H(n,q)$. The induced subgraph on the set of its neighbours has $n$ connected components, and every component is a $(q-1)$-clique. Hence $s^2_1(v) \le n$ (otherwise there are two vertices $u$ and $w$ of color $1$ in the same component; but all their common neighbours except $v$ also belong to this component, and one of them is $o(u)$, which has color $0$). Since the average value of $s^2_1$ equals $n$, we have $s^2_1 \equiv n$. Let $v=(x_1,\ldots,x_m;y_1,\ldots,y_n)$ be a vertex of color $2$ in $D(m,n)$. Denote by $h_{j,v}$ the induced subgraph on the set $\{(x_1,\ldots,x_m;y_1,\ldots,y_{j-1},b,y_{j+1},\ldots,y_n): b \in \mathbb Z_4\}$. This graph is the complete graph $K_4$. Denote by $d_{i,v}$ the induced subgraph on the vertex set $\{(x_1,\ldots,x_{i-1},a,x_{i+1},\ldots,x_m;y_1,\ldots,y_n): a \in \mathbb Z^2_4\}$. This graph is the Shrikhande graph. Denote by $\alpha_{i,v}$ the number of $(1,2)$-edges in $d_{i,v}$ divided by the number of vertices of color $2$ in $d_{i,v}$. Let us prove that $\alpha_{i,v} \le 2$ for any $i \in \{1,\ldots,m\}$ and $v \in {\scriptscriptstyle\mathrm{V}}{D(m,n)}$; moreover, if $\alpha_{i,v}=2$, then any vertex of color $2$ in $d_{i,v}$ has exactly two neighbours of color $1$ in $d_{i,v}$. Let $i \in \{1,\ldots,m\}$ and $v \in {\scriptscriptstyle\mathrm{V}}{D(m,n)}$. Consider two cases.
If $d_{i,v}$ contains a vertex $u$ of color $0$, then $\alpha_{i,v}=2$. Indeed, all neighbours of $u$ have color $1$ and the other $9$ vertices have color $2$ (if some vertex $w$ is at distance $2$ from a vertex of color $0$, then $f(w)=2$; otherwise we get a contradiction with the code distance). So any vertex of color $2$ has two neighbours of color $1$ (because the Shrikhande graph is strongly regular with parameters $(16,6,2,2)$). In the second case, there are no vertices of color $0$ in $d_{i,v}$. Then the vertices of color $1$ form an independent set (indeed, if some vertices $u$ and $w$ of color $1$ were adjacent, then $o(u)$ would be their common neighbour; but these vertices have only two common neighbours, which also belong to $d_{i,v}$). So $\alpha_{i,v}=\frac{6x}{16-x}$, where $x$ is the number of vertices of color $1$. A maximum independent set in the Shrikhande graph has cardinality $4$; moreover, the characteristic function of a maximum independent set is a perfect coloring, where any vertex that does not belong to this set is adjacent to $2$ vertices from this set (see \cite[Section~2]{BesKro:mdsdoob}). Hence $\alpha_{i,v} \le 2$; moreover, if $\alpha_{i,v}=2$ (i.e., $x=4$), then any vertex of color $2$ has exactly two neighbours of color $1$ in $d_{i,v}$. Similarly, for any $j\in \{1,\ldots,n\}$ and $v \in {\scriptscriptstyle\mathrm{V}}{D(m,n)}$, any vertex of color $2$ has $0$ or $1$ neighbours of color $1$ in the graph $h_{j,v}$. Since any $(1,2)$-edge in $D(m,n)$ belongs to exactly one subgraph among the subgraphs $d_{i,v}$ and $h_{j,v}$, where $v \in {\scriptscriptstyle\mathrm{V}}{D(m,n)}$, $i=1,\ldots,m$, $j=1,\ldots,n$, we have $\displaystyle{\frac{\alpha}{|f^{-1}(2)|} \le 2m+n}$. Moreover, if $\displaystyle{\frac{\alpha}{|f^{-1}(2)|} = 2m+n}$, then $\alpha_{i,v}=2$ for any $i \in \{1,\ldots,m\}$ and $v \in {\scriptscriptstyle\mathrm{V}}{D(m,n)}$. Hence $s^2_1 \equiv 2m+n$.
Let us prove the converse statement for the Hamming graphs (for the Doob graphs the proof is similar). Let $f$ be a perfect $3$-coloring with quotient matrix $S$ from the theorem statement. Let us prove that the code $C=f^{-1}(0)$ is an extended $1$-perfect code. Since $s_{0,0}=0$, the code distance $d$ is at least $2$. Moreover, $s_{1,0}=1$ implies $d \ge 3$. Suppose that there are distinct vertices $x$ and $y$ in $C$ such that $d(x,y)=3$. In this case, there is a path $(x,v_1,v_2,y)$. The vertices $v_1$ and $v_2$ have color $1$. Since $v_1$ (respectively, $v_2$) has $q-2=s_{1,1}$ common neighbours with $x$ (respectively, $y$), we get a contradiction with the fact that $v_1$ and $v_2$ are adjacent. So $d \ge 4$. Counting $(i,j)$-edges, we obtain $s_{i,j}|f^{-1}(i)|=s_{j,i}|f^{-1}(j)|$ for any $i, j$. This implies $q^n=|C|(1+n(q-1)+(n-1)(q-1)^2)=|C|q((n-1)(q-1)+1)$. Therefore, the projection of $C$ in any position is a code with code distance at least $3$ whose cardinality achieves the sphere-packing bound. So $C$ is an extended $1$-perfect code. \end{proof} The following lemma can be checked directly. \begin{lemman}\label{l:eigen} The matrix \[\displaystyle{\begin{pmatrix} 0 \ \ \ & n(q-1) & 0 \\ 1 \ \ \ & q-2 & (n-1)(q-1) \\ 0 \ \ \ & n & n(q-2) \end{pmatrix}}\] has the eigenvalues $\lambda_0=n(q-1)$, $\lambda_1=q-2$, and $\lambda_2=-n$. \end{lemman} \section{The non-existence of some extended perfect codes}\label{s:nonexist} Now we can apply Theorem~\ref{t:nessesary} to prove the non-existence of ternary and quaternary extended $1$-perfect codes. \begin{predln}\label{p:extnonexist} \begin{enumerate} \item Let $C$ be an extended $1$-perfect code in $H(n,3)$, where $n=\frac{3^l+1}{2}$, $l \in \mathbb N$. Then $l \le 2$. \item Let $C$ be an extended $1$-perfect code in $D(m,n)$ (including the case $D(0,n)=H(n,4)$), where $2m+n=\frac{4^l+2}{3}$, $l \in \mathbb N$. Then $l \le 3$.
\end{enumerate} \end{predln} \begin{proof} 1) Let $C$ be an extended $1$-perfect code in $H(n,3)$, where $n=\frac{3^l+1}{2}$ for some positive integer $l$. The cardinality of $C$ is equal to $3^{n-l-1}$. By Theorem~\ref{t:extiscomp} and Lemma~\ref{l:eigen}, the distance coloring with respect to $C$ is a perfect coloring whose quotient matrix has eigenvalues $\lambda_0=2n$, $\lambda_1=1$, and $\lambda_2=-n$. Let us consider the system of equations from Theorem~\ref{t:nessesary}: \[\begin{cases} a_1+a_2=3^{n-l-1}-3^{n-2l-2}\\ a_1-na_2=-2n \cdot 3^{n-2l-2}. \end{cases}\] From this system we have $$\displaystyle{a_2 \cdot 3^n=\frac{3^{2n-2l-2}(3^{l+1}+3^l)}{\frac{3^l+3}{2}}=\frac{3^{2n-l-3}2^3}{3^{l-1}+1}}.$$ By Theorem~\ref{t:nessesary}, the number $a_2 \cdot 3^n$ is an integer. Since $3^{l-1}+1$ and $3^{2n-l-3}$ are relatively prime, it follows that $3^{l-1}+1$ is a divisor of $8$. This implies $l=1$ or $l=2$. 2) Let $C$ be an extended $1$-perfect code in $D(m,n)$, where $2m+n=\frac{4^l+2}{3}$ for some positive integer $l$. In this case, we have the following system of equations: \[\begin{cases} a_1+a_2=4^{2m+n-l-1}-4^{2m+n-2l-2}\\ 2a_1-(2m+n)a_2=-3(2m+n)4^{2m+n-2l-2}. \end{cases}\] From this system we have $$\displaystyle{a_2 \cdot 4^{2m+n}=\frac{4^{4m+2n-2l-2}(2 \cdot 4^{l+1}-2+4^l+2)}{\frac{4^l+8}{3}}=\frac{4^{4m+2n-l-3}3^3}{4^{l-1}+2}}.$$ By Theorem~\ref{t:nessesary}, the number $a_2 \cdot 4^{2m+n}$ must be an integer. If $l=1$, then $4^2 \cdot a_2=9$, which is an integer. Let $l>1$. Since the greatest common divisor of $4^{l-1}+2$ and $4^{4m+2n-l-3}$ equals $2$, it follows that $2 \cdot 4^{l-2}+1$ divides $27$. This implies $l \in \{2,3\}$. So $l \le 3$. \end{proof} The following two propositions settle the remaining cases in $H(n,3)$ and $D(m,n)$, as well as codes of odd length in $H(n,q)$ for all $q$. The proofs of these propositions are particular cases of the method described in \cite{Kro:struct}.
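The two divisibility conditions in the proof above are easy to confirm by brute force; the following check (illustrative only) scans a range of $l$ and recovers exactly the admissible values:

```python
# Brute-force check of the divisibility conditions in the proof:
# ternary case: 3^(l-1) + 1 must divide 8;
# Doob/quaternary case: l = 1 is admissible directly (4^2 * a_2 = 9 is an
# integer), and for l > 1 the number 2*4^(l-2) + 1 must divide 27.
ternary = [l for l in range(1, 60) if 8 % (3 ** (l - 1) + 1) == 0]
doob = [1] + [l for l in range(2, 60) if 27 % (2 * 4 ** (l - 2) + 1) == 0]
print(ternary, doob)  # [1, 2] [1, 2, 3]
```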
\begin{predln}\label{p:odddiam} Let $C$ be an extended $1$-perfect code in $H(n,q)$. Then $n$ is even. \end{predln} \begin{proof} Let $C$ be an extended $1$-perfect code in $H(n,q)$ and let $f$ be the distance coloring with respect to $C$. Consider an arbitrary vertex $a$ of color $2$. Denote by $W^i_j$ the set of vertices of color $i$ at distance $j$ from $a$, and denote $W_j=W^0_j \cup W^1_j \cup W^2_j$. On the one hand, any vertex $x \in W^1_1$ is adjacent to exactly $1$ vertex from $W^0_2$. On the other hand, any vertex $y \in W^0_2$ has $2$ neighbours in $W_1$, and they have color $1$. Hence $|W^1_1|=2|W^0_2|$, and so $|W^1_1|$ is even. But from Theorem~\ref{t:extiscomp} we have $|W^1_1|=n$, so $n$ is even. \end{proof} Recall that a code $C$ in $H(n,q)$ is called an \emph{MDS code} with distance $d$ if its cardinality achieves the Singleton bound, i.e., $|C|=q^{n-d+1}$. In the case $n=q+2$, the definitions of an extended $1$-perfect code and an MDS code with distance $4$ are equivalent. \begin{coroll} If $q$ is odd, then there are no MDS codes with distance $4$ in $H(q+2,q)$. \end{coroll} \begin{coroll} Let $q=p^m$ be an odd prime power, and let $C$ be an extended $1$-perfect code in $H(n,q)$. Then $\displaystyle{n=\frac{q^{l}+q-2}{q-1}}$ for some odd $l$. \end{coroll} \begin{predln}\label{p:remaincases} There are no extended $1$-perfect codes in $D(m,n)$, where $2m+n=22$. \end{predln} \begin{proof} Let $C$ be an extended $1$-perfect code in $D(m,n)$, where $2m+n=22$, and let $f$ be the distance coloring with respect to $C$. Consider an arbitrary vertex $a$ of color $2$. Denote by $W^i_j$ the set of vertices of color $i$ at distance $j$ from $a$, and denote $W_j=W^0_j \cup W^1_j \cup W^2_j$. By Theorem~\ref{t:extiscomp} we have $|W^0_1|=0$, $|W^1_1|=22$, and $|W^2_1|=44$. As in the proof of Proposition~\ref{p:odddiam}, we have $2|W^0_2|=|W^1_1|$, so $|W^0_2|=11$. Let us count the number $w$ of edges $(x,y)$ such that $x \in W_1$ and $y \in W^1_2$.
This number is equal to $(22 \cdot 2 + 44 \cdot 22 - 2t-r)$, where $t$ is the number of $(1,1)$-edges and $r$ is the number of $(1,2)$-edges in the induced subgraph on the set of vertices $W_1$. It follows from the intersection array that this subgraph is $2$-regular, and hence $2t+r=2|W^1_1|=44$. So $w=22 \cdot 2+44 \cdot 22 - 44=968$. On the other hand, $w=2|W^1_2|$, so $|W^1_2|=484$. Let us count the number of $(0,1)$-edges that are incident to some vertex from $W^1_2$. On the one hand, this number is equal to $|W^1_2|=484$. On the other hand, it is equal to $4|W^0_2|+3|W^0_3|=44+3|W^0_3|$: by the intersection array, a vertex of $W^0_2$ has exactly $4$ neighbours at distance $2$ from $a$, a vertex of $W^0_3$ has exactly $3$ such neighbours, and all of them have color $1$. We find that $3|W^0_3|=440$. Since $|W^0_3|$ is an integer, we have a contradiction. \end{proof} Recall that, formally, the singleton of any vertex in $H(2,3)$, $D(0,2)$ or $D(1,0)$ is an extended $1$-perfect code, called trivial. Also, all extended $1$-perfect codes in $D(m,n)$, where $2m+n=6$, are characterized in \cite{Alderson:MDS4, BesKro:mdsdoob}. From Propositions~\ref{p:extnonexist}, \ref{p:odddiam}, and \ref{p:remaincases} we have the following statement. \begin{theoreman}\label{t:parameters} \begin{enumerate} \item An extended $1$-perfect code in $H(n,3)$ exists if and only if $n=2$. \item An extended $1$-perfect code in $D(m,n)$ (including the case $D(0,n)=H(n,4)$) exists if and only if $(m,n)=(0,2)$, or $(m,n)=(1,0)$, or $(m,n)=(0,6)$, or $(m,n)=(2,2)$. \item For any $q$, there are no extended $1$-perfect codes in $H(n,q)$ if $n$ is odd. \end{enumerate} \end{theoreman} \section*{Acknowledgements} The author is grateful to Denis Krotov, Vladimir Potapov, and Ev Sotnikova for helpful remarks and for introducing him to some background. \bibliographystyle{unsrt}
\section{Introduction} In applied statistics, one is often faced with the need to combine different types of information to produce a single decision. For instance, in credibility theory, one looks for the weights that link a relevant but small dataset with a big but not-so-relevant dataset; similarly, in bioinformatics one may use a dataset containing different cell types in the estimation of a given cell type, and scale down their importance in various ways. The present paper is motivated by a specific problem in liability insurance. In that line of business, claim size data usually have a high percentage of censored observations, as claims take years, or even decades, to be finally settled. Due to the limited number of claims, one still would like to take into account available information about the open claims in the estimation of claim size distributions (see e.g.\ \cite{abt}). On the one hand, experts typically project the final amount of open claims, i.e.\ they propose \textit{incurred values}, also called \textit{ultimates}, based on covariate information or other (objective or subjective) considerations which are not in the payment dataset that arrives at a statistician's table. On the other hand, statisticians have standard ways of dealing with censored observations, for instance the Peaks over Threshold method when one is interested in extremes, as well as the Hill estimator for heavy and Pareto-like tails. This line of research started with \cite{beirlcens2007} and \cite{einmahl2008statistics} and has received more attention recently; see e.g.\ \cite{worms2014new}, \cite{ameraoui2016bayesian}, \cite{beirlant2018penalized}. However, in that line of extreme value methods expert information has not been incorporated. In \cite{abt}, incurred values were used to derive upper bounds for the open claims, and survival analysis methods for interval censored data were implemented.
See also \cite{lesaffre} for frequentist and Bayesian analysis of interval censored data.\\ One often faces the question of whether to conduct the analysis from the right-censored observations point of view, or to impute the ultimate (expert) values into the dataset and treat it as fully observed. The latter is typically an easy (and cheap) solution. Figure \ref{description0} illustrates a possible situation of available data for motor third-party liability (MTPL) insurance claims of a direct insurance company operating in the EU, cf.\ Section \ref{MTPL_sect}, where this dataset will be studied in more detail. In what follows we are interested in developing a procedure that combines both approaches, without making any assumptions on the quality of the expert information or the method used to obtain it. \begin{figure}[] \centering \includegraphics[width=10cm,trim=0.5cm 0.5cm 0.5cm 0.5cm,clip]{Description0.jpeg} \caption{Motor third-party liability insurance: log-claims in vertical order of arrival, showing the paid amount for both open (red, circle) and closed (black, dot) claims, as well as ultimate values for the open claims (green, triangle).} \label{description0} \end{figure} To that end, we assume that for each censored observation (open claim), we have a tail parameter $\beta_i$ which reflects the expert's belief about the heaviness of the tail of this particular (unsettled) observation. The typical situation may be that all the $\beta_i$ are equal, or that there is an upper and a lower bound for all of them. However, we develop the theory for the general case, and we embed these indices into a statistical framework where a single tail parameter is estimated for the entire dataset. At a philosophical level, proposing different $\beta_i$ is not an ill-posed problem; rather, it expresses prior variability of the belief about the tail index.
The difference from the Bayesian paradigm is that we make this assumption only for the censored observations, so that an increasing sample size with a constant censored proportion keeps the importance of the expert guesses constant relative to the rest of the data. In mathematical terms, we will see that our approach has a Bayesian interpretation in which the prior distribution of the parameter depends on the sample through the censoring indicators. We propose a perturbation of the likelihood via an exponential factor and use the relative entropy between two densities as a dissimilarity measure. The resulting maximum perturbed-likelihood estimate has an explicit formula which resembles the Hill estimator adapted for censoring (cf.\ \cite{Hill}; \cite{beirlcens2007}), reduces to it as the perturbation becomes small, and converges to the mean of the expert tail indices as the perturbation becomes large. Thus, in a similar way as in the prior specification for Bayesian estimation, if experts have additional information on the quality of their belief, the perturbation parameter can be tuned accordingly. However, we propose a method which does not assume such additional prior knowledge, apart from the original expert knowledge. Penalization is a prevalent idea which has gained popularity in the age of cheap computational power. The idea behind it is to impose beliefs on the statistical estimation which can yield a better estimate, or an estimator with properties more acceptable for the application at hand. It introduces a control or perturbation parameter which in turn helps to tailor estimators to the needs of the application. For instance, in a different statistical setting, Lasso or Ridge regression imposes the belief or need of a data scientist to reduce the number of covariates included in a covariate-response analysis.
In some cases this procedure helps to remove nuisance covariates, but in others it might be too aggressive and exclude truly informative variables. This bottleneck is specific to each application -- and even to each dataset -- which suggests that a fully automated procedure is not recommended. In the same vein, the effectiveness of the proposed method depends on the quality of the provided expert tail information, and this is something which is not always available or quantifiable. In any case, it is recommended that the tail inference be done side-by-side with the experts. We derive the asymptotic properties of the perturbed-likelihood estimator, and although the asymptotic mean square error (AMSE) is available, the parameter which minimizes it depends on subjective considerations. One such consideration is the strength of the belief in the expert information provided: if the belief is certain, the penalization parameter goes to infinity, which means that, based on the AMSE criterion, the data should be ignored and the expert information used instead. To avoid assumptions which are not realistically available in practice, we instead suggest selecting the penalizing parameter in a convenient way, as the one which reduces the perturbed estimator to a simple sum of identifiable components. When the expert information is precise (degenerate at the true parameter), a penalization weight equal to 1 always leads to a lower AMSE, which moreover has a formula with a pleasant interpretation. When substituting this penalization parameter into the original formula, a very simple interpretation of the (inverse of the) estimator is available (Corollary \ref{lambdaone}): it is the combination of the Hill estimator and the expert information, the weights being the proportions of non-censored and censored data-points, respectively.
Such a simple combination estimator is shown, for a variety of common heavy-tailed distributions and parameter choices, to perform very well alongside competing methods, which require additional information to tune their own parameters. The remainder of the paper is structured as follows. In Section \ref{deriv}, for the exact Pareto distribution, we introduce the notation and the perturbation that we will deal with, derive simple expressions for the maximum perturbed-likelihood estimators, and show how they bridge theory and practice in a smooth manner. In Section \ref{Bayes_sect} we establish a close link with Bayesian statistics. In Section \ref{evt} we extend the methodology to the case where the data only exhibit Pareto-type behaviour in the tail, derive the asymptotic distributional properties of the perturbed-likelihood estimator, and unveil a simple combination formula. In this more general heavy-tailed case, we naturally deal with estimators that use only a fraction of upper order statistics, and introduce as benchmarks some recent Bayesian estimators that have been proposed for censored datasets. In Section \ref{MTPL_sect} we perform a simulation study and a real-life motor third-party liability insurance application. The latter dataset has been studied in the literature from both the expert information and the censored dataset viewpoints, but not yet in a joint manner, as we do here. We conclude in Section \ref{conclusion_sect}. \section{Derivation and properties}\label{deriv} Consider estimation from a censored sample following an exact Pareto distribution. That is, we observe the randomly censored data-points and the binary censoring indicator variables: \begin{align*} (Z_1,\,\ee_1),\: (Z_2,\,\ee_2),\dots,(Z_n,\,\ee_n). \end{align*} Contrary to classical survival analysis, the $Z_i$ here correspond to payment sizes rather than times.
The density and tail of the non-censored underlying data (which is not observed) are given by \begin{align}\label{exactpareto} f_\alpha(x)=\frac{\alpha x_0^\alpha}{x^{\alpha+1}},\quad \overline F_\alpha(x)=\frac{ x_0^\alpha}{x^{\alpha}}, \quad x\ge x_0>0, \end{align} with unknown tail parameter $\alpha$ and known scale parameter $x_0$. The latter assumption poses no restriction, since we are interested only in the estimation of the tail index, and as we will see, the Hill-type estimators based on upper order statistics that will be considered depend only on the log spacings of the data, which are independent of the scale parameter. Additionally, and in contrast to classical survival analysis, we assume that we are given for each (right-)censored data-point an expert assessment of the possible tail parameter, i.e.\ we have knowledge of $\beta_i>0$ for $i=1,\dots, n$. This can arise, for instance, when the data are collected from different sources, when the realization of a data-point shows some pattern due to a particular settlement history, or when there is covariate information that cannot be included in a more direct way. However, it is believed that all data points eventually come from one underlying distribution (or at least one aims for such a modelling description). We are also primarily interested in the case where all the $\beta_i$ are the same, since more often than not, expert information comes in this format. When ignoring the information from the data, natural estimates of $\alpha$ are given by the weighted arithmetic and harmonic means \begin{align}\label{expavg} \hat\alpha_{\text{am}}=\frac{\sum_{i=1}^n (1-\ee_i)\beta_i}{\sum_{i=1}^n (1-\ee_i)},\quad \hat\alpha_{\text{hm}}=\frac{\sum_{i=1}^n (1-\ee_i)}{\sum_{i=1}^n (1-\ee_i)/\beta_i}, \end{align} respectively, where $\ee_i=0$ if $Z_i$ is right-censored, and $\ee_i=1$ otherwise.
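To fix ideas, the expert-only estimates \eqref{expavg} can be sketched in a few lines of code; the censoring indicators and expert guesses below are hypothetical illustration values.

```python
# Expert-only estimates of the tail index alpha, eq. (expavg): weighted
# arithmetic and harmonic means of the expert guesses beta_i over the
# censored observations (eps_i = 0 marks right-censoring).
def expert_means(eps, beta):
    cens = [1 - e for e in eps]
    m = sum(cens)                          # number of censored points
    am = sum(c * b for c, b in zip(cens, beta)) / m
    hm = m / sum(c / b for c, b in zip(cens, beta))
    return am, hm

eps  = [1, 0, 1, 0, 0]                     # 1 = fully observed, 0 = censored
beta = [2.0, 1.0, 2.0, 2.0, 4.0]           # hypothetical expert tail guesses
am, hm = expert_means(eps, beta)           # am = (1+2+4)/3, hm = 3/(1+1/2+1/4)
```

By the arithmetic-harmonic mean inequality, the harmonic estimate never exceeds the arithmetic one.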
On the other hand, in the context of survival analysis it is a standard approach to maximize the following likelihood based purely on the data: \begin{align*} \mathcal{L}(\alpha;z)=\prod_{i=1}^n f_{\alpha}(z_i)^{\ee_i} \overline F_\alpha(z_i)^{1-\ee_i}=\prod_{i=1}^n \left(\frac{\alpha x_0^\alpha}{z_i^{\alpha+1}}\right)^{\ee_i} \left(\frac{ x_0^\alpha}{z_i^{\alpha}}\right)^{1-\ee_i}. \end{align*} The maximum likelihood estimator is then given by \begin{align}\label{hill} \hat\alpha^{MLE}=\frac{\sum_{i=1}^n\ee_i}{\sum_{i=1}^n\log\left(\frac{Z_i}{x_0}\right)}, \end{align} which is an adaptation (see \cite{beirlcens2007}) to the censoring case of the famous Hill estimator (cf. \cite{Hill}) from extreme value theory obtained by peaks-over-threshold modelling; see \cite{embrechts2013modelling} or \cite{BGST2004} for a broader treatment of Pareto-type tail estimation, and see also Section \ref{evt}. The two aforementioned approaches to the estimation of the tail parameter are in practice kept separate. That is, a practitioner will take only one of the two approaches, based on factors such as the reliability of the expert information or data availability. This is an especially difficult decision when there is a high percentage of censored and large observations, which presents a key problem for estimation in statistics in general, see for instance \cite{leung1997censoring}. In the present paper, we introduce an estimator which bridges the previous two estimators.
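A minimal sketch of the censored MLE \eqref{hill}; the data below are hypothetical and chosen so that the log-spacings are easy to follow.

```python
import math

# Censoring-adapted Hill/Pareto MLE of eq. (hill): number of non-censored
# observations divided by the total log-spacing mass above x0.
def hill_censored(z, eps, x0):
    return sum(eps) / sum(math.log(zi / x0) for zi in z)

# With x0 = 1 the log-spacings below are exactly 1, 2 and 3.
z   = [math.e, math.e ** 2, math.e ** 3]
eps = [1, 1, 0]                            # last observation right-censored
alpha_hat = hill_censored(z, eps, 1.0)     # = 2 / (1 + 2 + 3) = 1/3
```

Without censoring ($\ee_i\equiv1$) the same code returns the classical Pareto MLE.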
We will proceed by perturbation of the likelihood function, and consider a penalized likelihood: \begin{align*} \mathcal{L}^P(\alpha;z)&=\prod_{i=1}^n f_{\alpha}(z_i)^{\ee_i} \overline F_\alpha(z_i)^{1-\ee_i}e^{-(1-\ee_i)\lambda D(\beta_i,\alpha)}\\&=\prod_{i=1}^n \left(\frac{\alpha x_0^\alpha}{z_i^{\alpha+1}}\right)^{\ee_i} \left(\frac{ x_0^\alpha}{z_i^{\alpha}}\,e^{-\lambda D(\beta_i,\alpha)}\right)^{1-\ee_i}, \end{align*} where the factor $e^{-\lambda D(\beta_i,\alpha)}$ penalizes the contribution of the censored observations according to some measure of dissimilarity between $f_\alpha$ and the Pareto distribution with parameter $\beta_i$, denoted by $D(\beta_i,\alpha)$, and $\lambda\ge0$ models the strength of the penalization imposed by $D(\beta_i,\alpha)$. We propose to use the relative entropy as a dissimilarity measure: \begin{align}\label{entropypenaliz} D(\beta_i,\alpha)=\int_{x_0}^\infty\log\left(\frac{g_i(s)}{f_\alpha(s)}\right)g_i(s)\dd s=\frac{\alpha}{\beta_i} -1- \log\left(\frac{\alpha}{\beta_i}\right)\ge 0, \end{align} where $g_i$ is a Pareto density with tail index $\beta_i$ and scale parameter $x_0$. The associated log-likelihood is then given by \begin{align}\label{exppen} \log \left(\mathcal{L}^P(\alpha;z)\right)=\sum_{i=1}^n \ee_i\log\left(\frac{\alpha x_0^\alpha}{z_i^{\alpha+1}}\right)+\sum_{i=1}^n (1-\ee_i)\log\left(\frac{ x_0^\alpha}{z_i^{\alpha}}\right)-\sum_{i=1}^n\lambda (1-\ee_i)D(\beta_i,\alpha). \end{align} Equation \eqref{exppen} turns out to have an explicit maximizer when using $D$ from \eqref{entropypenaliz} (we omit the details), given by \begin{align*} \hat\alpha^{P}(\lambda)=\frac{\sum_{i=1}^n(\ee_i+\lambda(1-\ee_i))}{\sum_{i=1}^n(\log(Z_i/x_0)+\lambda(1-\ee_i)/\beta_i)}.
\end{align*} Notice that if we flip the densities in the entropy penalization and consider instead \begin{align*} D(\alpha,\beta_i)=\int_{x_0}^\infty\log\left(\frac{f_\alpha(s)}{g_i(s)}\right)f_\alpha(s)\dd s=\frac{\beta_i}{\alpha} -1- \log\left(\frac{\beta_i}{\alpha}\right)\ge 0, \end{align*} the associated penalized likelihood has the explicit solution \begin{align}\label{expenalpha} &\hat\alpha^{I}(\lambda)=\\ &\frac{ \sum_{i=1}^n(\ee_i-\lambda(1-\ee_i))+\sqrt{ \left[\sum_{i=1}^n(\ee_i-\lambda(1-\ee_i))\right]^2+4\sum_{i=1}^n\log\left(\frac{Z_i}{x_0}\right)\cdot\sum_{i=1}^n\beta_i\lambda(1-\ee_i)} }{2 \sum_{i=1}^n\log\left(\frac{Z_i}{x_0}\right)},\nonumber \end{align} which is less appealing and has more complicated asymptotic properties. \begin{remark}\normalfont The particular choice of entropy penalization is mathematical in nature, since the resulting explicit and simple form of the maximum perturbed-likelihood estimator permits a deeper analysis than other choices. For instance, the significantly more complicated explicit estimators \eqref{expenalpha} or \eqref{gaussestim} lead to a much more involved analysis. \end{remark} \begin{remark}\normalfont In general, in the absence of any other type of information, giving equal weight to each censored observation is the most natural way to deal with them. If the expert has an idea of the relative importance $\omega_i>0$ of each data point and their corresponding tail indices $\beta_i$, the individual penalization parameters can be reduced to the selection of a single parameter $\lambda$ through \begin{align*} \lambda_i=\lambda \omega_i. \end{align*} Note that then \begin{align}\label{l2} \lim_{\lambda\to \infty} \hat\alpha^{P}(\lambda)=\frac{\sum_{i=1}^n(1-\ee_i)\omega_i}{\sum_{i=1}^n(1-\ee_i)\omega_i/\beta_i}, \end{align} and \begin{align}\label{l1} \lim_{\lambda\to \infty} \hat\alpha^{I}(\lambda)=\frac{\sum_{i=1}^n(1-\ee_i)\omega_i\beta_i}{\sum_{i=1}^n(1-\ee_i)\omega_i}, \end{align} i.e.\ the information brought by the data becomes irrelevant and we take a weighted average of the expert guesses.
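These limiting regimes are easy to verify numerically; a minimal sketch of $\hat\alpha^{P}(\lambda)$ with uniform weights follows (all input values hypothetical).

```python
import math

# Maximum perturbed-likelihood estimator alpha^P(lambda): each censored point
# contributes lambda pseudo-observations at the expert rate 1/beta_i.
def alpha_perturbed(z, eps, beta, x0, lam):
    num = sum(e + lam * (1 - e) for e in eps)
    den = sum(math.log(zi / x0) + lam * (1 - e) / b
              for zi, e, b in zip(z, eps, beta))
    return num / den

z    = [math.e, math.e ** 2, math.e ** 4]  # log-spacings 1, 2, 4 for x0 = 1
eps  = [1, 1, 0]
beta = [2.0, 2.0, 2.0]

small = alpha_perturbed(z, eps, beta, 1.0, 1e-12)  # -> censored Hill = 2/7
large = alpha_perturbed(z, eps, beta, 1.0, 1e12)   # -> harmonic mean = 2.0
```

The two evaluations approximate the limits $\lambda\to0$ and $\lambda\to\infty$ discussed in the remark.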
Taking uniform weights, i.e., giving equal importance to each censored observation, will result in \eqref{expavg}. If no weights are naturally suggested, one can always tackle the multi-dimensional selection problem on all $\lambda_i$. In this more general case we have that \begin{align}\label{l3} \lim_{\lambda_i\to 0;\:i=1,\dots,n} \hat\alpha^{P}(\lambda_1,\dots,\lambda_n)=\frac{\sum_{i=1}^n\ee_i}{\sum_{i=1}^n\log\left(\frac{Z_i}{x_0}\right)}, \end{align} which can readily be deduced directly from \eqref{exppen}, since it is the classical non-penalized estimator. Similarly, \begin{align}\label{l4} \lim_{\lambda_i\to 0;\:i=1,\dots,n} \hat\alpha^{I}(\lambda_1,\dots,\lambda_n)=\frac{\sum_{i=1}^n\ee_i}{\sum_{i=1}^n\log\left(\frac{Z_i}{x_0}\right)}. \end{align} As a consequence of the limits \eqref{l2}, \eqref{l1}, \eqref{l3} and \eqref{l4}, we readily get for $\lambda_1=\cdots=\lambda_n=:\lambda\ge 0$ that \begin{align*} \lim_{\lambda\to \infty}\hat\alpha^{P}(\lambda)= \hat\alpha_{\text{hm}},\quad\lim_{\lambda\to \infty} \hat\alpha^{I}(\lambda)= \hat\alpha_{\text{am}},\quad \lim_{\lambda\to 0}\hat\alpha^{P}(\lambda) =\lim_{\lambda\to 0}\hat\alpha^{I}(\lambda)= \hat\alpha^{MLE}, \end{align*} which confirms that the estimator bridges the estimation of $\alpha$ and the proposal of the $\beta_i$, and that the parameter $\lambda$ reflects in some sense the strength of the belief in the expert information. The next section will touch upon this interpretation in a more precise manner. \end{remark} \section{Penalization seen as a Bayesian prior}\label{Bayes_sect} We will use a single $\lambda$ value in practice, but here we assume the most general setting where the $\lambda_i$ could be different, at no complexity cost.
The penalized likelihood that gives rise to $\hat\alpha^{P}$ is given by \begin{align*} \mathcal{L}^P(\alpha;z)&=\prod_{i=1}^n \left(\frac{\alpha x_0^\alpha}{z_i^{\alpha+1}}\right)^{\ee_i} \left(\frac{ x_0^\alpha}{z_i^{\alpha}}\right)^{1-\ee_i}e^{-\lambda_i(1-\ee_i) (\alpha/\beta_i-1-\log(\alpha/\beta_i))}\\ &=\left[\prod_{i=1}^n \left(\frac{\alpha x_0^\alpha}{z_i^{\alpha+1}}\right)^{\ee_i} \left(\frac{ x_0^\alpha}{z_i^{\alpha}}\right)^{1-\ee_i}\right]\cdot \left[\alpha^{\sum_{i=1}^n\lambda_i(1-\ee_i)}e^{-\alpha \sum_{i=1}^n\lambda_i(1-\ee_i)/\beta_i}\right]\\ &\quad\times\left[\prod_{i=1}^n\beta_i^{-\lambda_i(1-\ee_i)}e^{\lambda_i(1-\ee_i)}\right]\\ &=\left[\alpha^{\sum_{i=1}^n(\ee_i+\lambda_i(1-\ee_i))}e^{-\alpha\sum_{i=1}^n(\lambda_i(1-\ee_i)/\beta_i+\log(z_i/x_0))}\right]\\ &\quad \times\left[\prod_{i=1}^n\beta_i^{-\lambda_i(1-\ee_i)}e^{\lambda_i(1-\ee_i)}z_i^{-\ee_i}\right]. \end{align*} \noindent Note that the second factor after the last equality sign does not depend on $\alpha$, and the first one is proportional to a gamma density. We thus recognize that the penalized maximum likelihood estimator can be seen as the posterior mode arising from a Pareto likelihood and the conjugate gamma prior with hyper-parameters \begin{align*} \alpha_0=\sum_{i=1}^n\lambda_i(1-\ee_i)+1,\quad \beta_0=\sum_{i=1}^n\lambda_i(1-\ee_i)/\beta_i, \end{align*} and corresponding posterior parameters \begin{align*} \alpha^\ast=\sum_{i=1}^n(\ee_i+\lambda_i(1-\ee_i))+1,\quad \beta^\ast=\sum_{i=1}^n(\lambda_i(1-\ee_i)/\beta_i+\log(z_i/x_0)). \end{align*} The hyper-parameters of the prior, however, do depend on the sample, namely on the censoring indicators $\ee_i$, so we are not in the classical Bayesian setting. Nonetheless, we will continue to call it a prior, for simplicity. In this context we also have the following interpretation of the effects of the selection of the $\lambda_i$. 
The mode of the prior distribution is given by \begin{align}\label{priormodevar} \frac{\sum_{j=1}^n\lambda_j(1-\ee_j)}{\sum_{i=1}^n\lambda_i(1-\ee_i)/\beta_i}=\left(\sum_{i=1}^n\frac{\lambda_i(1-\ee_i)}{\sum_{j=1}^n\lambda_j(1-\ee_j)}\beta_i^{-1}\right)^{-1}, \end{align} and one sees that the proportions of the $\lambda_i$ give the weights which determine this mode. In particular, we can multiplicatively scale the $\lambda_i$ and the mode will remain unchanged. The magnitude of the $\lambda_i$, in contrast, does play a role for the variance of the prior: \begin{align} \frac{\sum_{i=1}^n\lambda_i(1-\ee_i)+1}{\left(\sum_{i=1}^n\lambda_i(1-\ee_i)/\beta_i\right)^2}, \end{align} since the larger the $\lambda_i$, the smaller the prior variance. Thus, expert information consisting of a single point estimate leaves no effective way to determine the magnitude of the penalization parameter. This is a problem which is often encountered in Bayesian statistics, and a prior is often selected nonetheless, making frequentists doubtful of this philosophical leap of faith. Note that the gamma distribution has two parameters, and any two descriptive statistics which bijectively map to these parameters (presently we used the mode and the variance) can be used to give alternative full explanations as to how the proportions $\lambda_i(1-\ee_i)/\sum_{j=1}^n\lambda_j(1-\ee_j)$ and the sizes of the $\lambda_i$ play a role in the modification of the prior distribution, and hence of the expert belief.
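A quick numerical check of these observations (hypothetical inputs); the sketch also verifies that the posterior mode $(\alpha^\ast-1)/\beta^\ast$ coincides with the perturbed-likelihood estimate.

```python
import math

# Data-dependent gamma prior (shape a0, rate b0): scaling all lambda_i by a
# constant leaves the prior mode (a0 - 1)/b0 unchanged but changes the
# prior variance a0/b0**2.
def prior_mode_var(lam, eps, beta):
    w = [li * (1 - e) for li, e in zip(lam, eps)]
    a0 = sum(w) + 1
    b0 = sum(wi / bi for wi, bi in zip(w, beta))
    return (a0 - 1) / b0, a0 / b0 ** 2

# Posterior mode (a* - 1)/b* equals the perturbed-likelihood estimator.
def posterior_mode(z, eps, beta, lam, x0):
    a_star = sum(e + li * (1 - e) for li, e in zip(lam, eps)) + 1
    b_star = sum(li * (1 - e) / b + math.log(zi / x0)
                 for zi, li, e, b in zip(z, lam, eps, beta))
    return (a_star - 1) / b_star

eps, beta = [0, 0, 1], [2.0, 4.0, 3.0]
m1, v1 = prior_mode_var([1.0] * 3, eps, beta)
m2, v2 = prior_mode_var([10.0] * 3, eps, beta)   # scaled lambdas: same mode
```

Here `m1 == m2` while `v2 < v1`, illustrating that the proportions fix the mode and the overall magnitude fixes the variance.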
\begin{remark}\normalfont If instead of using the penalization given by \eqref{entropypenaliz} we simply use the squared penalization \begin{align*} D(\beta_i,\alpha)=\frac{(\alpha-\beta_i)^2}{2}\ge 0, \end{align*} then the maximum perturbed-likelihood estimate will again be explicit and given by \begin{align}\label{gaussestim} \hat\alpha^{Sq}=&\frac{\sum_{i=1}^n(\lambda_i(1-\ee_i)\beta_i-\log(Z_i/x_0))}{2\sum_{i=1}^n\lambda_i(1-\ee_i)}\nonumber\\ &+\frac{\sqrt{ \left[\sum_{i=1}^n(\lambda_i(1-\ee_i)\beta_i-\log(Z_i/x_0)) \right]^2+4\sum_{i=1} ^n \lambda_i(1-\ee_i)\cdot \sum_{i=1}^n\ee_i}}{2\sum_{i=1}^n\lambda_i(1-\ee_i)}, \end{align} which naturally leads to a Gaussian prior interpretation when the $\lambda_i$ are equal. This estimator also converges to the Hill estimator as $\lambda_i\to0$, $i=1,\dots, n$, but it can have numerical instabilities when the denominator becomes very small. \end{remark} \section{Extreme Value Theory}\label{evt} We now move on to a more general heavy-tail approach and consider the case of regularly varying distributions with tail of the form \begin{align*} x^{-\alpha}\ell(x), \quad \alpha>0, \end{align*} where $\ell$ is a slowly varying function, i.e.\ ${\ell (vx) \over \ell (x)} \to 1$ as $x \to \infty$ for every $v>0$. We also assume now that censoring occurs at random and the data are generated as the minimum of two independent random variables \begin{align*} Z_i=\min\{X_i,L_i\}, \end{align*} with regularly varying tails: \begin{align*} &\Prob(X_i>u)=u^{-\alpha}\ell(u),\\ &\Prob(L_i>u)=u^{-\alpha_2}\ell_2(u). \end{align*} It follows that \begin{align}\label{randcens} \Prob(Z_i>u)=u^{-\alpha_c}\ell_c(u), \quad \alpha_c=\alpha+\alpha_2, \end{align} with slowly varying function $\ell_c=\ell\, \ell_2$. Here we confine ourselves to the so-called \textit{Hall class} (cf.\ \cite{hall82}).
This popular second-order assumption in extreme value theory often makes asymptotic identities tractable: \begin{eqnarray} \Prob(X_i>u) &=& C_1 u^{-\alpha}\left( 1+D_1 u^{-\nu_1}(1+o(1))\right) \mbox{ for } u \to \infty, \label{hallclass} \\ \Prob(L_i >u) &=& C_2 u^{-\alpha_2}\left( 1+D_2 u^{-\nu_2}(1+o(1))\right) \mbox{ for } u \to \infty, \label{HW} \end{eqnarray} where $\nu_1,\nu_2, C_1, C_2$ are positive constants and $D_1,D_2$ are real constants. Then, with \begin{align*} C=C_1C_2,\quad \nu_*=\min (\nu_1,\nu_2) \end{align*} and \begin{align*} D_* =\begin{cases} D_1 , &\nu_1 < \nu_2\\ D_2, & \nu_2 < \nu_1\\ D_1+D_2,& \nu_1 = \nu_2, \end{cases} \end{align*} we have that \[ \Prob(Z_i>u) = Cu^{-\alpha_c} \left(1+D_* u^{-\nu_*}(1+o(1))\right), \] that is, the censored dataset is again in the Hall class. Denote the quantile function of $Z$ by $Q$ and consider the tail quantile function $U(x)=Q(1-x^{-1})$, $x>1$. Then we have that \[ U(x) = (Cx)^{1/\alpha_c}\left( 1+ {D_* \over \alpha_c}C^{-\nu_*/\alpha_c}x^{-\nu_*/\alpha_c}(1+o(1))\right). \] The order statistics of the data will be denoted by \[ Z^{(1)}\ge \cdots \ge Z^{(n)}, \] with associated censoring indicators $\ee^{(i)}$ and expert information $\beta^{(i)}$. Given a high threshold $u>x_0$, the Hill estimator adapted for censoring is \begin{align}\label{hillestimdef} \hat\alpha^H_u=\frac{\sum_{i=1}^n\ee_i1\{Z_i>u\}}{\sum_{i=1}^n\log\left(\frac{Z_i}u\right)1\{Z_i>u\}}.
\end{align} Taking $Z^{(k+1)}$ for some $1\le k< n$ as a (random) threshold $u$, we obtain the alternative order-statistics version \begin{align}\label{mlealpha} \hat\alpha^{MLE}_k=\frac{\sum_{i=1}^k\ee^{(i)}}{\sum_{i=1}^k\log\left(\frac{Z^{(i)}}{Z^{(k+1)}}\right)} = {\hat{p}_k\over H_k}, \end{align} where $$\hat{p}_k = {1 \over k}\sum_{i=1}^k \ee^{(i)}$$ is the proportion of non-censored observations among the largest $k$ observations of $Z$, and $$H_k = {1 \over k}\sum_{i=1}^k\log\left(\frac{Z^{(i)}}{Z^{(k+1)}}\right)$$ is the classical Hill estimator based on the largest $k$ observations. For details on these censored versions of the Hill estimator, we refer to \cite[Sec.2]{einmahl2008statistics}. The asymptotic distribution of $H_k$ has been studied intensively in the literature under the above second-order assumptions (see for instance \cite[Ch.4]{BGST2004}): assuming \begin{align}\label{todelta} \sqrt{k}(k/n)^{\nu_*/\alpha_c} \to \delta\ge0, \end{align} as $k,n \to \infty$ with $k/n \to 0$, we have that \begin{align}\label{y0} \sqrt{k}\left( H_k -{1 \over \alpha_c}\right) \stackrel{d}{\to} Y_0 \sim \mathcal{N}\left( -C^{-\nu_*/\alpha_c}D_*{\nu_*\delta\over \alpha_c (\alpha_c+ \nu_*)}, \alpha_c^{-2}\right). \end{align} As discussed in \cite{einmahl2008statistics}, the asymptotic bias of $\hat{p}_k$ follows from the leading term in ${1 \over k}\sum_{i=1}^k p(U(n/i)) -p$, where \[ p(z)= \mathbb{P} \left(\ee =1 \,|\,Z=z \right), \] and $p$ denotes the asymptotic probability of non-censoring \begin{align*} p=\lim_{z \to \infty}p(z) =\frac{1/\alpha_2}{1/\alpha+1/\alpha_2}=\frac{\alpha}{\alpha+\alpha_2}. \end{align*} Under the Hall class \eqref{HW}, we have with the definition \[ (D/\alpha)_* =\begin{cases} D_1/\alpha , &\nu_1 < \nu_2\\ -D_2/\alpha_2, & \nu_2 < \nu_1\\ D_1/\alpha-D_2/\alpha_2,& \nu_1 = \nu_2, \end{cases} \] that as $x \to \infty$ \begin{equation} p(U(x)) -p = p(1-p) (D/\alpha)_* \nu_* C^{-\nu_*/\alpha_c}x^{-\nu_*/\alpha_c} (1+o(1)).
\label{biaspU} \end{equation} From this, assuming that $\sqrt{k}(k/n)^{\nu_*/\alpha_c} \to \delta$ as $k,n \to \infty$ with $k/n \to 0$, one gets \[ \sqrt{k}\left( \hat{p}_k -p \right) \stackrel{d}{\to} \mathcal{N}\left( p(1-p)C^{-\nu_*/\alpha_c} (D/\alpha)_* {\alpha_c \nu_*\delta \over \alpha_c + \nu_*}, p(1-p)\right). \] In \cite{einmahl2008statistics} it was also derived that $H_k$ and $\hat{p}_k$ are asymptotically independent, so that under the condition \eqref{todelta} as $k,n \to \infty$ with $k/n \to 0$, \begin{align}\label{noncomb} \sqrt{k}\left(\frac{1}{\hat\alpha^{MLE}_k}-\frac{1}{\alpha}\right) &\stackrel{d}{\to} \mathcal{N}\left(-{\delta\nu_* \over \alpha_c +\nu_*} C^{-\nu_*/\alpha_c}[D_* (\alpha_c^{-1}+\alpha^{-1})+ {\alpha_2 \over \alpha}(D/\alpha)_*], \frac{1}{p\alpha^2} \right). \end{align} \vspace{0.3cm} In the same manner we can define a version of $\hat\alpha^{P}$ which perturbs at censored data-points and which considers only large claims. We assume as before that $\lambda_i=\lambda$, and, in analogy to the exact Pareto setting, define the two estimators \begin{align*} \hat\alpha^{P}_u=\frac{\sum_{i=1}^n(\ee_i+\lambda(1-\ee_i))1\{Z_i>u\}}{\sum_{i=1}^n(\log(Z_i/u)+\lambda(1-\ee_i)/\beta_i)1\{Z_i>u\}}, \end{align*} and the order-statistics version \begin{align*} \hat\alpha^{P}_k &=\frac{\sum_{i=1}^k(\ee^{(i)}+\lambda (1-\ee^{(i)}))}{\sum_{i=1}^k\left(\log\left(\frac{Z^{(i)}}{Z^{(k+1)}}\right)+\lambda (1-\ee^{(i)})/\beta^{(i)}\right)}. \end{align*} \begin{theorem}\label{biasvar} Assume \eqref{HW}. Set $\lambda_i=\lambda\ge 0$, $\beta_i=\beta>0$.
Assume further that $\sqrt{k}(k/n)^{\nu_*/\alpha_c} \to \delta$ as $k,n \to \infty$ with $k/n \to 0$. Then \[ \sqrt{k} \left( {1\over \hat\alpha^{P}_k} - \frac{\lambda\alpha_2/\beta+1}{\lambda \alpha_2+\alpha}\right) \] is asymptotically normal with asymptotic mean \begin{eqnarray} \mathcal{M}&=&- {\delta \nu_* C^{-\nu_*/\alpha_c}\over 1-r_1} \left(\frac{D_*/\alpha_c + \lambda p(1-p) (D/\alpha)_* \alpha_c/\beta}{\nu_* +\alpha_c}\right. \nonumber\\ && \left. \hspace{2.8cm}+ {\lambda r_2 + \alpha_c^{-1} \over 1-r_1}p(1-p) (D/\alpha)_*\left( {\alpha_c \over \alpha_c+\nu_*}\right) \right) \nonumber \end{eqnarray} \noindent and asymptotic variance \begin{align}\label{variance} \mathcal{V}=\frac{1}{\alpha_c^2(1-r_1)^2}+\frac{1}{(1-r_1)^4}\left(\frac{\lambda}{\beta(1-\lambda)}+\frac{1}{\alpha_c}\right)^2(1-\lambda)^2p(1-p), \end{align} where $ r_1=(1-p)(1-\lambda)$ and $r_2=(1-p)/\beta$. The asymptotic bias of $1/\hat{\alpha}^P_k$ equals \begin{align}\label{bias} \mathcal{B}=\frac{\lambda \alpha_2/\beta+1}{\lambda \alpha_2+\alpha}-{1 \over \alpha}+O\left((k/n)^{\nu_*/\alpha_c}\right) \end{align} as $k,n \to \infty$ and $k/n \to 0$. \end{theorem} \begin{proof} See Appendix A. \end{proof} \begin{remark}\normalfont Notice that estimates of $\alpha_2$ or $\alpha_c$ are available using basic survival analysis techniques, cf.\ \eqref{randcens}. Consequently, such plug-in estimates can be used in any of the above formulas that involve these quantities. \end{remark} \begin{remark}\normalfont As a sanity check, observe that in Theorem \ref{biasvar}, whenever $\beta=\alpha$ and $\delta=0$, the bias vanishes. \end{remark} \vspace{0.2cm}\noindent In the same spirit, even more can be said: \begin{corollary}\label{lambdaone} (Combination) Assume the conditions of Theorem \ref{biasvar}, and further that $\delta=0$ and $\beta=\alpha$.
Then the estimator $\hat\alpha^{P}_k$ with $\lambda=1$ is asymptotically unbiased and can be written as \begin{align*} \hat\alpha^{P}_k=\left(\frac{\sum_{i=1}^k \ee^{(i)}}{k}\cdot \frac{\sum_{i=1}^k\log(Z^{(i)}/Z^{(k+1)})}{\sum_{i=1}^k \ee^{(i)}}+\frac{\sum_{i=1}^k(1-\ee^{(i)})}{k}\cdot \beta^{-1}\right)^{-1}. \end{align*} In words, $1/\hat\alpha^{P}_k$ is the weighted average of the inverse MLE estimator $1/\hat\alpha^{MLE}_k$ and the inverse expert tail index $1/\beta$, the weights being the proportions of non-censored and censored observations, respectively, above the threshold $Z^{(k+1)}$. Moreover, its inverse has asymptotic variance (and hence mean square error) given by \begin{align*} \operatorname{Var}(1/\hat\alpha_k^P)=\frac{1}{k p(\alpha+\alpha_2)^2}, \end{align*} which, when compared to \eqref{noncomb}, is seen to improve the estimation. \end{corollary} The proof of Corollary \ref{lambdaone} is immediate. \begin{remark}\normalfont Observe that in a Bayesian setting, whenever we are aware that a parameter lies within an interval, a natural estimator is constructed as follows. We set a uniform prior on $[b_1,b_2]$ and, together with the Pareto likelihood, use the posterior mean as an estimate.
Such a mean is given by \begin{align*} &\frac{\int_{b_1}^{b_2}\alpha^{1+\sum_{i=1}^n\ee_i}e^{-\alpha\sum_{i=1}^n\log\left(\frac{Z_i}{x_0}\right)} \dd\alpha}{\int_{b_1}^{b_2}\alpha^{\sum_{i=1}^n\ee_i}e^{-\alpha\sum_{i=1}^n\log\left(\frac{Z_i}{x_0}\right)} \dd\alpha}\\ =&\frac{\sum_{i=1}^n\ee_i+1}{\sum_{i=1}^n\log \left(\frac{Z_i}{x_0}\right)}\nonumber\\ &\times\left[\frac{\gamma(\sum_{i=1}^n\ee_i+2,b_2\sum_{i=1}^n\log\left(\frac{Z_i}{x_0}\right))-\gamma(\sum_{i=1}^n\ee_i+2,b_1\sum_{i=1}^n\log\left(\frac{Z_i}{x_0}\right))}{\gamma(\sum_{i=1}^n\ee_i+1,b_2\sum_{i=1}^n\log\left(\frac{Z_i}{x_0}\right))-\gamma(\sum_{i=1}^n\ee_i+1,b_1\sum_{i=1}^n\log\left(\frac{Z_i}{x_0}\right))}\right]\nonumber, \end{align*} where \begin{align*} \gamma(u,v)=\frac{\int_0^vt^{u-1}e^{-t}\dd t}{\Gamma(u)} \end{align*} is the (normalized) lower incomplete gamma function. One can go one step further and define the order-statistics version of the above estimator. However, despite being theoretically neat, the latter estimator is numerically unstable for both large ($k>100$) and small ($k<5$) numbers of upper order statistics, and hence we will not pursue it in the simulation section. \end{remark} \begin{remark}\normalfont In \cite{ameraoui2016bayesian}, several Bayesian approaches for heavy-tail estimation were considered (see also \cite{beirlant2018penalized}) under the random censoring assumption. We will use two of them as benchmarks. The first one arises from the posterior mean of a Pareto likelihood and the conjugate Gamma($a,b$) prior: \begin{align}\label{bayesgamma} \hat\alpha^{BG}=\frac{a+\sum_{i=1}^k \ee^{(i)}}{b+\sum_{i=1}^k\log(Z^{(i)}/Z^{(k+1)})}. \end{align} In the presence of a single expert estimate $\beta$ of the tail index, the prior parameters can be tuned by moment matching, where the variance has to be imposed subjectively. That is, given an expert opinion on $\sigma^2$, solve $\beta=a/b$ and $\sigma^2=a/b^2$ for $a$ and $b$.
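A sketch of this moment matching and of the resulting benchmark \eqref{bayesgamma}; the inputs are hypothetical, and the two moment equations give $a=\beta^2/\sigma^2$ and $b=\beta/\sigma^2$.

```python
import math

# Bayesian-gamma benchmark of eq. (bayesgamma) with a moment-matched prior:
# beta = a/b and sigma^2 = a/b^2 yield b = beta/sigma^2, a = beta^2/sigma^2.
def alpha_bayes_gamma(z_top, eps_top, z_threshold, beta, sigma2):
    a = beta ** 2 / sigma2
    b = beta / sigma2
    num = a + sum(eps_top)
    den = b + sum(math.log(zi / z_threshold) for zi in z_top)
    return num / den

# Expert guess beta = 2 with prior variance 2 gives a = 2, b = 1.
est = alpha_bayes_gamma([math.e ** 2, math.e], [1, 1], 1.0, 2.0, 2.0)
```

As the imposed prior variance grows, $a,b\to0$ and the estimate approaches the censored Hill estimator, mirroring the vague-prior limit.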
The second one arises from the maximal data information prior, and leads to the estimator \begin{align}\label{bayesmaximal} \hat\alpha^{BM}=\frac{1+\sum_{i=1}^k \ee^{(i)}+\sqrt{(1+\sum_{i=1}^k \ee^{(i)})^2+4\sum_{i=1}^k\log(Z^{(i)}/Z^{(k+1)})}}{2 \sum_{i=1}^k\log(Z^{(i)}/Z^{(k+1)})}. \end{align} Notice that the latter does not allow tuning the prior to additional expert information. \end{remark} \subsection*{Quantile estimation} With the last result at hand, it is natural to propose a quantile estimator based on the approach taken in \cite{weissman1978estimation}. Recall that we denote the quantile function of a regularly varying tail by $Q(p)$. Exploiting the fact that \begin{align}\label{weismannasympt} \frac{Q(1-p)}{Q(1-k/n)}\sim\frac{p^{-1/\alpha}}{(k/n)^{-1/\alpha}}=\left(\frac{k}{np}\right)^{1/\alpha}, \quad p\downarrow 0, \:k/n \to 0,\: np=o(k), \end{align} the Weissman estimator based on $k$ order statistics (and without expert information) arises naturally as \begin{align} \hat Q^{MLE}_k(1-p)=\hat Q^{KM}(1-k/n)\cdot\left(\frac{k}{np}\right)^{1/\hat\alpha^{MLE}_k}, \end{align} where $\hat Q^{KM}$ is the quantile function derived from the Kaplan-Meier estimator $$\widehat {S}(z)=\prod \limits _{i:\ Z_{i}\leq z}\left(1-{\frac {d_{i}}{n_{i}}}\right)$$ for the survival curve of the censored dataset in question, $(Z_i,\ee_i)$, $i=1,\dots,n$, where the $Z_i$ are payments (which would correspond to times in classical survival analysis terminology). Here, $d_i$ is the number of closed (non-censored) claims of size $Z_i$, and $n_i$ is the number of payments which, irrespective of censoring, are at least $Z_i$. In the case of no censoring this reduces to the empirical quantiles of the dataset, since the Kaplan-Meier curve is then just the empirical distribution function.
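The Weissman-type extrapolation can be sketched as follows; on exact Pareto quantiles the extrapolation is exact, which the example uses as a sanity check (all inputs hypothetical).

```python
# Weissman-type extreme quantile extrapolation: anchor at the empirical
# (or Kaplan-Meier) quantile Q(1 - k/n) and extend with the Pareto factor
# (k/(n*p))**xi, where xi = 1/alpha.
def weissman_quantile(q_anchor, k, n, p, xi):
    return q_anchor * (k / (n * p)) ** xi

# Exact Pareto(alpha = 2, x0 = 1): Q(1 - q) = q**(-1/2), so xi = 1/2.
n, k, p = 1000, 100, 0.005
anchor = (k / n) ** -0.5               # true Q(1 - k/n)
q_hat = weissman_quantile(anchor, k, n, p, 0.5)
```

For the exact Pareto anchor above, `q_hat` reproduces the true quantile $0.005^{-1/2}$ up to rounding.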
Similarly, in the case of pure expert information an estimator can be proposed as \begin{align} \hat Q^{EX}_k(1-p)=\hat Q^{EX}(1-k/n)\cdot\left(\frac{k}{np}\right)^{1/\beta}, \end{align} where $\hat Q^{EX}$ is either an expert-given cumulative distribution function or, in its absence, simply the Kaplan-Meier quantiles. To combine these two results, Corollary \ref{lambdaone} leads the way. For the choice $\lambda=1$, we see that the Pareto part of the tail splits for the perturbed estimator according to \begin{align*} \left(\frac{k}{np}\right)^{1/\hat\alpha^P_k}=\left(\frac{k}{np}\right)^{\hat p_k/\hat\alpha^{MLE}_k}\cdot\left(\frac{k}{np}\right)^{(1-\hat p_k)/\beta}, \end{align*} where \begin{align*} \hat p_k=\frac1k \sum_{i=1}^k \ee^{(i)}, \end{align*} and hence the following estimator is proposed for the overall tail \begin{align} \hat Q^{P}_k(1-p)&=\left[\hat Q^{KM}(1-k/n)\cdot\left(\frac{k}{np}\right)^{1/\hat\alpha^{MLE}_k}\right]^{\hat p_k}\cdot \left[\hat Q^{EX}(1-k/n)\cdot\left(\frac{k}{np}\right)^{1/\beta}\right]^{1-\hat p_k}\nonumber\\ &=\hat Q^{P}(1-k/n)\cdot\left(\frac{k}{np}\right)^{1/\hat\alpha^P_k},\label{perturbed_quantile} \end{align} where \begin{align*} \hat Q^{P}(1-k/n)=(\hat Q^{KM}(1-k/n))^{\hat p_k}(\hat Q^{EX}(1-k/n))^{1-\hat p_k}. \end{align*} Observe that in the absence of expert information for the quantile function, we merely have \begin{align*} \hat Q^{P}_k(1-p)=\hat Q^{KM}(1-k/n)\cdot\left(\frac{k}{np}\right)^{1/\hat\alpha^P_k}. \end{align*} \section{Simulation Study and MTPL Insurance}\label{MTPL_sect} In this section we perform a simulation study and apply our method to a motor third-party liability insurance dataset (cf. \cite[Sec.1.3.1]{abt}).
In order to make our results comparable with existing studies and existing analyses of the aforementioned dataset, we will consider estimation of $$\xi=\frac 1\alpha,$$ and thus we will make use of the estimators \begin{align}\label{xiestims} \hat\xi_k^{MLE}=\frac{1}{\hat\alpha^{MLE}_k},\:\: \hat\xi_k^P=\frac{1}{\hat\alpha^{P}_k},\:\: \hat\xi_k^{BG}=\frac{1}{\hat\alpha^{BG}_k},\:\: \hat\xi_k^{BM}=\frac{1}{\hat\alpha^{BM}_k}. \end{align} \subsection{Simulation Study} We consider three heavy tails belonging to the Hall class \eqref{hallclass}, and compare $\hat\xi_k$ and the quantile estimator \begin{align*} \hat Q^{P}_k(1-p)=\hat Q^{KM}(1-k/n)\cdot\left(\frac{k}{np}\right)^{\hat\xi_k}, \end{align*} where $\hat\xi_k$ is one of the four estimators in \eqref{xiestims}, for $p=0.005$. For any tail estimator $\hat \xi_k$, we generically refer to $\hat Q^{P}_k$ as the corresponding Weissman estimator, since it was derived using the general principle of equation \eqref{weismannasympt}. \noindent Concretely, we simulate two independent i.i.d.\ samples of size $n=200$, corresponding to the variables $X_i$ and $L_i$, $i=1,\dots,n$, in \eqref{hallclass}. We repeat the procedure $N_{\text{sim}}=1000$ times. The following three distributions are employed, with two sub-cases each, for varying parameters: \begin{itemize} \item The exact Pareto distribution, defined in \eqref{exactpareto}, for $\xi=1,\,1/2$.\\ \item The Burr distribution, with tail given by \begin{align*} \overline F(x)=\left(\frac{\eta}{\eta+x^\tau}\right)^\lambda, \;x>0,\quad \eta,\tau,\lambda>0. \end{align*} We consider $\eta=1$, $\lambda=2$, $\tau=1/2$, and $\eta=2$, $\lambda=1$, $\tau=2$. Notice that $\xi=1/(\lambda\tau)$.\\ \item The Fr\'{e}chet distribution with tail \begin{align*} \overline F(x)=1-\exp(-x^{-\alpha}), \quad \alpha>0.
\end{align*} We consider $\xi=1,\: 1/2$.\\ \end{itemize} \noindent For the expert information we draw a single random number from a Gaussian distribution centered at the true $\xi$ and with standard deviation $0.2$, and define that value as $1/\beta$. Then, by moment matching, using a variance of $0.04$, we obtain the parameters $a,b$ needed for $\hat\xi^{BG}_k$. Notice that we input the true value of the variance, and hence we are giving additional information to the Bayesian setting, as opposed to $\hat\xi_k^P$, where we make no such assumptions and we use the combination with $\lambda=1$. Additional studies (which we omit here) show that if the Bayesian variance is not correctly specified (for instance, set at $1$ or $0.5$), the Bayesian solution behaves almost identically to the censored Hill estimator $\hat\xi_k^{MLE}$. Also notice the misspecification of the Gamma prior in the derivation of $\hat\xi^{BG}_k$ with respect to the Gaussian distribution from which the expert information is actually simulated. Using a Gaussian prior would not only render explicit posterior formulas unavailable (and hence require resorting to MCMC sampling methods such as Gibbs sampling), but would also add more information than what we have assumed is available throughout the paper (we have not even assumed knowledge of the variance). We then plot the empirical bias and MSE of each resulting estimator as a function of $k$ (comparing the estimates with the true value). We write expressions such as Burr($\xi=1$) to indicate that the parameters of the distribution are not the focus, but rather the resulting tail index from the Hall class (which is a function of the parameters). The results are given in Figures \ref{sspareto}, \ref{ssburr} and \ref{ssfrechet}. We observe that letting the perturbation act through the proportion of censored observations, as opposed to the total number of data points, performs well for $k>10$.
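The moment-matching step for the Gamma prior is elementary. Assuming a shape-rate parametrization of the Gamma distribution (the parametrization of $a,b$ is our assumption here, not spelled out in the text), matching a mean of $1/\beta$ and a variance of $0.04$ gives:

```python
def gamma_moment_match(mean, var):
    # Gamma(shape=a, rate=b): mean = a/b, variance = a/b**2,
    # hence b = mean/var and a = mean*b.
    b = mean / var
    a = mean * b
    return a, b
```

For instance, an expert guess $1/\beta=0.5$ with variance $0.04$ yields $a=6.25$ and $b=12.5$.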
As a result, a substantial amount of bias and MSE is removed. This is especially the case for the heavy tail case $\xi=1$. For the lighter tail $\xi=0.5$, the perturbed estimator has either the best or second best performance bias-wise, and its only major drawback is the MSE for the exact Pareto case for $k>50$, where the Bayesian gamma solution performs even worse. When considering quantiles, the perturbed estimator behaves better than the Hill estimator and on par with the other two benchmarks for the heavy tail exact Pareto case. In the lighter tail case it performs the worst for large order statistics, recovering and behaving as in the previous case for $k<60$. In all other, non-exact Pareto, tail cases, the perturbed estimator was superior to all other methods. Notice that one assumption made in this study was that the expert guess was centered at the truth and of relatively good quality (mean $\xi$ and standard deviation $0.2$). If the latter conditions are changed, it is easy to construct a simulation study where both the perturbed and the Bayesian gamma solutions perform much worse. Consequently, the findings of this simulation study suggest that insurers that are very confident in their expert opinions might benefit from using the combination estimator $\hat\xi_k^P$ with $\lambda=1$. \begin{remark}\normalfont For the adaptive selection of $\lambda$, the procedure of cross-validation may naturally come to mind. However, the latter is based on averages rather than extremes. For instance, in a 10-fold cross-validation, the nine parts of the data that do not contain the maximum will tend to prefer lighter tail indices, and only the one part containing the maximum will suggest a heavy tail index, so that the overall index will be underestimated. Accordingly, cross-validation is not a method of choice in this context.
\end{remark} \begin{figure}[] \centering \includegraphics[width=12cm,trim=0.5cm 0.5cm 0.5cm 0.5cm,clip]{ss_exact.jpeg} \includegraphics[width=12cm,trim=0.5cm 0.5cm 0.5cm 0.5cm,clip]{ss_exact2.jpeg} \caption{Bias and (log) Mean Square Error for the exact Pareto distribution, for varying parameters. We compare $\hat\xi_k^P$ (orange, solid), $\hat\xi_k^{MLE}$ (red, dotted), $\hat\xi_k^{BG}$ (blue, dashed) and $\hat\xi_k^{BM}$ (purple, dashed and dotted), as well as the associated Weissman quantile estimator.} \label{sspareto} \end{figure} \begin{figure}[] \centering \includegraphics[width=12cm,trim=0.5cm 0.5cm 0.5cm 0.5cm,clip]{ss_burr.jpeg} \includegraphics[width=12cm,trim=0.5cm 0.5cm 0.5cm 0.5cm,clip]{ss_burr2.jpeg} \caption{Bias and (log) Mean Square Error for the Burr distribution, for varying parameters. We compare $\hat\xi_k^P$ (orange, solid), $\hat\xi_k^{MLE}$ (red, dotted), $\hat\xi_k^{BG}$ (blue, dashed) and $\hat\xi_k^{BM}$ (purple, dashed and dotted), as well as the associated Weissman quantile estimator.} \label{ssburr} \end{figure} \begin{figure}[] \centering \includegraphics[width=12cm,trim=0.5cm 0.5cm 0.5cm 0.5cm,clip]{ss_frechet.jpeg} \includegraphics[width=12cm,trim=0.5cm 0.5cm 0.5cm 0.5cm,clip]{ss_frechet2.jpeg} \caption{Bias and (log) Mean Square Error for the Fr\'{e}chet distribution, for varying parameters. We compare $\hat\xi_k^P$ (orange, solid), $\hat\xi_k^{MLE}$ (red, dotted), $\hat\xi_k^{BG}$ (blue, dashed) and $\hat\xi_k^{BM}$ (purple, dashed and dotted), as well as the associated Weissman quantile estimator.} \label{ssfrechet} \end{figure} \subsection{Insurance Data} We consider a dataset from Motor Third Party Liability Insurance (MTPL) from a direct insurance company operating in the EU (cf. \cite[Sec.1.3.1]{abt}), consisting of yearly paid amounts to policyholders during the period 1995-2010. As of 2010, roughly $60\%$ of the total 837 claims are right-censored (open).
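The proportion of closed (uncensored) claims among the $k$ largest observations, which also serves as the weight $\hat p_k$ of the perturbed estimator, can be computed as follows; the toy inputs below are illustrative stand-ins, not the actual MTPL data:

```python
def closed_proportion_curve(sizes, closed):
    # For each k, the fraction of closed (uncensored) claims among the
    # k largest claim sizes; this is the weight p_k-hat of the paper.
    order = sorted(range(len(sizes)), key=lambda i: sizes[i], reverse=True)
    curve, n_closed = [], 0
    for k, i in enumerate(order, start=1):
        n_closed += closed[i]
        curve.append(n_closed / k)
    return curve
```

A roughly horizontal curve over the relevant range of $k$ supports the censoring-at-random assumption.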
The data are reported as soon as the incurred value exceeds the reporting threshold given in Figure 1.2 in \cite{abt}, and the histogram of the IBNR delays is given in Figure 1.3 in \cite{abt}. We also have an \textit{ultimate} estimate, which is the company's expert estimate of the eventual size of the claim. In Figure \ref{Description} we show several descriptive statistics of the data: the log-claim sizes, the Kaplan-Meier estimator of the data (cf. \cite{kaplan58}), the proportion of non-censoring (closed claims) as a function of the order statistics of the claims, and a QQ-plot of the log-claims against theoretical exponential quantiles. We observe that censoring at random is not a far-fetched assumption: the roughly horizontal behaviour of the proportion of closed claims as a function of the number of upper order statistics is consistent with the claim sizes being independent of their censoring probabilities. The Pareto tail behaviour of large (above 0.45 million, possibly censored) claim sizes seems to hold, based on the QQ-plot of their logarithm against theoretical exponential quantiles. Standard tests also do not reject the exponential hypothesis for the logarithm of these large claims (Kolmogorov-Smirnov p-value of 0.50). \begin{figure}[ht] \centering \includegraphics[width=13cm,trim=0.5cm 0.5cm 0.5cm 0.5cm,clip]{Description.jpeg} \caption{Descriptive statistics of the insurance data. Top left: log-claims in order of arrival, showing both open (red, circle) and closed (black, dot) claims. Top right: Kaplan-Meier survival probability estimator for the claims. Bottom left: proportion of closed claims as a function of the top $k$ order statistics of the claim sizes.
Bottom right: QQ plot of the logarithm of the claims larger than 0.54 million euro, against the theoretical exponential quantiles with the same mean.} \label{Description} \end{figure} Now we would like to know how the ultimates can help to estimate the tail parameter. In \cite{bladt}, the ultimates of this dataset were explored, and it was observed that they are Pareto in the tail. Furthermore, using developments in threshold selection based on trimming techniques, it was shown that $\xi=0.48$ is a good estimate for the heaviness of the tail of the ultimates. In Figure \ref{Hill_plot} we show the Hill plot for the ultimates, together with the chosen expert $\xi$ value, the censored Hill estimator $\hat\xi_k^{MLE}$, and the perturbed version $\hat\xi_k^P$ with $\lambda=1$ and $\beta=1/0.48$. Notice that in this case we know how $\beta$ is obtained, and this additional knowledge could be useful. However, our method does not assume any specific structure, which means that any other method can be used to obtain $\beta$; we merely give the current one for the sake of example. We observe a particularly stable region when $k$ is between $20$ and $70$, which suggests a heavier tail (roughly $0.65$) than the ultimates alone predict. We will see below how this also affects the quantiles. \begin{figure}[] \centering \includegraphics[width=12cm,trim=0.5cm 0.5cm 0.5cm 0.5cm,clip]{Hill_plot.jpeg} \caption{Hill plot of the ultimates (black, dashed), censored Hill estimator $\hat\xi_k^{MLE}$ for the claims (red, dashed), and the combined estimator $\hat\xi_k^P$ (orange, solid) with $\lambda=1$ and $\beta=1/0.48$.} \label{Hill_plot} \end{figure} As a way of validating our estimation procedure we perform the following check. We consider all claims arriving in the shorter period $1995$-$2000$ and follow exclusively these $310$ older claims until $2010$. The proportion of censoring at $2010$ drops to roughly $29.5\%$.
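The Hill plot for the ultimates is based on the classical Hill estimator $H^U_k$, the average of the top log-spacings. A minimal sketch of the standard formula (generic, not tied to the specific dataset):

```python
import math

def hill(sample, k):
    # Hill estimator based on the k largest observations:
    # H_k = (1/k) * sum_{i=1..k} [log X_(n-i+1) - log X_(n-k)].
    x = sorted(sample)
    n = len(x)
    threshold = math.log(x[n - k - 1])   # log of the (k+1)-th largest value
    return sum(math.log(v) for v in x[n - k:]) / k - threshold
```

A Hill plot then displays $k\mapsto H_k$; a stable region such as the one mentioned above corresponds to a plateau of this curve.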
We examine the censored Hill estimator and the perturbed estimator (using the same $\lambda=1$ and $1/\beta=0.48$ as before) for this reduced dataset, and plot them in Figure \ref{Hill_plot_verif}, together with the corresponding estimators using the full data which we had previously obtained. We observe that the censored Hill estimator for the reduced data dropped its value in the most stable region by about $0.2$, almost reaching the perturbed estimator for both the complete and reduced datasets, showing that as the proportion of censoring decreases, the estimators come closer together. Notice that the perturbed estimator remained remarkably stable, even though the penalization parameter stayed at the same value while the sample size decreased, since the proportion of censored claims controls the strength of the penalization in a natural way. Finally, we add the corresponding analysis of the $99.5\%$ quantile (which is relevant for Value-at-Risk considerations) for the case where the expert quantile information is given by the empirical distribution function of the ultimates, and combine the Hill estimator and the expert information by means of Equation \eqref{perturbed_quantile}. That is, \begin{align*} \hat Q^{P}_k(1-p)&=\left[\hat Q^{KM}(1-k/n)\cdot\left(\frac{k}{np}\right)^{\hat \xi^{MLE}_k}\right]^{\hat p_k}\cdot \left[\hat Q^{ULT}(1-k/n)\cdot\left(\frac{k}{np}\right)^{1/\beta}\right]^{1-\hat p_k}\\ &=(\hat Q^{KM}(1-k/n))^{\hat p_k}(\hat Q^{ULT}(1-k/n))^{1-\hat p_k}\cdot\left(\frac{k}{np}\right)^{\hat\xi^P_k}, \end{align*} where $\hat\xi^P_k=\hat p_k\hat \xi^{MLE}_k+(1-\hat p_k)/\beta$, $\hat p_k=\frac 1k \sum_{i=1}^k \ee^{(i)}$, $\hat Q^{KM}$ is the quantile function associated with the Kaplan-Meier curve of the claims, and $\hat Q^{ULT}$ is the quantile function associated with the empirical distribution function of the ultimates.
\noindent The quantile coming from the ultimates alone is given by \begin{align*} \hat Q^{ULT}_k(1-p)=\hat Q^{ULT}(1-k/n)\cdot\left(\frac{k}{np}\right)^{H^U_k}, \end{align*} where $H^U_k$ is the Hill estimator of the ultimates. Finally, without any expert information (ignoring the ultimates), the quantile is given by \begin{align*} \hat Q^{KM}_k(1-p)=\hat Q^{KM}(1-k/n)\cdot\left(\frac{k}{np}\right)^{\hat \xi^{MLE}_k}. \end{align*} \noindent Note that, due to missing IBNR data at the later accident years, some care is needed concerning the interpretation of these quantile estimates as the outcome levels which are exceeded in $0.5\%$ {\it of the reported cases}. However, as these IBNR data concern smaller losses, the influence of these omissions is limited, as can be verified by restricting the proposed approach to the claims from earlier accident years and comparing with the present results. The results are gathered as a function of the number $k$ of upper order statistics in Figure \ref{VaR}. The combined estimation of the high quantile results in a stable compromise between the under-estimated quantiles from the expert opinions and the pure Weissman approach, which has higher variability. Such under-estimation of the size of the claims at closure by the ultimates was also observed empirically while exploring the data (details are omitted). Observe also Figure \ref{VaR_verif}, where the data reduction used above to validate the procedure is applied to the quantiles; an analogous interpretation applies. This suggests that the current reserving could benefit from a re-evaluation. However, this analysis is made without knowing the actual process behind the calculation of the ultimates, and a deeper understanding of this process could in the future elucidate whether there is something being overlooked by the experts or by the statisticians.
\begin{figure}[] \centering \includegraphics[width=12cm,trim=0.5cm 0.5cm 0.5cm 0.5cm,clip]{Hill_plot_verif.jpeg} \caption{Hill plot of the ultimates (black), for the reduced (solid) and complete (dashed) datasets: censored Hill estimator $\hat\xi_k^{MLE}$ for the claims (red), and the combined estimator $\hat\xi_k^P$ (orange) with $\lambda=1$ and $\beta=1/0.48$.} \label{Hill_plot_verif} \end{figure} \begin{figure}[] \centering \includegraphics[width=12cm,trim=.5cm .5cm 0cm .5cm,clip]{VaR.jpeg} \caption{$99.5\%$ quantile estimator using the censored approach ($\hat Q_k^{KM}(0.005)$, red) for the claims, expert information ($\hat Q_k^{ULT}(0.005)$, black) and their combination via $\hat \xi^{P}_k$, with the selection $\lambda=1$ ($\hat Q^P_k(0.005)$, orange).} \label{VaR} \end{figure} \begin{figure}[] \centering \includegraphics[width=12cm,trim=.5cm .5cm 0cm .5cm,clip]{VaR_verif.jpeg} \caption{Plot of the $99.5\%$ quantile for the ultimates $\hat Q^{ULT}_k$ (black), and for the reduced (solid) and complete (dashed) datasets: censored Hill estimation $\hat Q_k^{KM}$ for the claims (red), and the combined estimator $\hat Q_k^P$ (orange) with $\lambda=1$ and $\beta=1/0.48$.} \label{VaR_verif} \end{figure} \section{Conclusion}\label{conclusion_sect} We have derived a flexible estimator that bridges statistical theory and practice when it comes to tail estimation. The results also carry over to quantile estimation, both when richer expert information is available (for instance, an expert cumulative distribution function) and when it is lacking. As in Bayesian statistics, the strength of the belief of the expert is often subjective and in many cases unquantifiable, especially when provided as a single point estimate. As discussed in the paper, our method is in fact closely related to Bayesian techniques, but it is driven by the proportion of censoring, rather than by the total number of observations.
The developed estimator represents a statistically sound method for making a compromise between expert information and likelihood methods, without the need for any additional prior assumptions, and its performance depends on the quality of the expert guess. In particular, we suggested a convenient approach to avoid selection of a tuning parameter for the linking of expert information and Hill estimation. The methods developed can readily be adapted for the selection of the tuning parameter using more complex methods (such as moment matching) whenever more expert information is available than presently assumed. For heavy tails, the estimator is shown to be asymptotically normal, and it has further desirable properties when the tuning parameter is chosen to be $1$. Indeed, Theorem \ref{lambdaone} can serve as a simple rule of thumb in practice for combining the two sources of information, and suggests that using good quality expert information can reduce the variance while keeping the bias at bay. This rule appears to be rather natural, and the approach in this paper enables us to embed this intuitive combination within the theory of perturbed likelihood estimation. A more detailed analysis would depend on the specific application at hand, and on the quantifiability of the strength of beliefs, which is commonly lacking in the present liability insurance dataset, and more generally in any analysis made by statisticians without the experts present. A simulation study showed that when the guess is close to being correct, the estimator fares very favorably compared with the Hill estimator and two recently proposed Bayesian solutions. Moreover, the estimator seems to be quite stable with respect to the chosen threshold, which is of particular interest since the choice of an appropriate threshold is a classical problem in extreme value analysis.
Concerning quantiles, and for the simulated examples, the estimator compared favorably with all the benchmarks for virtually all sample fractions in the non-exact Pareto tail cases. Trimming techniques have recently been proposed to address threshold selection, and a future line of research will be to consider lower-trimmed versions of the proposed perturbed estimator to aid in the visual and automatic selection of the sample fraction. Finally, the application of the method to actual motor third party liability insurance data illustrates that decision makers with a strong belief in a point estimate of the tail parameter may be less reluctant to use the tail parameter and quantiles suggested by our method, which incorporates the data points, than those obtained from pure censored Hill estimation of the data. \\ \textbf{Acknowledgement.} H.A. acknowledges financial support from the Swiss National Science Foundation Project 200021\_168993.
\section{The main result} A Banach space $X$ is said to have the \emph{Schur property} if any weakly null sequence in $X$ converges to zero in norm. Equivalently, $X$ has the Schur property if every weakly Cauchy sequence is norm Cauchy. The classical example of a space with the Schur property is the space $\ell_1$ of all absolutely summable sequences. A quantitative version of the Schur property was introduced and studied in \cite{qschur}. Let us recall the definition. If $(x_k)$ is a bounded sequence in a Banach space $X$, we set (following \cite{qschur}) $$\ca{x_k}=\inf_{n\in\en}\diam\{x_k:k\ge n\}$$ and $$\de{x_k}=\sup_{x^*\in B_{X^*}} \inf_{n\in\en} \diam\{x^*(x_k):k\ge n\}.$$ Then the quantity $\ca{\cdot}$ measures how far the sequence is from being norm Cauchy, while the quantity $\de{\cdot}$ measures how far it is from being weakly Cauchy. It is easy to check that the quantity $\de{x_k}$ can be alternatively described as the diameter of the set of all weak* cluster points of $(x_k)$ in $X^{**}$. Following again \cite{qschur}, a Banach space $X$ is said to have the \emph{$C$-Schur property} (where $C\ge 0$) if \begin{equation} \label{eq:qsch1} \ca{x_k}\le C \de{x_k} \end{equation} for any bounded sequence $(x_k)$ in $X$. Since obviously $\de{x_k}\le \ca{x_k}$ for any bounded sequence $(x_k)$, necessarily $C\ge 1$ (unless $X$ is the trivial space). Moreover, if $X$ has the $C$-Schur property for some $C\ge 1$, it easily follows that $X$ has the Schur property. Indeed, if $(x_k)$ is weakly Cauchy in $X$, then $\de{x_k}=0$, and thus $\ca{x_k}=0$. The space constructed in \cite[Example 1.4]{qschur} serves as an example of a Banach space with the Schur property without the $C$-Schur property for any $C>0$. On the other hand, $\ell_1(\Gamma)$ possesses the $1$-Schur property (see \cite[Theorem~1.3]{qschur}). Our main result is the following generalization of the quoted theorem. \begin{thm} \label{t:c01} Let $X$ be a subspace of $c_0(\Gamma)$. 
Then $X^*$ has the $1$-Schur property. \end{thm} Let us now proceed to the proof of the main result. We will need some lemmas. The first one establishes a special property of the norm on $c_0(\Gamma)$ and its subspaces. \begin{lemma}\label{m1} Let $X$ be a subspace of $c_0(\Gamma)$. Then for any $x^*\in X^*$ and any sequence $(x_n^*)$ in $X^*$ which weak$^*$ converges to $0$ we have $$\limsup\|x_n^*+x^*\|=\|x^*\|+\limsup\|x_n^*\|.$$ \end{lemma} \begin{proof} Let us first suppose that $X$ is separable. It is obvious that for any $x\in X$ and any weakly null sequence $(x_n)$ in $X$ we have $$\limsup\|x_n+x\|=\max (\|x\|,\limsup\|x_n\|).$$ The assertion then follows from \cite[Theorem 2.6]{KaWe} (applied for $p=\infty$). The general case follows by a separable reduction argument. Suppose that $x^*\in X^*$ and that $(x_n^*)$ is a weak$^*$ null sequence in $X^*$. Let us consider the countable set $$A=\{x^*\}\cup\{x_n^*:n\in\en\}\cup\{x_n^*+x^*:n\in\en\}.$$ We can find a separable subspace $Y\subset X$ such that for each $y^*\in A$ we have $\|y^*\|=\|y^*|_Y\|$. Then the assertion follows immediately from the separable case. \end{proof} The next one is a stronger variant of \cite[Lemma~1.7]{brown} or \cite[Lemma 2.3]{KaWe} for the special case of subspaces of $c_0(\Gamma)$. \begin{lemma} \label{l:c01} Let $X$ be a subspace of $c_0(\Gamma)$ and $(x_n^*)$ be a sequence in $X^*$ weak$^*$ converging to $x^*$. Then for any finite dimensional subspace $F\subset X^*$ we have \[ \liminf \dist(x_n^*, F)\ge\liminf\|x_n^*\|-\|x^*\|. \] \end{lemma} \begin{proof} Let $c>\liminf \dist(x_n^*, F)$ be arbitrary. By passing to a subsequence we may assume that $\dist(x_n^*, F)<c$ for each $n\in\en$. We can thus find a sequence $(y_n^*)$ in $F$ such that $\|x_n^*-y_n^*\|<c$ for each $n\in\en$. Since the sequence $(x_n^*)$ is bounded, the sequence $(y_n^*)$ is bounded as well.
Therefore we can, up to passing to a subsequence, suppose that the sequence $(y_n^*)$ converges in norm to some $y^*\in F$. Then $$\begin{aligned}c&\ge\limsup\|x_n^*-y_n^*\|=\limsup\|x_n^*-y^*\| =\limsup\|(x_n^*-x^*)+(x^*-y^*)\| \\ &=\limsup\|x_n^*-x^*\|+\|x^*-y^*\| \ge\limsup\|x_n^*\|-\|x^*\|+\|x^*-y^*\| \\& \ge \liminf\|x_n^*\|-\|x^*\|.\end{aligned}$$ The first equality follows from the fact that the sequence $(y_n^*)$ converges to $y^*$ in the norm, the third one follows from Lemma~\ref{m1}. The remaining steps are trivial. This completes the proof. \end{proof} The next lemma is a refinement of constructions from \cite[Lemma~2.1]{qschur} and \cite[Theorem~1.1]{brown}. During its proof we will use the following notation: if $x\in c_0(\Gamma)$ or $x\in \ell_1(\Gamma)$ and $A\subset \Gamma$, then $x\r_A$ denotes an element defined as \[ (x\r_A)(\gamma)=\begin{cases} x(\gamma), &\gamma\in A,\\ 0,& \gamma\in \Gamma\setminus A. \end{cases} \] \begin{lemma}\label{l:c02} Let $X$ be a subspace of $c_0(\Gamma)$, $c>0$ and $(y_n)$ be a sequence in $\ell_1(\Gamma)=c_0(\Gamma)^*$ such that \begin{itemize} \item $(y_n)$ weak$^*$ converges to $0$ in $\ell_1(\Gamma)$, \item $\|y_n|_X\|>c$ for each $n\in\en$. \end{itemize} Then for any $\eta>0$ there is a subsequence $(y_{n_k})$ such that each weak$^*$ cluster point of $(y_{n_k}|_X)$ in $X^{***}$ has norm at least $c-\eta$. \end{lemma} \begin{proof} For $n\in\en$ set $\varphi_n=y_n|_X$. Let $\ep\in (0,\frac{c}{6})$ be arbitrary. Without loss of generality, we may assume that $\ep<1$. We select strictly positive numbers $(\ep_k)$ such that $\sum_{k=1}^\infty \ep_k<\ep$. 
We inductively construct elements $x_k\in X$, indices $n_1<n_2<\cdots$ and finite sets $\emptyset=\Gamma_0\subset\Gamma_1\subset\Gamma_2\subset\cdots\subset \Gamma$ such that, for each $k\in\en$, \begin{enumerate} \item [(a)] $\|x_k\|\le 1$, $x_k\r_{\Gamma_{k-1}}=0$ and $\|x_k\r_{\Gamma\setminus \Gamma_k}\|<\ep_k$, \item [(b)] $|\varphi_{n_k}(x_k)|>c-\ep$ and $|\varphi_{n_k}(\sum_{i=1}^{k-1} x_i)|\le\ep\cdot \|\sum_{i=1}^{k-1} x_i\|$, \item [(c)] if we denote $y_{n_k}^{1}=y_{n_k}\r_{\Gamma_k}$ and $y_{n_k}^{2}=y_{n_k}\r_{\Gamma\setminus \Gamma_k}$, then $\|y_{n_k}^{2}\|<\ep_k$. \end{enumerate} In the first step, we set $\Gamma_0=\emptyset$ and $n_1=1$. Since $\|\varphi_{n_1}\|>c$, there is $x_1\in B_X$ with $|\varphi_{n_1}(x_1)|>c$. Let us choose a finite set $\Gamma_1\subset \Gamma$ satisfying \[ \|x_1\r_{\Gamma\setminus \Gamma_1}\|<\ep_1\quad\text{and}\quad \|y_{n_1}\r_{\Gamma\setminus \Gamma_1}\|<\ep_1. \] Since the second requirement in (b) is vacuous, the first step is finished. Assume now that we have found indices $n_1<\cdots <n_k$, finite sets $\emptyset=\Gamma_0\subset \cdots\subset \Gamma_{k}$ and elements $x_1,\dots,x_k$ satisfying (a), (b) and (c). We define an operator $R_k:X\to c_0(\Gamma)$ as \[ R_kx=x\r_{\Gamma_k},\quad x\in X. \] Then $\Ker R_k$ is of finite codimension, and thus $F_k=(\Ker R_k)^\perp$ is a finite dimensional subspace of $X^*$. Let $m\in\en$ be chosen such that, for each $n\ge m$, \begin{itemize} \item $|\varphi_n(\sum_{i=1}^{k-1} x_i)|\le\ep\cdot \|\sum_{i=1}^{k-1} x_i\|$, and \item $\dist (\varphi_n, F_k)>c-\ep$. \end{itemize} (The first requirement can be fulfilled due to the fact that $(\varphi_n)$ converges weak$^*$ to $0$, and the second one due to Lemma~\ref{l:c01}.) Let $n_{k+1}=m$ and \[ x_{k+1}\in (F_k)_\perp=\Ker R_k \] be chosen such that $\|x_{k+1}\|\le 1$ and \[ \varphi_{n_{k+1}}(x_{k+1})>c-\ep \] (we use the fact that $X^*/F_k=((F_k)_\perp)^*$).
We find a finite set $\Gamma_{k+1}\supset \Gamma_{k}$ satisfying \[ \|x_{k+1}\r_{\Gamma\setminus\Gamma_{k+1}}\|<\ep_{k+1}\quad\text{and}\quad\|y_{n_{k+1}}\r_{\Gamma\setminus\Gamma_{k+1}}\|<\ep_{k+1}. \] This finishes the construction. For $J\in\en$, let \[ u_J=\sum_{i=1}^J x_i. \] It follows from (a) that, for each $k\in\en$ and $J>k$, we have \begin{equation} \label{e:c05.5} \left\|\sum_{i=1}^k x_i\right\|<1+\ep,\quad \left\|\;\sum_{i=1}^{k-1} x_i\right\|< 1+\ep,\quad \left\|\;\sum_{i=k+1}^{J} x_i\right\|<1+\ep. \end{equation} Indeed, for $k\in\en$ and $\gamma\in \Gamma_k\setminus\Gamma_{k-1}$, we have from (a) \[ |x_j(\gamma)|\le\begin{cases} \ep_j,& j<k,\\ 1,& j=k,\\ 0,& j>k, \end{cases} \quad j\in\en. \] Further, $x_k$ is bounded by $\ep_k$ on $\Gamma\setminus \bigcup_{j=1}^\infty\Gamma_j$ by (a). These observations verify \eqref{e:c05.5}. For each $k\in\en$, we set \[ \varphi_{n_k}^{1}=y_{n_k}^{1}\r_X\quad\text{and}\quad \varphi_{n_k}^{2}=y_{n_k}^{2}\r_X. \] For a fixed index $k\in\en$ and arbitrary $J>k$, we need to estimate \begin{equation} \label{e:c06} |\varphi_{n_k}(u_J)|=\left|\varphi_{n_k}\left(\sum_{i=1}^{k-1} x_i\right)+\varphi_{n_k}(x_k)+\varphi_{n_k}\left(\sum_{i=k+1}^J x_i\right)\right|. \end{equation} Condition (b) and \eqref{e:c05.5} ensure that \begin{equation} \label{e:c08} \left|\varphi_{n_k}\left(\sum_{i=1}^{k-1} x_i\right)\right|\le\ep\cdot\left\|\;\sum_{i=1}^{k-1} x_i\right\|<\ep (1+\ep). \end{equation} From (b) we also have \begin{equation} \label{e:c09} \aligned |\varphi_{n_k}(x_{k})|>c-\ep. \endaligned \end{equation} Finally, (a) and (c) give \begin{equation} \label{e:c010} \aligned \left|\;\varphi_{n_k}\left(\sum_{i=k+1}^J x_i\right)\right|&=\left|\left(\varphi_{n_k}^{1}+\varphi_{n_k}^{2}\right)\left(\sum_{i=k+1}^J x_i\right)\right|\\ &=\left|\;y_{n_k}^{2}\left(\sum_{i=k+1}^J x_i\right)\right|\le \ep_{k}\cdot\left\|\;\sum_{i=k+1}^J x_i\right\|\\ &<\ep_k(1+\ep).
\endaligned \end{equation} Using \eqref{e:c08}--\eqref{e:c010} in \eqref{e:c06}, we get \begin{equation} \label{e:c011} \aligned |\varphi_{n_k}(u_J)|&\ge c-\ep -\ep(1+\ep)-\ep_k(1+\ep)\\ &\ge c-\ep(3+2\ep)\ge c-5\ep. \endaligned \end{equation} It follows from \eqref{e:c011} that, for $z_J=(1+\ep)^{-1}u_J$, we have $z_J\in B_X$ by \eqref{e:c05.5} and \[ |\varphi_{n_k}(z_J)|>(1+\ep)^{-1}\left(c-5\ep\right),\quad k\in\en, J>k. \] Let $z^{**}\in B_{X^{**}}$ be a weak$^*$ cluster point of $(z_J)$. Then \begin{equation} \label{e:c012} |\varphi_{n_k}(z^{**})|\ge (1+\ep)^{-1}\left(c-5\ep\right),\quad k\in\en. \end{equation} It follows that each weak$^*$ cluster point of $(\varphi_{n_k})$ has norm at least $(1+\ep)^{-1}(c-5\ep)$. This completes the proof since, given $\eta>0$, we can choose $\ep$ at the beginning such that $$(1+\ep)^{-1}\left(c-5\ep\right)>c-\eta.$$ \end{proof} Now we are ready to prove the theorem: \begin{proof}[Proof of Theorem~\ref{t:c01}] Let $X$ be a subspace of $c_0(\Gamma)$ and $(x_n^*)$ be a sequence in $X^*$ bounded by a constant $M$. We consider an arbitrary $0<c<\ca{x_n^*}$. We extract subsequences $(a_n)$ and $(b_n)$ from $(x_n^*)$ such that \begin{equation} \label{e:c00} c<\|a_n-b_n\|,\quad n\in\en. \end{equation} We denote $\varphi_n=a_n-b_n$, $n\in\en$. We extend $a_n$ to $A_n\in\ell_1(\Gamma)$ and $\varphi_n$ to $z_n\in\ell_1(\Gamma)$ with preservation of the norm and set $B_n=A_n-z_n$. Then $B_n$ is an extension of $b_n$ (not necessarily preserving the norm). By passing to a subsequence if necessary, assume that $(A_n)$ converges pointwise (and hence weak$^*$ in $\ell_1(\Gamma)$) to some $A\in\ell_1(\Gamma)$ and $(B_n)$ converges pointwise to some $B\in \ell_1(\Gamma)$. (This is possible due to the fact that any sequence in $\ell_1(\Gamma)$ can be viewed as a sequence in $\ell_1(\Gamma')$ for a countable $\Gamma'\subset \Gamma$.) Then $(z_n)$ weak$^*$ converges to $A-B$. Set $y_n=z_n-A+B$ for $n\in\en$.
Then $(y_n)$ weak$^*$ converges to $0$ and $\|y_n|_X\|>c-\|(A-B)|_X\|$ for each $n\in\en$. Let $\ep>0$ be arbitrary. By Lemma~\ref{l:c02}, there is a subsequence $(y_{n_k})$ such that each weak$^*$ cluster point of $(y_{n_k}|_X)$ in $X^{***}$ has norm at least $$c-\|(A-B)|_X\|-\ep.$$ Let $a$ be a weak$^*$ cluster point of $(a_{n_k})$ in $X^{***}$. Let $(a_\tau)$ be a subnet of $(a_{n_k})$ weak$^*$ converging to $a$. Let $b$ be a weak$^*$ cluster point of the net $(b_\tau)$. Then $a$ and $b$ are weak$^*$ cluster points of $(x_n^*)$ in $X^{***}$. Obviously $a|_X=A|_X$ and $b|_X=B|_X$ and, moreover, $a-b-(a-b)|_X=a-b-(A-B)|_X$ is a weak$^*$ cluster point of $(y_{n_k}|_X)$ in $X^{***}$. Thus $$\|a-b-(a-b)|_X\|\ge c-\|(A-B)|_X\|-\ep.$$ Further, let $F\in (\ell_\infty(\Gamma))^*=c_0(\Gamma)^{***}$ be a norm-preserving extension of $a-b$. Then $$\begin{aligned}\|a-b\|&=\|F\|=\|F|_{c_0(\Gamma)}\|+\|F-F|_{c_0(\Gamma)}\| \ge \|F|_X\|+\|(F-F|_{c_0(\Gamma)})\r_{X^{**}}\|\\ &=\|(A-B)|_X\|+\|a-b-(a-b)|_X\|\\&\ge \|(A-B)|_X\|+c-\|(A-B)|_X\|-\ep\\ &=c-\ep. \end{aligned}$$ (Let us remark that, for a Banach space $Y$ and $G\in Y^{***}$, we denote by $G|_Y$ the respective element of $Y^*$ canonically embedded into $Y^{***}$.) It follows that $\de{x_k^*}\ge c-\ep$. Since $\ep>0$ is arbitrary, $\de{x_k^*}\ge c$. Hence $\ca{x_k^*}\le \de{x_k^*}$ and the proof is completed. \end{proof} \section{Quantitative Schur property and quantitative Dunford-Pettis property} It is well known that the Schur property is closely related to the Dunford-Pettis property. Recall that a Banach space $X$ is said to have the \emph{Dunford-Pettis property} if for any Banach space $Y$ every weakly compact operator $T:X\to Y$ is completely continuous.
Let us further recall that $T$ is \emph{weakly compact} if the image by $T$ of the unit ball of $X$ is relatively weakly compact in $Y$, and that $T$ is \emph{completely continuous} if it maps weakly convergent sequences to norm convergent ones, or, equivalently, if it maps weakly Cauchy sequences to norm Cauchy (hence norm convergent) ones. Obviously, any Banach space with the Schur property has the Dunford-Pettis property. Further, any Banach space whose dual has the Schur property enjoys the Dunford-Pettis property as well. Quantitative variants of the Dunford-Pettis property were studied in \cite{kks-adv}, where two quantitative strengthenings of the Dunford-Pettis property were introduced (the \emph{direct quantitative Dunford-Pettis property} and the \emph{dual quantitative Dunford-Pettis property}, see \cite[Definition~5.6]{kks-adv}). Section 6 of \cite{kks-adv} shows several relations between the Schur property and the two variants of the quantitative Dunford-Pettis property. In this section we focus on the relationship between the quantitative Schur property and quantitative versions of the Dunford-Pettis property. The unexplained notation and notions in this section are taken from \cite{kks-adv}. More specifically, the quantities $\ca[\rho^*]{\cdot}$ and $\ca[\rho]{\cdot}$ measure how far the given sequence is from being Cauchy in the Mackey topology of $X^*$ or the restriction to $X$ of the Mackey topology of $X^{**}$, respectively. The quantity $\wde{\cdot}$ is defined by taking the infimum of $\de{\cdot}$ over all subsequences. Similarly for $\wca{\cdot}$, $\wca[\rho^*]{\cdot}$ and $\wca[\rho]{\cdot}$. These quantities are defined and described in detail in \cite[Section 2.3]{kks-adv}. Further, $\dh(\cdot,\cdot)$ is the non-symmetrized Hausdorff distance, $\chi(\cdot)$ denotes the Hausdorff measure of norm non-compactness, and $\omega(\cdot)$ and $\wk{\cdot}$ are measures of weak non-compactness; see \cite[Section 2.5]{kks-adv}.
Applying a measure of (weak) non-compactness to an operator means applying it to the image of the unit ball (see \cite[Section 2.6]{kks-adv}). Finally, the quantity $\cc{\cdot}$ measures how far the given operator is from being completely continuous, i.e., if $T:X\to Y$ is an operator, then $$\cc{T}=\sup\{\ca{Tx_k}: (x_k)\mbox{ is a weakly Cauchy sequence in }B_X\},$$ see \cite[Section 2.4]{kks-adv}. It is obvious that a Banach space $X$ with the Schur property also possesses the direct quantitative Dunford-Pettis property (see \cite[Proposition 6.2]{kks-adv}). If we assume that $X$ has a $C$-Schur property, we get the following result. \begin{thm} \label{p:qsch1} Let $X$ be a Banach space with the $C$-Schur property where $C>0$. \begin{itemize} \item[(i)] It holds that $\ca[\rho]{x_n}\le C\de{x_n}$ for any bounded sequence $(x_n)$ in $X$. In particular, $X$ has both the direct and the dual quantitative Dunford-Pettis properties. \item[(ii)] The space $X$ satisfies the following stronger version of the dual quantitative Dunford-Pettis property: If $A\subset X$ is a bounded set, then \begin{equation} \label{eq:qsch1.1} \wk{A}\le \omega(A)=\chi(A)\le 2C\wk{A}. \end{equation} \end{itemize} \end{thm} \begin{proof} The inequality in assertion (i) follows from the fact that $\ca[\rho]{x_n}\le\ca{x_n}$ for any bounded sequence $(x_n)$ in $X$ (this is an immediate consequence of the definitions). Thus $X$ satisfies condition (iv) of \cite[Theorem 5.5]{kks-adv}, i.e., $X$ possesses the dual quantitative Dunford-Pettis property. Further, from \cite[Proposition~6.2]{kks-adv} we know that $X$ has the direct quantitative Dunford-Pettis property. (ii) First we notice that \eqref{eq:qsch1.1} is indeed a stronger version of the dual quantitative Dunford-Pettis property. Indeed, using \cite[diagram (3.1) and formula (2.6)]{kks-adv} one can deduce from \eqref{eq:qsch1.1} the validity of condition (i) of \cite[Theorem 5.5]{kks-adv}.
For the proof of \eqref{eq:qsch1.1}, let $A$ be a bounded set in $X$. If $(x_k)$ is a bounded sequence in $X$, by consecutively taking infima in \eqref{eq:qsch1} over all subsequences we obtain \begin{equation} \label{eq:qsch2} \wca{x_k}\le C \wde{x_k}. \end{equation} By \cite[Theorem~1]{wesecom}, \begin{equation} \label{eq:wesecom} \wde{x_k}\le 2\dh (\clu{X}{x_k},X) \end{equation} for any bounded sequence $(x_k)$ in an arbitrary Banach space, and thus \eqref{eq:wesecom} together with \eqref{eq:qsch2} yields \begin{equation}\label{eq:wca} \wca{x_k}\le 2C\dh (\clu{X}{x_k},X). \end{equation} Since obviously (cf. \cite[inequalities (2.2)]{kks-adv}) \[ \chi(A)\le \sup\{\wca{x_k}: (x_k)\text{ is a sequence in }A\}, \] \eqref{eq:wca} yields \begin{equation}\label{eq:wca1} \chi(A)\le 2C\wk{A}. \end{equation} Since $X$ has the $C$-Schur property, it has the Schur property, and thus any weakly compact subset of $X$ is norm compact. Hence \begin{equation} \label{eq:qsch5} \chi(A)=\omega(A). \end{equation} A consecutive use of \cite[inequality (2.4)]{kks-adv}, \eqref{eq:qsch5}, and \eqref{eq:wca1} gives \[ \wk{A}\le \omega(A)=\chi(A)\le 2C\wk{A}, \] which is inequality \eqref{eq:qsch1.1}. \end{proof} If the dual $X^*$ of a Banach space $X$ possesses the Schur property, then by \cite[Theorem~6.3]{kks-adv} the space $X$ has the dual quantitative Dunford-Pettis property and, moreover, for any Banach space $Y$ and any operator $T:X\to Y$ the following inequalities hold: \begin{equation} \label{eq:qsch7} \wk[Y]{T}\le \omega(T)\le\chi(T)\le\cc{T}\le 2\omega(T^*)=2\chi(T^*)\le 4\chi(T). \end{equation} Thus the quantities $\chi(T)$, $\cc{T}$, $\chi(T^*)$ and $\omega(T^*)$ are equivalent in this case. However, the quantities $\omega(T)$ and $\wk[Y]{T}$ need not be equivalent to the others in this case, i.e., $X$ need not have the direct quantitative Dunford-Pettis property, see \cite[Example~10.1]{kks-adv}.
However, if we assume that $X^*$ has a quantitative version of the Schur property, we obtain that, for an operator $T$ with domain $X$, the compactness (both norm and weak) of $T$ and of its adjoint is quantitatively equivalent to the complete continuity of $T$. \begin{thm} \label{t:qsch1} Let $X$ be a Banach space such that $X^*$ has the $C$-Schur property for some $C\ge 0$. If $Y$ is a Banach space and $T:X\to Y$ is a bounded linear operator, we have \begin{equation} \label{eq:qsch6} \begin{aligned} \wk[Y]{T}\le \omega(T)&\le\chi(T)\le\cc{T}\\ &\le 2\omega(T^*)=2\chi(T^*)\le 4C\wk[X^*]{T^*} \le8C\wk[Y]{T}. \end{aligned} \end{equation} In particular, $X$ has both the direct and the dual quantitative Dunford-Pettis properties. \end{thm} \begin{proof} The first five inequalities are contained in \cite[Theorem 6.3(i)]{kks-adv}. The sixth inequality follows from Theorem~\ref{p:qsch1}. The last inequality follows from \cite[equation (2.8)]{kks-adv}. Further, $X^*$ has both the direct and the dual quantitative Dunford-Pettis property by Theorem~\ref{p:qsch1}(i). Hence $X$ itself possesses both the direct and the dual quantitative Dunford-Pettis property by \cite[Theorem~5.7]{kks-adv}. \end{proof} If we combine the previous theorem with Theorem~\ref{t:c01}, we immediately get the following. \begin{cor} Let $X$ be a subspace of $c_0(\Gamma)$. Then $X$ has both the direct and the dual quantitative Dunford-Pettis properties. Moreover, the inequalities \eqref{eq:qsch6} are satisfied with $C=1$. \end{cor} In the case $X=c_0(\Gamma)$, \cite[Theorem 8.2]{kks-adv} yields even stronger inequalities (with $C=1/2$); the proof in this case uses a different method. \smallskip We continue with a characterization of spaces whose dual has the quantitative Schur property. It is well known that the dual space $X^*$ of a Banach space $X$ has the Schur property if and only if $X$ has the Dunford-Pettis property and contains no copy of $\ell_1$ (see \cite[Theorem~3]{diestel}).
The following theorem quantifies this assertion. \begin{thm} \label{t:qsch-qdp} Let $X$ be a Banach space. Then $X^*$ has the quantitative Schur property if and only if $X$ has the direct quantitative Dunford-Pettis property and contains no copy of $\ell_1$. \end{thm} \begin{proof} Suppose that $X^*$ has the quantitative Schur property. Then $X$ contains no copy of $\ell_1$. Indeed, if $X$ contains an isomorphic copy of $\ell_1$, then by \cite[Proposition 3.3]{pelc} the dual space $X^*$ contains an isomorphic copy of $C(\{0,1\}^\en)^*$, hence also an isomorphic copy of $C([0,1])^*$. The space $C([0,1])^*$ fails the Schur property as it contains a copy of $L^1(0,1)$. Thus $X^*$ fails the Schur property as well. Further, $X$ has the direct quantitative Dunford-Pettis property by Theorem~\ref{p:qsch1}. For the proof of the converse implication we need the following consequence of Rosenthal's $\ell_1$-theorem. \begin{lemma}\label{lm-nonell1} Let $X$ be a Banach space not containing an isomorphic copy of $\ell_1$. Then any bounded sequence $(x_n^*)$ in $X^*$ satisfies $\ca{x_n^*}\le 3\ca[\rho^*]{x_n^*}$. \end{lemma} \begin{proof} If $(x_n^*)$ is norm-Cauchy, then the inequality is obvious. So, suppose that $\ca{x_n^*}>0$ and fix any $c\in(0,\ca{x_n^*})$. Then there is a sequence of natural numbers $l_n<m_n<l_{n+1},\,n\in\en,$ and a sequence $(x_n)$ in~$B_X$ such that $|(x_{l_n}^*-x_{m_n}^*)(x_n)|>c$ for every $n\in\en$. By Rosenthal's $\ell_1$-theorem, there is a weakly Cauchy subsequence of $(x_n)$. Let us assume, without loss of generality, that $l_n=2n-1$ and $m_n=2n$ for every $n\in\en$ and that $(x_n)$ is weakly Cauchy. Since, for every $k\in\en$, the singleton $\{x_k\}$ is a weakly compact set in~$B_X$, there is some $n_k>k$ such that $|(x_{2n_k-1}^*-x_{2n_k}^*)(x_k)|<\ca[\rho^*]{x_n^*}+\frac1k$.
Using this estimate and the fact that $\{\frac{x_{n_k}-x_k}2:k\in\en\}$ is a relatively weakly compact subset of $B_X$, we can write \begin{eqnarray*} c &\le& \limsup|(x_{2n_k-1}^*-x_{2n_k}^*)(x_{n_k})|\\ &\le& 2\limsup|(x_{2n_k-1}^*-x_{2n_k}^*)(2^{-1}(x_{n_k}-x_k))|+\limsup|(x_{2n_k-1}^*-x_{2n_k}^*)(x_k)|\\ &\le&2\ca[\rho^*]{x_n^*}+\limsup(\ca[\rho^*]{x_n^*}+\tfrac1k)=3\ca[\rho^*]{x_n^*}. \end{eqnarray*} This completes the proof. \end{proof} Suppose now that $X$ has the direct quantitative Dunford-Pettis property. Then there exists $C>0$ such that \[ \ca[\rho^*]{x_n^*}\le C\de{x_n^*} \] for any bounded sequence $(x_n^*)$ in $X^*$ (see \cite[Theorem~5.4(iv)]{kks-adv}). By Lemma~\ref{lm-nonell1}, \[ \ca{x_n^*}\le 3\ca[\rho^*]{x_n^*}\le 3C\de{x_n^*} \] for any bounded sequence $(x_n^*)$ in $X^*$. Hence $X^*$ has the $3C$-Schur property. \end{proof} \section{Subspaces of the space of compact operators} The space $K(\ell_2)$ of all compact operators on the Hilbert space $\ell_2$ can be viewed as a non-commutative version of $c_0$, and its dual $N(\ell_2)$, the space of all nuclear operators on $\ell_2$ equipped with the nuclear norm, can be viewed as a non-commutative version of $\ell_1$. The non-commutative versions share many properties of the commutative ones, but the Schur property and the Dunford-Pettis property are essentially commutative. Indeed, $N(\ell_2)$ does not have the Schur property and, moreover, $K(\ell_2)$ does not enjoy the Dunford-Pettis property. This is witnessed by the following easy example. Let $(e_n)$ denote the standard basis of $\ell_2$. Consider the operators $T_n(x)=\la x,e_1\ra e_n$, $x\in\ell_2$, and $S_n(x)=\la x,e_n\ra e_1$. These are rank-one operators; thus they are nuclear and hence compact. Moreover, both sequences converge weakly to $0$ both in $K(\ell_2)$ and in $N(\ell_2)$. That $N(\ell_2)$ fails the Schur property follows by observing that $\|S_n\|=\|T_n\|=\|e_1\|\|e_n\|=1$.
Moreover, the failure of the Dunford-Pettis property of $K(\ell_2)$ follows from the fact that $\operatorname{Tr}(S_nT_n)=1$. This easy observation was strengthened in \cite{sa-ty}, where the authors show that a subspace of $K(\ell_p)$, the space of compact operators on $\ell_p$, enjoys the Dunford-Pettis property if and only if it is isomorphic to a subspace of $c_0$ (i.e., only in the ``commutative case''). Theorem~\ref{t:c01} enables us to complement and strengthen their result to show that such a space automatically has a quantitative Dunford-Pettis property. More precisely, we prove the following: \begin{thm} \label{t:kop} Let $X$ be a subspace of the space $K(\ell_p)$ of compact operators on $\ell_p$ where $1<p<\infty$. Then the following assertions are equivalent: \begin{enumerate} \item[(i)] $X$ has the Dunford-Pettis property. \item[(ii)] $X^*$ has the Schur property. \item[(iii)] $X$ is isomorphic to a subspace of $c_0$. Moreover, in this case, for each $\ep>0$ there is an isomorphic embedding $T:X\to c_0$ such that $\|T\|\|T^{-1}\|<4+\varepsilon$. \item[(iv)] $X^*$ has the $4$-Schur property. \item[(v)] For each Banach space $Y$ and each bounded linear operator $T:X\to Y$, the inequalities \eqref{eq:qsch6} hold with $C=4$. \item[(vi)] The space $X$ has both the dual and the direct quantitative Dunford-Pettis properties. \end{enumerate} \end{thm} \begin{proof} The implication (ii) $\Rightarrow$ (i) is well known (see \cite[Theorem 3]{diestel}). (i) $\Rightarrow$ (iii) If $X$ has the Dunford-Pettis property, it is embeddable into $c_0$ by \cite[Theorem~1]{sa-ty}. Moreover, the embedding constant can be explicitly computed from \cite[Lemma~1 and~2]{sa-ty}. Indeed, the embedding $T:X\to c_0$ is constructed as the composition $\psi\circ\phi_A$, where $\phi_A$ is provided by \cite[Lemma~1]{sa-ty} and $\psi$ is provided by \cite[Lemma~2]{sa-ty}. The operator $\psi$ satisfies $\|\psi\|\|\psi^{-1}\|\le 4$ by \cite[p. 420]{sa-ty}.
Further, $\phi_A$ satisfies $\|\phi_A\|\|\phi_A^{-1}\|\le 3$ (see the computation in \cite[p. 418]{sa-ty}), but it can easily be modified to be an almost isometry. Indeed, if we replace the number $\frac14$ in \cite[formula (3) on p. 420]{sa-ty} by $\frac\ep2$, then we obtain $\|\phi_A\|\|\phi_A^{-1}\|\le\frac{1+\ep}{1-\ep}$. This completes the proof. The implication (iii) $\Rightarrow$ (iv) follows from Theorem~\ref{t:c01}. Indeed, let $T:X\to c_0$ be an embedding with $\|T\|=1$ and $\|T^{-1}\|\le 4+\ep$. Let $(x^*_n)$ be a bounded sequence in $X^*$. Then $((T^*)^{-1}x_n^*)$ is a bounded sequence in $(T(X))^*$ satisfying $\de{(T^*)^{-1}x_n^*}\le(4+\ep)\de{x_n^*}$. By Theorem~\ref{t:c01} we get $\ca{(T^*)^{-1}x_n^*}\le2(4+\ep)\de{x_n^*}$, hence $\ca{x_n^*}\le2(4+\ep)\de{x_n^*}$ as well. Since $\ep>0$ is arbitrary, the proof is finished. The implications (iv) $\Rightarrow$ (v) and (v) $\Rightarrow$ (vi) follow from Theorem~\ref{t:qsch1}. Finally, the implications (vi) $\Rightarrow$ (i) and (iv) $\Rightarrow$ (ii) are trivial. \end{proof} \def\cprime{$'$}
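As a side note, the rank-one operators $T_n$ and $S_n$ from the $K(\ell_2)$ example above can be checked numerically on finite truncations of $\ell_2$. The following Python sketch (not part of the original argument; indices are 0-based, so $e_1$ corresponds to index 0) verifies $\|T_n\|=\|S_n\|=1$ and $\operatorname{Tr}(S_nT_n)=1$:

```python
import numpy as np

DIM = 8  # finite truncation of ell_2; index 0 plays the role of e_1

def T_op(n, dim=DIM):
    # T_n x = <x, e_1> e_n as a dim x dim matrix
    M = np.zeros((dim, dim))
    M[n, 0] = 1.0
    return M

def S_op(n, dim=DIM):
    # S_n x = <x, e_n> e_1
    M = np.zeros((dim, dim))
    M[0, n] = 1.0
    return M

for n in range(1, DIM):
    Tn, Sn = T_op(n), S_op(n)
    # a rank-one map x -> <x,u> v has operator norm ||u|| ||v||, here 1
    assert np.isclose(np.linalg.norm(Tn, 2), 1.0)
    assert np.isclose(np.linalg.norm(Sn, 2), 1.0)
    # the trace pairing witnessing the failure of the Dunford-Pettis property
    assert np.isclose(np.trace(Sn @ Tn), 1.0)
```

Of course, the weak null convergence of $(T_n)$ and $(S_n)$ is an infinite-dimensional phenomenon and cannot be seen in a fixed truncation.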
\section{Introduction} Based on the statistics of embedded clusters in the Galaxy, \citet{lada03} concluded that a high fraction of young proto-clusters will dissolve, while only a few percent are likely to become bound clusters. Several reasons for this `infant mortality' have been brought forward, such as the expulsion of dust and gas by hot stars or supernovae \citep{tutukov78, whitworth79, goodwin06, bastian06} and dynamical effects \citep{lamers06, gieles07, wielen77}. The main effect of the dust and gas expulsion is the reduction of the total cluster potential, which makes the cluster more vulnerable to perturbations and eventual disruption. The dissolution time for clusters has been evaluated by comparing the sizes of old and young cluster populations assuming a nearly constant formation rate. This was done for the solar neighborhood \citep{lamers05, piskunov06}, where the dissolution time $t_4^\mathrm{dis}$ of a 10$^4$\,{M$_{\sun}$}\ cluster was estimated to be in the range of 0.3-1.0\,Gyr, with a power-law dependence on mass. Several nearby galaxies were investigated \citep{boutloukos03, scheepmaker09} and showed a significant spread in $t_4^\mathrm{dis}$, from 40\,Myr for the inner parts of M51 to 8\,Gyr for the Small Magellanic Cloud. \citet{chandar10} used a cluster distribution function (CDF) $g(M, \tau) \propto M^{\alpha} \tau^{\gamma}$ depending on cluster mass $M$ and time $\tau$ to analyze the clusters in the Large Magellanic Cloud. They determined an age exponent $\gamma = -0.8$, while \citet{gieles08} found a flat distribution with no significant age dependency. The distributions of cluster complexes in ten nearby, grand-design spiral galaxies were studied by \citet{grosbol12} using near-infrared (NIR) colors. They found observational evidence for a fast reduction of the extinction in young clusters because their colors were clearly separated from those of the older clusters with lower reddening.
In the current paper, we analyze the NIR color-magnitude distributions of cluster complexes in several nearby spirals to estimate parameters for the dust expulsion phase and the dissolution of young clusters. The data and the models used to reproduce them are described in the two following sections. The fitting procedure and the general behavior of the main model parameters are given in section~\ref{sec:behavior}, while results and conclusions are provided in the last section. \section{Data} \label{sec:data} We selected six of the grand-design spirals of the study by \citet{grosbol12}, for which more than 2000 cluster complexes were identified. The galaxies were observed in the NIR JH{K$_\mathrm{s}$}-bands with HAWK-I at the Very Large Telescope and are listed in Table~\ref{tbl:galaxies} together with their assumed distances, estimated from their systemic velocity relative to the 3K cosmic microwave background using a Hubble constant of 73 km~s$^{-1}$~Mpc$^{-1}$. The average seeing on the {K$_\mathrm{s}$}-maps was around 0\farcs4, which yields a linear resolution in the range of 20-40~pc. This is not sufficient to resolve individual clusters; therefore, many of the detected sources are likely to be cluster complexes. The total number of sources N$_s$ detected on the {K$_\mathrm{s}$}-images and the limiting magnitude K$^l$ for a 90\% completeness level are provided in Table~\ref{tbl:galaxies}. \begin{table} \caption[]{List of galaxies. Name, adopted distance D in Mpc, and limiting magnitude K$^l$ for a 90\% completeness level are listed. The total number of sources N$_s$ for which aperture photometry could be obtained is given, as well as the absolute magnitude limit M$_K^l$ and the corresponding number of non-stellar objects N$_c$. Finally, the CDF exponent $\alpha$ derived for young clusters is listed.
} \label{tbl:galaxies} \begin{tabular}{lrrrrrr} \hline\hline Galaxy & \multicolumn{1}{c}{D} & \multicolumn{1}{c}{K$^l$} & \multicolumn{1}{c}{N$_s$} & \multicolumn{1}{c}{M$_K^l$} & \multicolumn{1}{c}{N$_c$} & \multicolumn{1}{c}{$\alpha$} \\ \hline \object{NGC\,157} & 18.0 & 20.2 & 2254 & -11.1 & 569 & -1.62 \\ \object{NGC\,1232} & 19.8 & 20.6 & 3177 & -10.9 & 927 & -2.37 \\ \object{NGC\,1365} & 21.1 & 20.2 & 2417 & -11.5 & 827 & -1.97 \\ \object{NGC\,2997} & 19.2 & 20.1 & 5313 & -11.3 & 1757 & -2.29 \\ \object{NGC\,5247} & 22.6 & 19.8 & 2259 & -12.0 & 785 & -2.19 \\ \object{NGC\,7424} & 9.5 & 20.8 & 6137 & -9.7 & 1212 & -1.76 \\ \hline \end{tabular} \end{table} A typical distribution of clusters in a color-color diagram (CCD) is shown for NGC\,2997 in Fig.~\ref{fig:cc2997}, where cluster evolutionary tracks (CET) for single-burst stellar population (SSP) models from Padova \citep{marigo08} and Starburst99 \citep[hereafter SB99]{leitherer99, vazques05} are plotted for reference. The main difference between the two sets of CETs is the inclusion of nebular emission in the SB99 models. Reddening vectors for a visual extinction $A_V$ = 5{$^\mathrm{m}$}\ are also indicated for a standard Galactic `screen' model \citep{indebetouw05} and a `dusty' environment \citep{witt92, israel98}. Two groups can be distinguished: a densely populated group close to the old end of the CETs and one around (0\fm8, 1\fm1), which is composed of young clusters strongly attenuated by dust. These groups can also be seen in a color-magnitude diagram (CMD) as two separate branches (see Fig.~\ref{fig:cm2997}), which suggests that clusters suffer a fast reduction of extinction at an early evolutionary phase. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{aa21249-f1.pdf}} \caption[]{(H-K)--(J-H) diagram of non-stellar sources in NGC\,2997 with photometric errors $<$0\fm05. The magnitudes are indicated by color from blue (brighter) to red.
Cluster evolutionary tracks are drawn for the Padova and SB99 SSP models. Reddening vectors for screen and dusty models are also shown. } \label{fig:cc2997} \end{figure} These two distinct groups of cluster complexes can be identified in all six spirals. Whereas their absolute colors will depend on the detailed parameters of the underlying stellar population (e.g., initial mass function (IMF) and metallicity), the relative positions of the two groups are more directly determined by their early history, such as the amount of extinction and the evolutionary time scale. The clusters were separated into two groups by applying a k-means clustering algorithm \citep{macqueen67} to ensure an objective procedure. The grouping is shown in Figs.~\ref{fig:cc2997} and \ref{fig:cm2997} with different symbols. The older group contains 60-70\% of the clusters, and the color difference between the two groups is $\Delta$(J-K) $\approx$ 0\fm7. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{aa21249-f2.pdf}} \caption[]{(J-K)--{M$_\mathrm{K}$}\ diagram for non-stellar objects with photometric errors $<$0\fm1 in NGC\,2997. Evolutionary tracks for clusters with a mass of $5\times10^5$\,{M$_{\sun}$}\ and a `screen' reddening vector are shown. The symbols indicate the k-means clustering number, while colors display the reddening-corrected color index Q$_s$ = (H-K) - 0.564$\times$(J-H).} \label{fig:cm2997} \end{figure} \section{Models} \label{sec:models} A simple model was created to fit the NIR CMDs of stellar clusters as observed in the galaxies. The foundation of the model was a CDF $g(M, \tau) \propto M^{\alpha} \tau^{\gamma}$ with a power-law dependence on cluster final mass $M$ and age $\tau$ \citep{chandar10}. A constant formation rate was assumed for the galaxies. The actual mass $M_a$ of a single cluster was computed assuming that it had a star formation rate SFR = $M/\tau_s$ until $\tau_s$, after which star formation terminated.
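A separable power-law CDF of this kind, combined with a constant formation rate, can be sampled by inverse-transform Monte Carlo. The sketch below is illustrative only (the exponents and mass/age ranges are example values, not the paper's fitted parameters, and the paper's actual scipy-based code is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_powerlaw(u, lo, hi, k):
    """Inverse-transform sample of a density p(x) ~ x^k on [lo, hi] (k != -1)."""
    a = k + 1.0
    return (lo**a + u * (hi**a - lo**a)) ** (1.0 / a)

def sample_clusters(n, alpha=-2.0, gamma=-1.4,
                    m_lo=1e4, m_hi=1e7, t_lo=1.0, t_hi=1e3):
    """Draw (final mass M [Msun], age tau [Myr]) pairs from the separable
    CDF g(M, tau) ~ M^alpha tau^gamma; a constant formation rate enters
    only through the normalization, not the shape."""
    M = sample_powerlaw(rng.random(n), m_lo, m_hi, alpha)
    tau = sample_powerlaw(rng.random(n), t_lo, t_hi, gamma)
    return M, tau

M, tau = sample_clusters(100_000)
```

With a steep mass exponent such as $\alpha=-2$, most of the simulated population sits near the lower mass limit, which is why the reference samples in such fits need to be large.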
The intrinsic NIR magnitudes of the clusters were obtained using their actual masses and ages to interpolate in the SSP models from SB99 (v6.0.2). The age spread within the youngest clusters was taken into account by applying appropriate weights. A Kroupa IMF \citep{kroupa01} was used for the individual clusters. The upper limit of stellar masses $M_u$ for the IMF was varied in the range 30-120\,{M$_{\sun}$}. To estimate the importance of nebular emission for the intrinsic colors of young clusters, models with and without emission were computed. The general distribution in the CCDs indicates that attenuation by dust is very important at early stages, while later it becomes small. This was modeled by applying an extinction $A_V^x$ up to an age $\tau_x$, after which it decreased linearly to $A_V^o$ over a time $d\tau$. A simple scenario, in which supernovae expel gas and dust from the cluster and thereby stop star formation, suggests that $\tau_x\le\tau_s<\tau_x+d\tau$. In the more general case where hot stars also erode nearby dust and molecular clouds \citep{whitworth79}, $\tau_x$ and $\tau_s$ may be similar or even reversed. The wavelength dependence of extinction was assumed to follow a power law $A_V(\lambda) \propto \lambda^{-\beta}$ \citep{martin90}. A Galactic screen model for extinction \citep{indebetouw05} is reproduced by $\beta \sim 1.8$, while $\beta = 1.3$ gives R$_\mathrm{V}$ = 3.0 \citep{turner89}. Exponents around 0.5 yield values of E(H-K)/E(J-H) typical for integrated light from a dusty, star-forming environment \citep{witt92, israel98}. Finally, Gaussian errors were added to the intrinsic colors and the applied extinction to simulate observational errors and the spread in cluster initial conditions. The model code was written in Python using the {\it scipy} package to generate and analyze the CDFs. The clusters were created in the age range $1$-$10^3$\,Myr to cover the observed CMDs.
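The extinction history just described (constant $A_V^x$ up to $\tau_x$, linear decline to $A_V^o$ over $d\tau$, power-law wavelength dependence) can be written compactly as a single function. All parameter defaults below are illustrative placeholders in the ranges discussed in the paper, not fitted values:

```python
import numpy as np

def A_lambda(tau_myr, lam_um, A_x=10.5, A_o=3.5, tau_x=5.0, d_tau=8.0,
             beta=1.8, lam_V=0.55):
    """Extinction (mag) at cluster age tau_myr [Myr] and wavelength lam_um
    [micron]: A_V stays at A_x until tau_x, declines linearly to A_o over
    d_tau, and scales with wavelength as (lambda / lambda_V)^-beta.
    Parameter values are illustrative."""
    frac = np.clip((tau_myr - tau_x) / d_tau, 0.0, 1.0)
    A_V = A_x + (A_o - A_x) * frac
    return A_V * (lam_um / lam_V) ** (-beta)
```

For instance, `A_lambda(1.0, 0.55)` returns the initial V-band value, while at 2.2 micron (roughly the K$_\mathrm{s}$ band) the same cluster is far less extinguished, which is what makes the young branch visible at all in the NIR.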
Their final masses were in the range of $10^4$-$10^7${M$_{\sun}$}\ except for NGC\,7424, for which a ten times lower mass range was used to match its fainter clusters. The model CDF constituted the empirical probability function to which the observed CDFs were compared with a Kolmogorov-Smirnov (KS) test. A typical model contained $10^6$ clusters to ensure that all parts of the distributions were well populated. This number was increased, if needed, so that the number of simulated clusters brighter than the {M$_\mathrm{K}$}-limit applied was at least five times larger than the observed population to ensure a smaller statistical fluctuation in the reference distribution. \begin{table*} \caption[]{Parameters for the best models. The visual extinctions $A_V^o$ and $A_V^x$ are given in magnitudes, the times $\tau_s$, $\tau_x$, and $d\tau$ are given in log(yr). } \label{tbl:res1} \begin{tabular}{lrrrrrrrrrr} \hline\hline & & \multicolumn{5}{c}{CMD fit} & \multicolumn{4}{c}{CCD fit} \\ Galaxy & \multicolumn{1}{c}{M$_u$} & \multicolumn{1}{c}{$\gamma$} & \multicolumn{1}{c}{$A_V^o$} & \multicolumn{1}{c}{$A_V^x$} & \multicolumn{1}{c}{$\tau_s$} & \multicolumn{1}{c}{\^{D}$_{cmd}$} & \multicolumn{1}{c}{$\beta$} & \multicolumn{1}{c}{$\tau_x$} & \multicolumn{1}{c}{$d\tau$} & \multicolumn{1}{c}{\^{D}$_{ccd}$} \\ \hline NGC~157 & 60 & -1.1 & 3.4 & 10.6 & 7.0 & 1.11 & 2.1 & 7.1 & 6.4 & 2.08 \\ NGC~157 & 100 & -1.2 & 3.1 & 8.6 & 6.9 & 1.09 & 1.7 & 6.7 & 6.8 & 1.66 \\ NGC~157 & 120 & -1.0 & 3.5 & 8.7 & 6.8 & 0.67 & 1.7 & 6.5 & 6.7 & 1.51 \\ \hline NGC~1232 & 60 & -1.9 & 3.0 & 10.3 & 7.0 & 1.68 & 2.2 & 7.2 & 6.1 & 2.40 \\ NGC~1232 & 100 & -1.9 & 3.3 & 11.6 & 6.9 & 1.12 & 2.5 & 6.3 & 7.1 & 1.86 \\ NGC~1232 & 120 & -1.7 & 3.4 & 10.4 & 6.9 & 1.44 & 2.2 & 6.2 & 7.0 & 1.79 \\ \hline NGC~1365 & 60 & -1.6 & 3.9 & 10.9 & 6.9 & 1.35 & 2.3 & 7.0 & 6.1 & 2.26 \\ NGC~1365 & 100 & -1.5 & 4.0 & 10.4 & 6.7 & 0.79 & 2.4 & 6.1 & 6.5 & 1.76 \\ NGC~1365 & 120 & -1.1 & 3.8 & 10.2 & 6.7 & 0.65 & 2.2 & 6.1 
& 6.6 & 1.80 \\ \hline NGC~2997 & 60 & -2.4 & 4.1 & 9.2 & 6.8 & 0.86 & 1.6 & 6.0 & 6.9 & 2.21 \\ NGC~2997 & 100 & -1.9 & 3.8 & 10.1 & 6.7 & 0.70 & 1.5 & 6.3 & 6.4 & 2.38 \\ NGC~2997 & 120 & -1.0 & 4.0 & 11.0 & 6.7 & 0.92 & 2.0 & 6.1 & 7.0 & 2.96 \\ \hline NGC~5247 & 60 & -1.5 & 3.9 & 11.3 & 7.5 & 2.13 & 1.8 & 7.4 & 6.8 & 2.39 \\ NGC~5247 & 100 & -1.5 & 4.2 & 11.7 & 7.0 & 1.16 & 2.2 & 6.9 & 7.0 & 1.63 \\ NGC~5247 & 120 & -1.8 & 3.0 & 11.2 & 7.0 & 1.09 & 1.9 & 6.0 & 7.2 & 1.34 \\ \hline NGC~7424 & 60 & -1.2 & 3.3 & 11.1 & 7.0 & 1.65 & 2.1 & 7.0 & 6.5 & 3.66 \\ NGC~7424 & 100 & -0.9 & 3.5 & 11.3 & 6.7 & 0.76 & 2.0 & 6.2 & 6.8 & 2.85 \\ NGC~7424 & 120 & -0.9 & 3.9 & 10.3 & 6.6 & 2.27 & 2.2 & 6.4 & 6.5 & 2.77 \\ \hline \end{tabular} \end{table*} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{aa21249-f3.pdf}} \caption[]{Logarithmic contours of the color distribution of stellar clusters in NGC\,2997. a) Observed cluster distribution, b) model with a KS minimum for M$_u$ = 120\,{M$_{\sun}$}, while c, d, e, and f show a variation of the parameters $\tau_x$ and $d\tau$. } \label{fig:model} \end{figure} \section{Fitting procedure and results} \label{sec:behavior} The JHK colors of the model clusters depend in a complex way on the eight model parameters and the adopted CET. A partial fitting procedure to minimize the KS test statistic was applied for each CET because different features of the distributions were more sensitive to some parameters than to others. First, the extinctions $A_V^o$ and $A_V^x$ were estimated by fitting the (J-K) values of the old and young branches in the CMD, while their population ratio gave a preliminary value of $\tau_s$. The magnitude distribution and the relative importance of the two branches in the CMD depend mainly on $\alpha$, $\gamma$, $A_V^x$, $A_V^o$, and $\tau_s$. Due to the limited age information available for the old clusters, $\alpha$ and $\gamma$ could not be separated.
Thus, $\alpha$ was fixed to the value derived for young clusters by \citet{grosbol12}, as listed in Table~\ref{tbl:galaxies}. Only clusters one magnitude brighter than M$_K^l$ were considered for the fits to avoid any bias due to incompleteness of faint clusters. The relative colors of the two groups are sensitive to the parameters $\beta$, $\tau_x$, and $d\tau$, which were estimated from the CCD. The lowest value of the KS test statistic was estimated first by following the steepest gradient using 0\fm1 and 0\fm5 bins for color indices and magnitude, respectively. Because the test function had an uneven surface and could have several local minima, a grid of parameter values was computed around the minimum found by the gradient search to ensure that the deepest minimum was located. Although the model with the KS minimum represents the `best' fit, it depends mainly on the more populated group of older clusters and may not reproduce the colors of the young clusters accurately. Models with $\beta<1.0$ or without nebular emission were also computed but in all cases had significantly higher KS values. The parameter values for the three best models are listed in Table~\ref{tbl:res1}, including the level of significance expressed as the population-corrected critical value \^{D}, where 1.22 corresponds to a 10\% level. The effects of statistical fluctuations cannot be entirely neglected even with samples of $10^6$ clusters because the high-luminosity tail of the distributions is significant. The statistical variation was estimated by computing five identical models with different seeds for the random number generator. These tests indicate that the parameters may fluctuate by 10\% due to Monte Carlo sampling effects. The cluster distribution in the CCD is illustrated in Fig.~\ref{fig:model} with logarithmic contours for both the observed cluster population of NGC\,2997 and the corresponding model for M$_u$ = 120\,{M$_{\sun}$}.
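The KS-based search described above can be illustrated with a minimal one-parameter sketch in the paper's own Python/scipy toolchain. The data, grid, and single free parameter here are hypothetical stand-ins; the actual fit minimizes a multi-parameter, binned statistic over colors and magnitudes:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def ks_distance(observed, simulated):
    """Two-sample Kolmogorov-Smirnov statistic between an observed and a
    simulated magnitude sample (1-D stand-in for the paper's binned fit)."""
    return stats.ks_2samp(observed, simulated).statistic

# toy 'observed' population and a grid search over one model parameter
observed = rng.normal(-12.0, 1.0, size=500)
grid = np.arange(-13.0, -11.0, 0.25)
best_mu = min(grid, key=lambda mu: ks_distance(observed,
                                               rng.normal(mu, 1.0, size=5000)))
```

A gradient step followed by a local grid, as in the paper, simply replaces the flat `grid` above by a coarse-to-fine sequence of such scans around the running minimum.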
Although the bi-modal distribution is clearly visible, a significant number of model clusters with (H-K) $>$ 0\fm9 does not agree with the observations. A smaller amount of nebular emission (e.g., due to a clumpy interstellar medium) would move the young clusters closer to the location observed. The effects of changing $\tau_x$ to 2 and 15\,Myr are displayed in Figs.~\ref{fig:model}c-d, while Figs.~\ref{fig:model}e-f give a similar variation of $d\tau$. High values of $\tau_x$ yield an extra peak in the color distribution at (J-H) = 1\fm4, whereas low values of $d\tau$ produce too many clusters with (J-H) $<$ 0\fm6. The observed color distribution is best modeled by an early start of the decrease in cluster extinction. The decline is likely to last for about 10\,Myr. Most models display a bridge between the young and old groups that curves toward low (J-H) values and reflects the shape of the CET. The selection of a better CET requires detailed spectroscopic information and is beyond the scope of the current paper. \section{Discussion and conclusion} \label{sec:conclusions} The models suggest that nebular emission is significant and must be included to account for the NIR colors of young clusters. A reddening law with $\beta$ around 1.8 or slightly above provides better fits than lower exponents, but a smaller amount of nebular emission assumed for the CETs (e.g., due to a clumpy medium) would favor lower values of $\beta$. A high mass limit M$_u$ in the range of 100-120\,{M$_{\sun}$}\ is preferred in most cases. It is mainly constrained by the color of the young clusters. The initial value $A_V^x$ of the average extinction for the cluster complexes lies in the range of 8-11{$^\mathrm{m}$}, while the final extinction $A_V^o$ is about 3-4{$^\mathrm{m}$}. The latter value is close to zero if the Padova CETs are used (see Fig.~\ref{fig:cm2997}).
These values are consistent with the sources being complexes of young, highly obscured clusters as seen by \citet{chene13}. The linear resolution could also play a role, although no clear trend is apparent. All fits suggest a higher mortality of the young, massive complexes, with $\gamma$ = -1.4$\pm$0.5, than was found by \citet{fall12}, who obtained $\gamma$ = -0.8 for a number of different types of galaxies. This indicates that the mortality of very massive complexes is higher than that of individual clusters; that is, young complexes may disintegrate into smaller clusters, which are below the limiting magnitude of the current study. The duration of the continuous star formation phase $\tau_s$ is at least 5\,Myr for all galaxies. In general, the extinction starts to decrease before the star formation ceases, as indicated by $\tau_x < \tau_s$. Simulations with the extreme values of 2 and 15\,Myr for $\tau_x$ and $d\tau$ favor shorter $\tau_x$ and longer $d\tau$. This suggests a star formation scenario in which high-mass cluster complexes (i.e., $M > 10^4$\,{M$_{\sun}$}) form stars during an extended period of several Myr. The reduction of the internal extinction starts before the star formation terminates. The time scales suggest that the expulsion of dust is initiated by the first supernovae of massive stars. Owing to the large mass of the cluster complexes, the first supernovae may not be able to disrupt the giant molecular cloud (GMC), and star formation continues for some time until enough supernovae have exploded to destroy the GMC. A fragmentation of the initial GMC into individual star-forming regions with slightly different evolutionary time scales would also yield a simultaneous reduction of absorption and star formation in the complexes. \begin{acknowledgements} We thank an anonymous referee for helpful comments. HD thanks the Brazilian Council of Research CNPq, Brazil, for support. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Neutron star (NS) merger events are among the most promising candidates for the first direct measurement of a gravitational-wave signal with the upcoming Advanced LIGO and VIRGO interferometric instruments~\citep{2006CQGra..23S.635A,2010CQGra..27h4006H}, and they are considered a likely origin of short gamma-ray bursts and their afterglows as a consequence of ultrarelativistic, collimated outflows~\citep[see e.g.][]{2007PhR...442..166N,2011ApJ...734...96K}. Moreover, they are possible sources of different kinds of electromagnetic signals in the precursor phase of the merger and in its aftermath as a consequence of magnetohydrodynamical effects, magnetospheric interactions, relativistic matter outflows, or NS crust phenomena~\citep{1996A&A...312..937L,1996ApJ...471L..95V,1998ApJ...507L..59L,2001MNRAS.322..695H,2010ApJ...723.1711T,2011ApJ...734L..36S,2011Natur.478...82N,2012PhRvL.108a1102T,2012arXiv1209.5747K,2013ApJ...763L..22Z,2013arXiv1301.0439G,2012ApJ...755...80P,2012ApJ...757L...3L,2013arXiv1301.7074P,2012ApJ...746...48M}. Thermal emission produced by hot ejecta gas, for example, may cause potentially observable optical transients~\citep{1998ApJ...507L..59L,2005astro.ph.10256K,2010MNRAS.406.2650M,2012ApJ...746...48M}, and the interaction of the ejecta cloud with the circumstellar medium is expected to create radio flares that might be detectable for periods of years~\citep{2011Natur.478...82N,2012arXiv1204.6242P,2012arXiv1204.6240R}. Observations of such signals could help to pinpoint the exact celestial locations of NS mergers (thus, e.g., supporting the analysis of data taken by gravitational-wave detectors), and repeated measurements of signals that can be unambiguously linked to NS mergers would help to constrain the still highly uncertain rate of such events in the local universe.
During the merging of two NSs, a small fraction of the system mass, typically 0.1--1 per cent, can become gravitationally unbound and can be ejected on the dynamical timescale of milliseconds~\citep{1997A&A...319..122R,1999A&A...341..499R,2000A&A...360..171R,2001A&A...380..544R,2007A&A...467..395O,2012arXiv1204.6240R,2012arXiv1204.6242P,2012arXiv1210.6549R,2012arXiv1212.0905H}. Because such material is likely to possess a high neutron excess, it has been proposed as a possible site for the creation of the heaviest, neutron-rich elements, which are formed by the rapid neutron capture process (r-process)~\citep{1977ApJ...213..225L,1989Natur.340..126E} (similarly, NS-black hole mergers have also been suggested as sources of r-process matter,~\citealt{1974ApJ...192L.145L,1976ApJ...210..549L}). The radioactive decay of these freshly synthesized r-process nuclei should heat the ejecta and thus lead to an optical transient~\citep{1998ApJ...507L..59L,2005astro.ph.10256K,2010MNRAS.406.2650M,2011ApJ...736L..21R,2011ApJ...738L..32G}. The properties of such events depend on the fraction of the material that can be converted to radioactive species. Moreover, the peak luminosity, the timescale to reach the emission peak, and the effective temperature at the radiation maximum, as well as the radio brightness that accompanies the deceleration of the expelled gas during its coasting in the stellar environment, depend sensitively on the ejecta mass and expansion velocity. Detailed hydrodynamical merger models are needed to calculate these quantities and to determine the nucleosynthesis conditions in the unbound material. 
Concerning their role as sources of heavy elements, binary NS collisions have recently moved into the focus of interest because the astrophysical sources of the r-process elements have not been identified yet and core-collapse supernova simulations continue to be unable to yield the extreme conditions for forming the heaviest neutron-rich nuclei~\citep{2008ApJ...676L.127H,2008A&A...485..199J,2010ApJ...722..954R,2010PhRvL.104y1101H,2010A&A...517A..80F,2011ApJ...726L..15W,2011PhRvC..83d5809A}. (For reviews on r-process nucleosynthesis and an overview of potential sites, see e.g.~\citealt{2007PhR...450...97A,2011PrPNP..66..346T,2011PhRvL.106t1104B,2012ApJ...750L..22W}.) In contrast to the situation for supernovae, investigations with growing sophistication have confirmed NS merger ejecta as viable sites for strong r-processing~\citep{1999ApJ...525L.121F,2005NuPhA.758..587G,2007PhR...450...97A,2010MNRAS.406.2650M,2011ApJ...736L..21R,2011ApJ...738L..32G,2012arXiv1206.2379K}. However, despite this promising situation, a variety of aspects need to be clarified before the question can be answered whether NS mergers are a major or even the dominant source of heavy r-process elements. On the one hand, the merger rate and its evolution during the Galactic history are still subject to considerable uncertainties (see, e.g.,~\citet{2010CQGra..27q3001A} for a compilation of recent estimates), and it is unclear whether NS mergers can explain the early enrichment of the Galaxy by r-process elements as observed in metal-deficient stars~\citep{2004A&A...416..997A}. On the other hand, it remains to be determined how much mass is ejected in merger events depending on the binary parameters and, in particular, on the incompletely known properties of the equation of state (EoS) of NS matter. 
It also needs to be understood which fraction of the ejecta is robustly converted to r-process material and whether the final abundances are always compatible with the solar element distribution, which agrees amazingly well with the r-process abundance pattern in metal-poor stars for atomic numbers $Z \sim 55$--90~\citep[see e.g.][]{2008ARA&A..46..241S}. Newtonian as well as relativistic studies showed that the mass ratio has a significant effect on the amount of matter that can become unbound~\citep{1999ApJ...527L..39J,1999A&A...341..499R,2000A&A...360..171R,2001A&A...380..544R,2007A&A...467..395O,2011ApJ...736L..21R,2011ApJ...738L..32G,2012arXiv1204.6242P,2012arXiv1204.6240R,2012arXiv1206.2379K,2012arXiv1210.6549R,2012arXiv1212.0905H}. Such investigations, however, were performed only with a few exemplary models for high-density matter in NSs~\citep{2000A&A...360..171R,2007A&A...467..395O,2011ApJ...738L..32G,2012arXiv1212.0905H} or even only with a single NS EoS~\citep{2011ApJ...736L..21R,2012arXiv1204.6242P,2012arXiv1204.6240R,2012arXiv1206.2379K,2012arXiv1210.6549R}, although the importance of the nuclear EoS for a quantitative assessment of the dynamical mass ejection can be concluded from published calculations~\citep[e.g.][]{2011ApJ...738L..32G}. These calculations, however, also suggest that the nuclear abundance pattern produced by r-processing in the ejecta may be largely insensitive to variations of the conditions in the ejecta. It is important to note that quantitatively reliable information on the ejecta masses and their dependence on the binary and EoS properties requires general relativistic (GR) simulations. 
Newtonian results in the literature~\citep{1999A&A...341..499R,1999ApJ...527L..39J,2001A&A...380..544R,2011ApJ...736L..21R,2012arXiv1206.2379K,2012arXiv1204.6240R,2012arXiv1204.6242P,2012arXiv1210.6549R} exhibit significant quantitative and qualitative differences compared to relativistic models~\citep{2007A&A...467..395O,2011ApJ...738L..32G,2012arXiv1212.0905H}. Newtonian calculations tend to overestimate the ejecta masses in general~\citep{1999A&A...341..499R,1999ApJ...527L..39J,2001A&A...380..544R,2011ApJ...736L..21R,2012arXiv1206.2379K,2012arXiv1204.6240R,2012arXiv1204.6242P,2012arXiv1210.6549R}. This can be understood for several reasons. First, the structure of NSs in GR is considerably more compact than that of Newtonian stars. For instance, a NS with a gravitational mass of 1.35\,$M_{\odot}$ described by the LS220 EoS~\citep{1991NuPhA.535..331L} possesses a circumferential radius of 12.6\,km, whereas the corresponding Newtonian star has a radius of 14.5\,km. Second, GR gravity is stronger and the merging of two NSs is therefore more violent. The difference can be expressed in terms of the gravitational binding energy of a nucleon on the surface of the considered 1.35\,$M_{\odot}$ NSs, which is $\sim$200\,MeV in the GR case compared to only $\sim$130\,MeV for the Newtonian model. Third, GR forces merger remnants beyond a mass limit to collapse to black holes on a dynamical timescale. Such an effect cannot be tracked by Newtonian models. These differences are of direct relevance for the collision dynamics and the possibility to unbind matter from the inner and outer crust regions of the merging NSs. It is the purpose of this paper to explore the influence of the high-density EoS on the ejecta properties in a systematic way, i.e., we will determine ejecta masses and the nucleosynthesis outcome for a large set of NS matter models, applying them in relativistic NS merger simulations. 
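The Newtonian binding-energy figure quoted above follows from the simple estimate $E = GMm_n/R$; the GR value requires the full stellar-structure solution, but the surface-redshift formula already illustrates the stronger relativistic binding. A rough numerical check (SI constants; the redshift estimate is only a lower bound on the quoted $\sim$200\,MeV from the full solution):

```python
# Rough estimate of the binding energy of a nucleon at the NS surface,
# using the LS220 values quoted in the text (1.35 Msun; R = 14.5 km for
# the Newtonian star, R = 12.6 km in GR).
G, c = 6.674e-11, 2.998e8          # SI units
MeV = 1.602e-13                    # J per MeV
m_n = 1.675e-27                    # nucleon mass [kg]
M = 1.35 * 1.989e30                # gravitational mass [kg]

# Newtonian: E = G M m_n / R  -> ~130 MeV, as quoted in the text
E_newt = G * M * m_n / 14.5e3 / MeV

# GR surface-redshift estimate: m_n c^2 (1 - sqrt(1 - 2GM/(R c^2)));
# this deliberately underestimates the ~200 MeV of the full TOV solution
R = 12.6e3
E_gr = m_n * c**2 * (1.0 - (1.0 - 2.0 * G * M / (R * c**2))**0.5) / MeV

print(f"Newtonian: {E_newt:.0f} MeV, GR redshift estimate: {E_gr:.0f} MeV")
```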
Most of these EoSs were already employed in our previous works~\citep{2012PhRvL.108a1101B,2012PhRvD..86f3001B}. They were chosen to cover as completely as possible the range of NS properties (expressed by the corresponding mass-radius relations) that are compatible with present observational constraints (e.g., the 1.97\,$M_\odot$ NS discovery of~\citet{2010Natur.467.1081D}) and theoretical understanding~\citep{2010arXiv1012.3208L,2007PhR...442..109L,2010ApJ...722...33S,2010PhRvL.105p1102H}. In our study we will focus on symmetric 1.35-1.35\,$M_{\odot}$ systems and will compare them with asymmetric 1.2-1.5\,$M_{\odot}$ mergers. Because population synthesis models~\citep{2008ApJ...680L.129B} and pulsar observations~\citep{1999ApJ...512..288T,2011A&A...527A..83Z} suggest that the double NS population is strongly dominated by systems of nearly equal-mass stars of about 1.35\,$M_{\odot}$ each, the average NS merger event can be well represented by a 1.35-1.35\,$M_{\odot}$ configuration, and a clarification of the EoS dependence of ejecta masses, r-process yields, and properties of electromagnetic counterparts of NS mergers seems to be more important than a wide variation of binary parameters. Nevertheless, we will also present results of a more extended survey of binary mass ratios and total masses for some representative EoSs. In our work we will exclusively concentrate on NS-NS mergers, but the discussed phenomena should also play a role for NS-black hole coalescence~\citep{1999ApJ...527L..39J,2000MNRAS.318..606L,2004MNRAS.351.1121R,2005ApJ...634.1202R,2006PhRvD..73b4012F,2012arXiv1212.4810F,2012arXiv1210.6549R,2012arXiv1204.6242P,2012arXiv1204.6240R} and eccentric NS mergers~\citep{2012PhRvD..85l4009E,2012ApJ...760L...4E,2012arXiv1204.6240R,2012arXiv1210.6549R}. 
However, while the existence of double NS systems is established by observations, progenitors of NS-black hole and eccentric NS mergers have not been observed yet and the rates of such types of events are even more uncertain than those of coalescing binary NSs. In investigating NS-NS mergers we will only consider the phase of dynamical mass ejection, lasting from about the time when the two NSs collide until a few milliseconds later. During this phase hydrodynamical and tidal forces (shock compression, pressure forces, gravitational interaction) are responsible for the mass shedding of the merging objects. Once the remnant has formed, however, differential rotation is expected to strongly amplify the magnetic fields~\citep[e.g.][]{2006Sci...312..719P,2008PhRvD..77b4006A,2008PhRvD..78b4012L,2011PhRvD..83d4014G} and viscous energy dissipation is likely to provide additional heating, enhancing the neutrino emission that accompanies the secular evolution of the post-merger configuration~\citep{1999A&A...344..573R,2004MNRAS.352..753S,2009ApJ...690.1681D,2012PThPh.127..535S}. As a consequence the merger remnant will experience mass loss due to neutrino energy deposition in the near-surface regions~\citep{1999A&A...344..573R,2004MNRAS.352..753S,2009ApJ...690.1681D,2012ApJ...746..180W,2012PThPh.127..535S} (similar to the neutrino-driven wind of proto-neutron stars emerging from stellar core collapse) and due to magnetohydrodynamical outflows. Both mechanisms will add ejecta to the mass stripped during the dynamical interaction of the system components, but the details of the secular evolution and the associated mass loss will be very sensitive to the EoS-dependent stability properties of the merger remnant, i.e., to the question whether the remnant is a hypermassive NS (see~\citet{2000ApJ...528L..29B} for a definition) or whether and when it collapses to a black hole-torus system. These questions lie beyond the scope of the present work. Our paper is organized as follows. 
In Sect.~\ref{sec:code} a brief summary of the numerical methods and microphysics ingredients of our NS merger simulations is given. In Sect.~\ref{sec:masses} we present our results for the relation between dynamical mass loss and NS (EoS) properties, provide a detailed description of the mass-loss dynamics in our relativistic models (drawing comparisons to Newtonian results), discuss the influence of an approximate treatment of thermal effects in the EoS, and evaluate the mass ejection for three selected, representative EoSs in merger simulations for a wider space of binary masses and mass ratios in order to determine the population-integrated mass loss. In Sect.~\ref{sec:nucleo} we describe results of nuclear network calculations performed for a subset of our merger models and draw conclusions on the Galactic merger rate and the production of long-lived radioactive species ($^{232}$Th, $^{235}$U, $^{238}$U) used for stellar nucleocosmochronometry. Furthermore, we present values for the heating efficiency of the merger ejecta by radioactive decays of the nucleosynthesis products and apply them in Sect.~\ref{sec:opttrans} to estimate the properties (peak luminosity, peak timescale, effective temperature at the maximum luminosity) of the optical transients that can be expected from the expanding merger debris. We also briefly discuss the implications of our simulations for radio flares. Finally, a summary and conclusions follow in Sect.~\ref{sec:sum}. \section{Numerical model and equations of state} \label{sec:code} The simulations of our study are performed with a relativistic smoothed particle hydrodynamics (SPH) code, i.e. the hydrodynamical equations are evolved in a Lagrangian manner~\citep{2002PhRvD..65j3005O,2007A&A...467..395O,2010PhRvD..82h4043B}. 
The Einstein field equations are solved imposing conformal flatness of the spatial metric~\citep{1980grg..conf...23I,1996PhRvD..54.1317W}, and a gravitational-wave backreaction scheme is used to account for energy and angular momentum losses by the emission of gravitational radiation~\citep{2007A&A...467..395O}. The code evolves the conserved rest-mass density $\rho^{*}$, the conserved specific momentum $\tilde{u}_i$, and the conserved energy density $\tau$, whose definitions involve the metric potentials and the ``primitive'' hydrodynamical quantities, i.e. the rest-mass density $\rho$, the coordinate velocity $v_i$, and the specific internal energy $\epsilon$. The system of relativistic hydrodynamical equations is closed by an EoS, which relates the pressure $P=P(\rho,T,Y_{\mathrm{e}})$ and the specific internal energy $\epsilon=\epsilon(\rho,T,Y_{\mathrm{e}})$ to the rest-mass density $\rho$, the temperature $T$ and the electron fraction $Y_{\mathrm{e}}$. The temperature is obtained by inverting the specific internal energy $\epsilon=\epsilon(\rho,T,Y_{\mathrm{e}})$ for given $\rho$ and $Y_{\mathrm{e}}$. Changes of the electron fraction are assumed to be slow compared to the dynamics~\citep[see e.g.][]{1997A&A...319..122R}, and the initial electron fraction, which is defined by the neutrinoless beta-equilibrium of cold NSs, is advected according to $\frac{\mathrm{d} Y_{\mathrm{e}}}{\mathrm{d}t}=0$ ($\frac{d}{dt}$ denotes the Lagrangian, i.e. comoving, time derivative). The EoS of NS matter is only incompletely known and numerical studies rely on theoretical prescriptions of high-density matter. This work surveys a representative sample of 40 microphysical EoSs, which have been derived within different theoretical frameworks and make different assumptions about the composition of high-density matter and the description of nuclear interactions. 
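The temperature inversion mentioned above amounts to one-dimensional root finding at fixed $(\rho, Y_{\mathrm{e}})$, since $\epsilon$ increases monotonically with $T$. A minimal sketch using bisection (the linear toy law for $\epsilon(T)$ is purely illustrative, not one of the tabulated microphysical EoSs):

```python
def invert_eps(eps_target, eps_of_T, T_lo=0.0, T_hi=100.0, tol=1e-10):
    """Solve eps_of_T(T) = eps_target for T by bisection.
    Assumes eps_of_T is monotonically increasing at fixed (rho, Y_e),
    as holds for physical EoSs (positive heat capacity)."""
    for _ in range(200):
        T_mid = 0.5 * (T_lo + T_hi)
        if eps_of_T(T_mid) < eps_target:
            T_lo = T_mid
        else:
            T_hi = T_mid
        if T_hi - T_lo < tol:
            break
    return 0.5 * (T_lo + T_hi)

# Toy specific internal energy at fixed (rho, Y_e): a cold offset plus a
# linear thermal part (illustrative only; real tables are interpolated).
eps_cold = 0.02
eps = lambda T: eps_cold + 1.5e-3 * T   # T in MeV, eps in arbitrary units

T = invert_eps(eps(12.0), eps)          # recovers T = 12 MeV
```

In practice the same bracketing search is performed on the tabulated $\epsilon(\rho,T,Y_{\mathrm{e}})$ grid rather than on a closed-form expression.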
Most of the employed EoSs are listed in~\citet{2012PhRvD..86f3001B}, where details can be found, while some new models are introduced below. Because of the one-to-one correspondence between the EoS and the mass-radius relation of nonrotating NSs, it is convenient to characterize EoSs by the resulting stellar properties. Stellar quantities, as integral properties of an EoS, are particularly useful for classifying the dynamics of NS mergers and the accompanying gravitational-wave signals~\citep{2012PhRvL.108a1101B,2012PhRvD..86f3001B}. For this reason we adopt the same approach for this investigation. Considering, for instance, NSs with a gravitational mass of 1.35~$M_{\odot}$, the stellar radii $R_{1.35}$ vary from 10.13~km to 15.74~km for the different EoSs of our sample. The maximum mass $M_{\mathrm{max}}$ of nonrotating NSs obtained for these EoSs ranges from 1.79~$M_{\odot}$ to 3.00~$M_{\odot}$. In terms of their stellar properties the employed EoSs of our study show a large variation (see the mass-radius relations in Fig.~4 of~\citet{2012PhRvD..86f3001B}). Note that we do not apply any selection procedure for choosing the EoSs, except that we require a maximum mass above $\approx 1.8~M_{\odot}$. This limit is chosen because of the firm discovery of a pulsar with a gravitational mass of $(1.97\pm 0.04)~M_{\odot}$~\citep{2010Natur.467.1081D}. EoSs which yield a maximum mass below this limit are practically excluded by this observation. Nevertheless, we accept them (at least down to $M_{\mathrm{max}}\approx 1.8~M_{\odot}$) for our investigation because we expect that at densities relevant in a typical NS merger these models still provide a viable description of high-density matter~\citep[see][]{2012PhRvD..86f3001B}. 
Note that compared to our previous study in~\citet{2012PhRvD..86f3001B} we extend our EoS survey by including also the models TM1, TMA, NL3, DD2, SFHO and SFHX of~\citet{2010NuPhA.837..210H},~\citet{2012ApJ...748...70H} and~\citet{2012arXiv1207.2184S}, relying on the interactions described in~\citet{1994NuPhA.579..557S},~\citet{1995NuPhA.588..357T},~\citet{1997PhRvC..55..540L},~\citet{2010PhRvC..81a5803T} and~\citet{2012arXiv1207.2184S}. Moreover, we include the BSk20 and BSk21 EoSs of~\citet{2010PhRvC..82c5804G}. The maximum masses $M_{\mathrm{max}}$ resulting for these EoSs are 2.21~$M_{\odot}$, 2.02~$M_{\odot}$, 2.79~$M_{\odot}$, 2.42~$M_{\odot}$, 2.06~$M_{\odot}$, 2.13~$M_{\odot}$, 2.16~$M_{\odot}$ and 2.28~$M_{\odot}$ (order as listed above), while the radii of cold 1.35~$M_{\odot}$ NSs are 14.49~km, 13.86~km, 14.75~km, 13.21~km, 11.92~km, 11.98~km, 11.74~km and 12.54~km, respectively. From our sample of EoSs in~\citet{2012PhRvD..86f3001B} we do not consider the SKA EoS (because of its restriction to densities above $1.7\times 10^9~\mathrm{g/cm^3}$) and EoSs which are not compatible with the pulsar observation of~\citet{2010Natur.467.1081D} and directly form a black hole after merging. We also exclude absolutely stable strange quark matter. We refer to~\citet{2009PhRvL.103a1101B} for the particular implications of ejecta from strange quark star mergers. Only 12 out of the considered 40 EoSs describe thermal effects consistently and provide the dependence of thermodynamical quantities on the temperature and the electron fraction. Instead, the majority of models considers matter at zero temperature and in equilibrium with respect to weak interactions (i.e., beta-equilibrium under neutrinoless conditions). Because temperature effects become important during the merging of the binary components and during the subsequent evolution, we employ an approximate treatment of thermal effects for those EoSs which are given as barotropic relations. 
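This approximate treatment follows the hybrid scheme examined in~\citet{2010PhRvD..82h4043B}: the total pressure is the barotropic cold part plus an ideal-gas-like thermal term, $P = P_{\mathrm{cold}}(\rho) + (\Gamma_{\mathrm{th}}-1)\,\rho\,\epsilon_{\mathrm{th}}$ with $\epsilon_{\mathrm{th}} = \epsilon - \epsilon_{\mathrm{cold}}(\rho)$. A minimal sketch (the polytropic cold part and its coefficients are illustrative placeholders, not one of the surveyed EoSs):

```python
GAMMA_TH = 1.75  # within the recommended range 1.5-2 for high-density matter

def pressure_hybrid(rho, eps, p_cold, eps_cold, gamma_th=GAMMA_TH):
    """Cold barotropic pressure plus an ideal-gas thermal component:
    P = P_cold(rho) + (gamma_th - 1) * rho * eps_th,
    with eps_th = eps - eps_cold(rho), clipped at zero."""
    eps_th = max(eps - eps_cold(rho), 0.0)
    return p_cold(rho) + (gamma_th - 1.0) * rho * eps_th

# Illustrative polytropic cold EoS (placeholder units)
K, gamma = 100.0, 2.0
p_cold = lambda rho: K * rho**gamma
eps_cold = lambda rho: K * rho**(gamma - 1.0) / (gamma - 1.0)

rho = 1e-3
P_cold_only = pressure_hybrid(rho, eps_cold(rho), p_cold, eps_cold)      # no heating
P_heated = pressure_hybrid(rho, eps_cold(rho) + 0.05, p_cold, eps_cold)  # shocked matter
```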
This procedure supplements the pressure by an additional ideal-gas component to mimic thermal pressure support, and it requires choosing a corresponding ideal-gas index $\Gamma_{\mathrm{th}}$. Appropriate values for $\Gamma_{\mathrm{th}}$ are in the range of 1.5 to 2 for high-density matter~\citep{2010PhRvD..82h4043B}. The uncertainties connected to the use of this approximate temperature description and the choice of the ideal-gas index were examined in~\citet{2010PhRvD..82h4043B}, where details about the exact implementation can also be found. From population synthesis studies~\citep{2008ApJ...680L.129B} and in agreement with pulsar observations~\citep{1999ApJ...512..288T,2011A&A...527A..83Z} it is expected that binaries with two NSs with gravitational masses of about $M_1\approx M_2\approx 1.35~M_{\odot}$ are the most abundant systems. For this reason we focus in our EoS survey on such equal-mass binaries, although we also explore the influence of a system asymmetry by considering 1.2-1.5~$M_{\odot}$ binaries. For a selected subset of EoS models the full range of possible binary parameters is investigated, varying the single component masses from 1.2~$M_{\odot}$ to approximately the maximum mass of NSs. Because of energy and angular momentum losses by gravitational radiation the orbits of NS binaries shrink and the binary components merge after an inspiral period, which lasts roughly 100 to 1000 Myr for the known systems~\citep{2008LRR....11....8L}. The typical outcome of the coalescence of a 1.35-1.35~$M_{\odot}$ binary system is the formation of a differentially rotating object, potentially a hypermassive NS (i.e. a NS that is more massive than the maximum-mass rigid-rotation configuration and that is stabilized temporarily by differential rotation~\citep{2000ApJ...528L..29B}). The merger remnant is surrounded by an extended halo structure of low-density material. 
Only four EoSs of our sample lead to the prompt formation of a black hole within about one millisecond after the collision because the remnant cannot be supported against gravitational collapse. For a description of the general dynamics and a more thorough discussion of the collapse behavior we refer to~\citet{2007A&A...467..395O},~\citet{2010PhRvD..82h4043B} and~\citet{2012PhRvD..86f3001B}. In this paper only initially nonrotating NSs are investigated because viscosity is too low to yield tidally locked systems. The stars in NS binaries are therefore expected to rotate slowly in comparison to the orbital angular velocity, justifying the use of an irrotational velocity profile~\citep{1992ApJ...400..175B,1992ApJ...398..234K}. In this study we analyze the material which becomes gravitationally unbound during or right after merging. In order to estimate whether a given fluid element, i.e. an SPH particle, can escape to infinity, we consider \begin{equation}\label{eq:ejrel} \epsilon_{\mathrm{stationary}}=v^i\tilde{u}_i+\frac{\epsilon}{u^0}+\frac{1}{u^0}-1>0 \end{equation} with the coordinate velocity $v^i$, the conserved momentum $\tilde{u}_i$, and the time component of the four-velocity $u^0$ (in geometrical units). This expression can be derived from the hydrodynamical equations by neglecting pressure forces and assuming a stationary metric~\citep{2002PhRvD..65j3005O}. The quantity $\epsilon_{\mathrm{stationary}}$ is conserved $\left( \frac{d \epsilon_{\mathrm{stationary}}}{dt}=0 \right)$ and at infinity it reduces to the Newtonian expression for the total energy of a fluid element. Hence, a particle with $\epsilon_{\mathrm{stationary}}>0$ will be unbound. Equation~\eqref{eq:ejrel} is evaluated in a time-dependent way and SPH particles that fulfill this criterion 10~ms after merging are considered as ultimately gravitationally unbound. 
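Per SPH particle, Eq.~\eqref{eq:ejrel} is a cheap, vectorizable check. A sketch in numpy (the array names and layout are hypothetical, not our code's actual data structures):

```python
import numpy as np

def unbound_mask(v, u_tilde, eps, u0):
    """Flag particles with eps_stationary > 0, where
    eps_stationary = v^i u~_i + eps/u^0 + 1/u^0 - 1  (geometrical units).
    v, u_tilde: (N, 3) arrays; eps, u0: length-N arrays."""
    eps_stat = np.einsum("ij,ij->i", v, u_tilde) + (eps + 1.0) / u0 - 1.0
    return eps_stat > 0.0

# Sanity check: a cold particle at rest in flat space (u^0 = 1) is
# marginally bound, while any internal energy makes it unbound here,
# since pressure forces are neglected in the criterion.
at_rest = unbound_mask(np.zeros((1, 3)), np.zeros((1, 3)),
                       np.zeros(1), np.ones(1))
heated = unbound_mask(np.zeros((1, 3)), np.zeros((1, 3)),
                      np.array([0.1]), np.ones(1))
```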
Note that our simulations neglect a possible (smaller) contribution to the ejecta by neutrino-driven winds or magnetically driven outflows from the secular evolution of the merger remnant~\citep{2009ApJ...690.1681D,2012ApJ...746..180W}. \section{Ejecta masses}\label{sec:masses} In the following we employ Eq.~\eqref{eq:ejrel} to determine the unbound material in different merger simulations. After a first steep rise of the ejecta mass shortly after the merging of the two NSs, the mass fulfilling the ejecta criterion remains approximately constant (Fig.~\ref{fig:lapse}). A few models, however, show a continuing, slow increase of the ejecta mass also at later times. The ejecta masses discussed below are computed 10~ms after merging. \begin{figure} \includegraphics[width=8.6cm]{f1a.eps} \includegraphics[width=8.6cm]{f1b.eps} \includegraphics[width=8.6cm]{f1c.eps} \caption{\label{fig:lapse}Evolution of the minimum lapse function $\alpha$ (dashed line) and the amount of unbound matter (solid line) for the symmetric 1.35-1.35~$M_{\odot}$ merger with the soft SFHO EoS (top panel), the intermediate DD2 EoS (middle panel), and the stiff NL3 EoS (bottom panel).} \end{figure} In order to determine the influence of the high-density EoS on the ejecta we employ approximately the same numerical resolution of about 350,000 SPH particles for all simulations. By using nonuniform SPH particle masses to model the stellar profile (more massive particles in the high-density core and lighter particles in the outer low-density layers) it is possible to achieve a better resolution of low-density regions. This results in an effective mass resolution of about $2\times 10^{-6}~M_{\odot}$, which is comparable to an SPH simulation of about one million equal-mass particles. The influence of the numerical resolution is investigated by performing additional simulations with higher SPH particle numbers. 
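The resolution study reported below can be summarized by the scatter of the ejecta masses relative to their mean; computing it for the two sets of runs quoted in the text:

```python
# Ejecta masses (in 1e-3 Msun) from the resolution study quoted in the text
tm1 = [1.67, 1.80, 1.71, 2.43, 2.07]   # TM1 EoS, 10 ms after merging
apr = [5.93, 6.08, 6.54, 6.69, 6.14]   # APR EoS, 6 ms after merging

def rel_scatter(x):
    """Sample standard deviation divided by the mean."""
    m = sum(x) / len(x)
    var = sum((xi - m) ** 2 for xi in x) / (len(x) - 1)
    return var ** 0.5 / m

scatter_tm1 = rel_scatter(tm1)   # roughly 0.16
scatter_apr = rel_scatter(apr)   # roughly 0.05
```

Both values are consistent with the "some 10 per cent" resolution effect stated below, well below the EoS-induced variations.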
For the TM1 EoS the ejecta masses are found to be (in solar masses) $1.67\times 10^{-3}, 1.80\times 10^{-3}, 1.71\times 10^{-3}, 2.43\times 10^{-3}$, and $2.07\times 10^{-3}$ for calculations with about $339\times 10^3, 550\times 10^3, 782\times 10^3, 1007\times 10^3$, and $1272\times 10^3$ SPH particles. Determining the unbound matter 6~ms after merging for the APR EoS, we find ejecta masses (in solar masses) of $5.93\times 10^{-3}, 6.08\times 10^{-3}, 6.54\times 10^{-3}, 6.69\times 10^{-3}$, and $6.14\times 10^{-3}$ in simulations with $339\times 10^3, 592\times 10^3, 782\times 10^3, 1007\times 10^3$, and $1272\times 10^3$ SPH particles. Thus, the numerical resolution has an effect on the level of some 10 per cent. This, however, is smaller than the impact of the EoS (see below), which is the focus of this paper. The nonmonotonic variations of the ejecta mass with increasing resolution indicate that statistical fluctuations have some influence on the ejected particle population as well. \subsection{Origin of the ejecta and comparison with other calculations} As can be seen in Fig.~\ref{fig:snap}, most of the ejecta originate from the contact interface between the colliding binary components, which get deformed into drop-like shapes prior to the merging. For the 1.35-1.35~$M_{\odot}$ binary the ejecta in the shear interface between the stars are separated into two components, each being fed (nearly) symmetrically by material from both colliding stars (top right panel and middle left panel). The matter in the cusps of the stars essentially keeps its direction of motion towards the companion, whereas the backward part of the contact interface mixes with some of the companion matter (top right panel). Both lumps of ejecta are squeezed out from the contact interface and expand on the retral side of the respective companion star, partially slipping over it (middle panels). 
The bulk matter of the binary components forms a rotating double-core structure where the two dense cores oscillate against each other (not visible because of the logarithmic density scale; see e.g. the descriptions in~\citet{2010PhRvD..81b4012B} and~\citet{2011MNRAS.418..427S}). A first portion of the matter that was squeezed out from the contact interface becomes unbound in a first expansion phase of the rotating double-core structure, which pushes the ejecta outward. This can be seen in Fig.~\ref{fig:snap} (middle right panel and bottom left panel) and also in the evolution of the minimum lapse function $\alpha$ (Fig.~\ref{fig:lapse}), which is a measure of the compactness of the central object. As the cores separate from each other and the lapse function grows out of its minimum, the ejecta mass increases. A second expansion of the double cores unbinds a smaller amount of matter. Finally, about two milliseconds after the first contact, the triaxial deformation grows into two spiral-arm-like extensions reaching out from the central remnant. These expand into the surrounding, low-density halo fed from the contact interface, push it away and unbind additional matter (bottom right panel of Fig.~\ref{fig:snap} and second mass-loss episode visible in Fig.~\ref{fig:lapse}). For different EoSs the different dynamical mechanisms contribute to the ejecta production with different relative strengths. For soft EoSs (top panel of Fig.~\ref{fig:lapse}) the first steep rise of the ejecta mass due to the expanding double core is much more pronounced, whereas for stiff EoSs (bottom panel of Fig.~\ref{fig:lapse}) the first increase of the ejecta is very moderate and the late spiral arms unbind most of the ejecta. This can be seen in Fig.~\ref{fig:lapse} for the SFHO EoS representing a soft EoS, for the NL3 EoS as a stiff example, and the intermediate case of the DD2 EoS. 
\begin{figure*} \includegraphics[width=8.9cm]{f2a.eps} \includegraphics[width=8.9cm]{f2b.eps} \includegraphics[width=8.9cm]{f2c.eps} \includegraphics[width=8.9cm]{f2d.eps} \includegraphics[width=8.9cm]{f2e.eps} \includegraphics[width=8.9cm]{f2f.eps} \caption{\label{fig:snap}Merger and mass ejection dynamics of the 1.35-1.35~$M_{\odot}$ binary with the DD2 EoS, visualized by the color-coded conserved rest-mass density (logarithmically plotted in $\mathrm{g/cm^3}$) in the equatorial plane. The dots mark SPH particles which represent ultimately gravitationally unbound matter. Their positions are projections of the three-dimensional locations anywhere in the merging stars onto the orbital plane. Black and white indicate the origin from the one or the other NS. For every tenth particle the coordinate velocity is indicated by an arrow with a length proportional to the absolute value of the velocity (the speed of light corresponds to a line length of 50~km). The time is indicated below the color bar of each panel. Note that the side length of the bottom panels is enlarged.} \end{figure*} A significantly smaller fraction of the ejecta (typically below 25 per cent) stems from the outer faces of the merging stars opposite to the contact layer (SPH particles at the outer left and outer right ends of the stellar body in the top right panel of Fig.~\ref{fig:snap}). During merging this matter at the rear of the star (SPH particles with nearly horizontal velocity vectors at the top and bottom merger tails in the middle left panel) lags behind the rotation of the star's center and is hit and ablated by the ``nose'' of the companion shortly after the snapshot shown in the middle left panel (see velocity arrows of particles with opposite color at the tips of the noses). This material gets mostly unbound in the first expansion phase of the oscillating double-core structure. Such type of ejecta is less abundant for stiff EoSs. 
In contrast to relativistic simulations, Newtonian models find that the ejecta originate mostly from the tips of tidal tails~\citep[see e.g.][]{2012arXiv1206.2379K}, in particular also in the case of symmetric binaries. Relativistic calculations (within the CFC framework)~\citep{2007A&A...467..395O,2011ApJ...738L..32G} yield the dominant ejection from the contact interface as described above (see also the inset of Fig.~1 in~\citet{2011ApJ...738L..32G}). Recently, the fully relativistic simulations of~\citet{2012arXiv1212.0905H} have provided further support for the ejecta origin from the contact interface, confirming the conclusions of~\citet{2007A&A...467..395O} and~\citet{2011ApJ...738L..32G}. This points to qualitative differences between the Newtonian and relativistic mass-loss dynamics, with the important consequence that in relativistic simulations all of the ejecta are shock-heated, while in Newtonian calculations the cold, tidally stripped material dominates. A quantitative comparison between Newtonian~\citep{2012arXiv1206.2379K,2012arXiv1204.6240R,2012arXiv1204.6242P,2012arXiv1210.6549R} and relativistic simulations~\citep{2007A&A...467..395O,2011ApJ...738L..32G} also reveals considerable discrepancies. For instance, simulations of a 1.4-1.4~$M_{\odot}$ merger with the Shen EoS in Newtonian theory produce more than $10^{-2}~M_{\odot}$ ejecta~\citep{2012arXiv1206.2379K,2012arXiv1204.6240R,2012arXiv1204.6242P,2012arXiv1210.6549R}, whereas the relativistic calculations of this study and in~\citet{2007A&A...467..395O} and~\citet{2011ApJ...738L..32G} yield only a few $10^{-3}~M_{\odot}$ of unbound material for the 1.35-1.35~$M_{\odot}$ binary with the same EoS. Comparing the results of our study with the likewise relativistic calculations in~\citet{2012arXiv1212.0905H} shows very good agreement for all four EoSs used in~\citet{2012arXiv1212.0905H}. 
For example, for the APR EoS with $\Gamma_{\mathrm{th}}=2$ both groups find about $5\times 10^{-3}~M_{\odot}$ of unbound matter. This is remarkable because the implementations differ with respect to the hydrodynamics scheme, which is an SPH (smoothed particle hydrodynamics) algorithm here but a grid-based, high-resolution central scheme in~\citet{2012arXiv1212.0905H}. (Note that we employ the conformal flatness approximation whereas the calculations in~\citet{2012arXiv1212.0905H} are conducted within full general relativity.) These findings provide confidence in the results on the quantitative level and point towards fundamental differences between Newtonian and relativistic treatments. Such differences are not unexpected because NSs are more compact in general relativity than in Newtonian gravity. The stronger gravitational attraction prevents the formation of pronounced tidal tails at the outer faces of the colliding stars and increases the strength of the collision. \subsection{Equation of state dependence} Several NS EoSs have been employed in merger simulations by different groups, but a large, systematic investigation of the EoS dependence of the ejecta production is still missing, in particular with a consistent description of thermal effects. For a given EoS the radius $R_{1.35}$ of a nonrotating NS with 1.35~$M_{\odot}$ is a characteristic quantity specifying the compactness of NSs. Therefore, we use $R_{1.35}$ to describe the influence of the high-density EoS on the amount of NS merger ejecta. The upper left panel of Fig.~\ref{fig:mejr135} displays the amount of unbound material as a function of $R_{1.35}$ for all 40 EoSs used in our study. Red crosses identify EoSs which provide the full temperature dependence. The black symbols correspond to barotropic zero-temperature EoSs, which are supplemented by a thermal ideal-gas component choosing $\Gamma_{\mathrm{th}}=2$ (see Sect.~\ref{sec:code}). 
Results based on the same zero-temperature EoS but with $\Gamma_{\mathrm{th}}=1.5$ are given in blue at the same radius $R_{1.35}$. Small symbols indicate results for EoSs which are excluded by the pulsar mass measurement of~\citet{2010Natur.467.1081D}. Circles mark cases which lead to the prompt collapse to a black hole. \begin{figure*} \includegraphics[width=8.9cm]{f3a.eps} \includegraphics[width=8.9cm]{f3b.eps} \includegraphics[width=8.9cm]{f3c.eps} \includegraphics[width=8.9cm]{f3d.eps} \caption{\label{fig:mejr135}Amount of unbound material for 1.35-1.35~$M_{\odot}$ mergers (top left) and 1.2-1.5~$M_{\odot}$ mergers (top right) for different EoSs characterized by the corresponding radius $R_{1.35}$ of a nonrotating NS. Red crosses denote EoSs which include thermal effects consistently, while black (blue) symbols indicate zero-temperature EoSs that are supplemented by a thermal ideal-gas component with $\Gamma_{\mathrm{th}}=2$ ($\Gamma_{\mathrm{th}}=1.5$) (see main text). Small symbols represent EoSs which are incompatible with current NS mass measurements~\citep{2010Natur.467.1081D}. Circles display EoSs which lead to the prompt collapse to a black hole. The lower panels display the sum of the maxima of the coordinate velocities of the mass centers of the two binary components as a function of $R_{1.35}$ for symmetric (bottom left) and asymmetric (bottom right) binaries.} \end{figure*} One can recognize a clear EoS dependence of the ejecta mass, where EoSs with a high compactness of the NSs lead to an enhanced production of unbound material. The ejecta mass can be as big as about 0.01~$M_\odot$ for symmetric mergers with a total binary mass $M_{\mathrm{tot}}=M_1+M_2=2.7~M_{\odot}$. EoSs with relatively large NS radii lead to outflow masses of about 0.001 to 0.002~$M_\odot$. For EoSs with approximately the same $R_{1.35}$ the ejecta masses show a scatter of up to 0.003~$M_{\odot}$. 
However, considering only EoSs with a fully consistent description of thermal effects (red symbols), the variations are smaller. Only one simplified EoS (eosAU) leads to a prompt collapse of the merger remnant and yields significantly smaller ejecta masses (circles). Using the radius $R_{1.6}$ of a nonrotating NS with 1.6~$M_{\odot}$ or the radius $R_{\mathrm{max}}$ of the maximum-mass Tolman-Oppenheimer-Volkoff solution to characterize an EoS results in diagrams similar to the upper left panel of Fig.~\ref{fig:mejr135}. However, no clear trend can be found for the ejecta mass as a function of the maximum mass of nonrotating NSs. We therefore conclude that the NS compactness is the crucial EoS parameter determining the ejecta mass. Indications of such a behavior were already observed in simulations for four simplified EoSs with an approximate temperature treatment~\citep{2012arXiv1212.0905H}. The dynamics of the merger explain why small NS radii lead to higher ejecta masses. For smaller $R_{1.35}$ the inspiral phase lasts longer and the stars reach higher velocities before they collide. The relation between the impact velocity and the NS radius is clearly seen in the lower left panel of Fig.~\ref{fig:mejr135}, which displays the sum of the maxima of the coordinate velocities of the mass centers of the two binary components. The maximum of the coordinate velocity is reached shortly after the first contact, before the cores of the NSs are decelerated by the collision. The clash of more compact NSs is more violent and more material is squeezed out from the collision interface; hence, the negative correlation of the impact velocities with $R_{1.35}$ is mirrored by a similar negative correlation of $M_{\mathrm{ejecta}}$ with $R_{1.35}$. Moreover, for smaller $R_{1.35}$ the central remnant consisting of the double cores rotates faster and the bounce and rebounce are stronger, i.e. the surface of the remnant moves faster and pushes away matter more efficiently. 
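The role of the compactness can be made concrete with a quick estimate. The following minimal sketch (our own illustration, not part of the simulation code; constants are standard cgs values) evaluates the dimensionless compactness $GM/(Rc^2)$ of a 1.35~$M_{\odot}$ NS for the radii $R_{1.35}$ of the NL3, DD2 and SFHO EoSs quoted in this paper:

```python
# Dimensionless compactness C = G M / (R c^2) of a 1.35 M_sun NS for the
# three representative EoSs discussed in the text (R_1.35 in km).
G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
C_LIGHT = 2.998e10  # speed of light [cm/s]
M_SUN = 1.989e33    # solar mass [g]

def compactness(mass_msun, radius_km):
    """Dimensionless compactness GM/(Rc^2)."""
    return G * mass_msun * M_SUN / (radius_km * 1.0e5 * C_LIGHT**2)

for eos, r135 in [("NL3", 14.75), ("DD2", 13.21), ("SFHO", 11.74)]:
    print(f"{eos}: R_1.35 = {r135} km, C = {compactness(1.35, r135):.3f}")
```

More compact configurations (smaller $R_{1.35}$, larger $C$) reach higher impact velocities before the collision and therefore eject more shock-heated matter, consistent with the trend in Fig.~\ref{fig:mejr135}.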
\subsection{Influence of the approximate treatment of thermal effects} A number of simulations of our survey (black and blue symbols) as well as calculations by other groups ~\citep[e.g.][]{2011ApJ...736L..21R,2012arXiv1212.0905H} rely on an approximate description of thermal effects in the EoS, which requires the specification of an effective thermal ideal-gas index (see Sect.~\ref{sec:code}). As can be seen in the upper left panel of Fig.~\ref{fig:mejr135}, the choice of the value for this ideal-gas index has a considerable impact on the ejecta mass; the simulations with $\Gamma_{\mathrm{th}}=1.5$ (blue symbols) yield generally more unbound matter. The reason is the reduced thermal pressure support, which means that the two dense cores can approach each other more closely during the collision, which results in a more violent impact and shearing motion and thus in more material being squeezed out from the collision interface and in a more powerful oscillation of the central remnant. This can be clearly seen by following the centers of mass of the two cores or the evolution of the central lapse function. The optimal choice of $\Gamma_{\mathrm{th}}$ is a priori unclear and may be different for different EoSs. To address this issue we performed additional simulations (not shown in Fig.~\ref{fig:mejr135}) for temperature-dependent EoSs (SFHO, DD2, TM1, NL3) after reducing them to the zero-temperature sector (with the constraint of neutrino-less beta-equilibrium) and supplementing them with the approximate description of thermal effects using $\Gamma_{\mathrm{th}}=1.5,~1.8$ and 2. 
The comparison with the fully consistent simulations reveals that generally a choice of $\Gamma_{\mathrm{th}}=1.5$ yields the best quantitative agreement with only a slight underestimation of about 10 per cent (20 per cent for the SFHO EoS), whereas the ejecta masses with $\Gamma_{\mathrm{th}}=1.8$ or $\Gamma_{\mathrm{th}}=2$ are significantly too low compared to the fully consistent models (for the tested EoSs between 20 and 40 per cent for $\Gamma_{\mathrm{th}}=2$ and between 15 and 35 per cent for $\Gamma_{\mathrm{th}}=1.8$). The fact that a relatively low $\Gamma_{\mathrm{th}}$ reproduces the ejecta properties best contrasts with the finding that a higher $\Gamma_{\mathrm{th}}$ (in the range between 1.5 and 2) has turned out to be more suitable for describing gravitational-wave features and the post-merger collapse behavior~\citep[see][]{2010PhRvD..82h4043B}, i.e. the bulk mass motion of the colliding stars. The reason for this discrepancy is the density dependence of $\Gamma_{\mathrm{th}}$, which drops from about 2 at supranuclear densities to about $4/3$ for densities below $\approx 10^{11}~\mathrm{g/cm^3}$ (see Fig.~2 in~\citet{2010PhRvD..82h4043B}). While the dynamics of the bulk mass of the merging objects, which is responsible for the gravitational-wave production, is fairly well captured with a choice of $\Gamma_{\mathrm{th}}\sim 2$, unbound fluid elements originating from the inner crust, where most of the ejecta stem from, encounter different density regimes. Consequently, the ejecta behavior cannot be well modelled with the $\Gamma_{\mathrm{th}}$ that is appropriate for high-density matter. We found the best compromise to be $\Gamma_{\mathrm{th}}\approx 1.5$, but we stress that the results of simulations using an approximate treatment of thermal effects should be taken with caution and may not be quantitatively reliable in all aspects. 
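The structure of the approximate thermal treatment discussed here can be sketched as follows (a minimal illustration; the polytropic cold EoS and all numerical values are hypothetical stand-ins, not the tabulated EoSs used in this work): the total pressure is assembled from a barotropic cold part plus an ideal-gas thermal part $P_{\mathrm{th}}=(\Gamma_{\mathrm{th}}-1)\,\rho\,\epsilon_{\mathrm{th}}$.

```python
# Illustrative sketch of a cold, barotropic EoS supplemented by an
# ideal-gas thermal pressure component with effective index gamma_th.
# The cold polytrope (k_cold, gamma_cold) is a hypothetical stand-in.

def pressure(rho, eps, gamma_th=1.5, k_cold=1.0e5, gamma_cold=3.0):
    """Total pressure = cold polytropic part + (gamma_th - 1) * rho * eps_th."""
    p_cold = k_cold * rho**gamma_cold
    eps_cold = p_cold / ((gamma_cold - 1.0) * rho)  # cold specific internal energy
    eps_th = max(eps - eps_cold, 0.0)               # thermal excess energy
    return p_cold + (gamma_th - 1.0) * rho * eps_th
```

For given $\rho$ and $\epsilon_{\mathrm{th}}$, lowering $\Gamma_{\mathrm{th}}$ from 2 to 1.5 halves the thermal pressure, which is the reduced pressure support invoked above to explain the more violent impact and larger ejecta masses.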
\subsection{Asymmetric binaries}\label{ssec:asym} Even though most NS binaries are expected to be nearly symmetric systems with a total mass of about 2.7~$M_{\odot}$, we investigate the mass ejection of an asymmetric setup to check whether an EoS dependence exists in this case as well. The upper right panel of Fig.~\ref{fig:mejr135} displays the ejecta masses for simulations of asymmetric binaries with a 1.2~$M_{\odot}$ NS and a 1.5~$M_{\odot}$ NS. Again, the radius $R_{1.35}$ of a nonrotating NS is used to characterize different EoSs. Here we restrict ourselves to EoSs which provide the full temperature dependence. In comparison to the symmetric binary mergers the amount of unbound material is significantly larger: the ejecta masses are about a factor of two higher than for the symmetric binaries with the same total binary mass. Also for asymmetric binaries a decrease of $M_{\mathrm{ej}}$ with larger $R_{1.35}$ is visible, but the scatter between models with similar $R_{1.35}$ is larger. The lower right panel of Fig.~\ref{fig:mejr135} shows the sum of the maxima of the coordinate velocities of the mass centers of the two asymmetric binary components. As in the symmetric case the two stars collide with a higher impact velocity if the initial radii of the NSs are smaller. \begin{figure*} \includegraphics[width=8.9cm]{f4a.eps} \includegraphics[width=8.9cm]{f4b.eps} \includegraphics[width=8.9cm]{f4c.eps} \includegraphics[width=8.9cm]{f4d.eps} \includegraphics[width=8.9cm]{f4e.eps} \includegraphics[width=8.9cm]{f4f.eps} \caption{\label{fig:snapasym}Same as Fig.~\ref{fig:snap}, but for the asymmetric 1.2-1.5~$M_{\odot}$ binary. Here the propagation speed of every 20th particle is indicated by an arrow and the side lengths of the panels differ from those of Fig.~\ref{fig:snap}. 
In the upper panels the lower-mass star is identified by the black particles.} \end{figure*} \begin{figure*} \includegraphics[width=8.9cm]{f5a.eps} \includegraphics[width=8.9cm]{f5b.eps} \caption{\label{fig:snap_xz}Distribution of the conserved rest-mass density (color-coded and logarithmically plotted in $\mathrm{g/cm^3}$) in a plane perpendicular to the orbital plane for the 1.35-1.35~$M_{\odot}$ merger (left) and the 1.2-1.5~$M_{\odot}$ merger (right) with the DD2 EoS. The three-dimensional positions of unbound particles are projected into the cross-sectional plane, with green and white distinguishing matter originating from the two NSs. The velocity of every 20th particle is indicated by an arrow. The time of the snapshots is given below the color bar of each panel.} \end{figure*} Due to the asymmetry the dynamics of the merger proceeds differently from the symmetric case (see Fig.~\ref{fig:snapasym}). Prior to the merging the less massive binary component is deformed into a drop-like structure with the cusp pointing to the 1.5~$M_{\odot}$ NS (top panels). After the stars begin to touch each other, the lighter companion is stretched and a massive tidal tail forms (middle left panel). The deformed 1.2~$M_{\odot}$ component is wound around the more massive companion (middle panels). Also in the case of asymmetric mergers the majority of the ejecta originates from the contact interface of the collision, i.e. from the cusp of the ``tear drop'' and from the equatorial surface of the more massive companion, where the impact ablates matter (see top panels). Some matter at the tip of the cusp directly fulfills the ejecta criterion (top right panel), while the majority obtains an additional push by the interaction with the asymmetric, mass-shedding central remnant and the developing spiral arms (middle right and bottom panels). 
A smaller fraction of the ejecta, roughly 25 per cent, originates from the outer end of the primary tidal tail (particles in the lower part of the top right panel). A part of this matter becomes unbound by tidal forces (at the tip of the tidal tail in the middle left panel) and the remaining fraction by an interaction with the central remnant (middle left panel). Figure~\ref{fig:snap_xz} displays the distribution of the ejecta in a plane perpendicular to the binary orbit for the symmetric merger (left panel) compared to the asymmetric merger (right panel) for the last timesteps shown in Fig.~\ref{fig:snap} and Fig.~\ref{fig:snapasym}, respectively. A considerable fraction of the ejected matter is expelled at large angles relative to the orbital plane. For a timestep about 5~ms later the ejecta geometry is visualized (azimuthally averaged) in Fig.~\ref{fig:ejectageo}, excluding the bound matter. For both mergers the outflows exhibit a (torus- or donut-like) anisotropy with an axis ratio of about 2:3. The velocity fields also show a slight dependence on the direction. \begin{figure*} \includegraphics[width=8.9cm]{f6a.eps} \includegraphics[width=8.9cm]{f6b.eps} \caption{\label{fig:ejectageo}Ejecta geometry visualized by the rest-mass density (color-coded and logarithmically plotted in $\mathrm{g/cm^3}$) excluding matter of the bound central remnant for the 1.35-1.35~$M_{\odot}$ merger (left) and the 1.2-1.5~$M_{\odot}$ merger (right) with the DD2 EoS. Density contours are obtained by azimuthal averaging. Arrows represent the coordinate velocity field where an arrow length of 200~km corresponds to the speed of light. 
The time of the snapshots is given below the color bar of each panel.} \end{figure*} \subsection{Binary parameter dependence}\label{ssec:binpara} The exploration of the full space of possible binary parameters is interesting for the determination of the highest and lowest possible ejecta mass for a given EoS and to understand the influence of the binary setup on the ejecta production. Such an investigation has been conducted only for one EoS (Shen) by Newtonian calculations~\citep{2012arXiv1206.2379K,2012arXiv1204.6240R,2012arXiv1210.6549R} and within a relativistic framework~\citep{2007A&A...467..395O}, which revealed quantitative differences between both approaches. Other surveys using different EoSs have been restricted to a limited variation of the binary masses~\citep{2011ApJ...736L..21R,2011ApJ...738L..32G,2012arXiv1212.0905H}. Here we present the dependence of the ejecta mass on the mass ratio $q=M_1/M_2$ and the total binary mass $M_{\mathrm{tot}}=M_1+M_2$ for a subset of EoSs employed in our study. The NL3, DD2 and SFHO EoSs are chosen because they are representative of the full set of possible EoSs: While the NL3 EoS is relatively stiff, resulting in $R_{1.35}=14.75$~km, the soft SFHO EoS produces rather compact NSs with $R_{1.35}=11.74$~km, and the DD2 represents an intermediate case with $R_{1.35}=13.21$~km. \begin{figure} \includegraphics[width=8.9cm]{f7a.eps} \includegraphics[width=8.9cm]{f7b.eps} \includegraphics[width=8.9cm]{f7c.eps} \caption{\label{fig:binpara3eos}Ejecta mass in $M_{\odot}$ as a function of the mass ratio $q$ and the total binary mass $M_{\mathrm{tot}}$ for the soft SFHO EoS (upper panel), the intermediate DD2 EoS (center panel) and the stiff NL3 EoS (bottom panel). In all panels the simulated binary setups are marked by symbols. 
Crosses indicate the formation of a differentially rotating NS remnant, while circles identify configurations which lead to a direct gravitational collapse.} \end{figure} Figure~\ref{fig:binpara3eos} displays the amount of unbound matter as a function of the mass ratio $q$ and total binary mass $M_{\mathrm{tot}}$ for these three EoSs. The simulated binary configurations that form a differentially rotating NS are indicated in the figures by crosses, whereas systems leading to prompt black-hole formation (within about one millisecond after the first contact) are marked with circles (the ejecta properties are extracted 10~ms after the merging). All three EoSs show qualitatively the same behavior. A clear trend of increasing ejecta masses with larger binary asymmetry is visible. For symmetric binaries that do not undergo gravitational collapse there is also a slight tendency of higher total binary masses leading to more unbound material. The increase with $M_{\mathrm{tot}}$ is more pronounced for asymmetric systems. The occurrence of a prompt collapse results in a significant drop of the ejecta mass. This is an important qualitative difference from Newtonian calculations, which cannot describe and follow the relativistic gravitational collapse. The threshold for the prompt collapse depends sensitively on the EoS, and soft EoSs lead to a collapse for relatively small $M_{\mathrm{tot}}$. For all EoSs the maximum ejecta masses (in all cases slightly below 0.02~$M_{\odot}$) are four to ten times higher than the amount of unbound matter of the symmetric 1.35-1.35~$M_{\odot}$ binaries. Here, the absolute differences between the maximum and the minimum ejected mass for non-collapsing cases are larger for stiff EoSs like the NL3 and less pronounced for soft EoSs like the SFHO. Soft EoSs yield steeper gradients of the ejecta mass in the binary parameter space, i.e. 
a certain variation in the binary parameters leads to a larger change of the ejecta masses than is the case for a stiff EoS. To a good approximation and ignoring cases with a prompt black-hole formation, the setup with two stars of about 1.35~$M_{\odot}$ is the system that produces the smallest amount of ejecta for the majority of investigated EoSs. The SFHO EoS is one of the exceptions, for which, e.g., the 1.2-1.2~$M_{\odot}$ binary yields a factor two to three less ejecta than the 1.35-1.35~$M_{\odot}$ setup. Based on Newtonian calculations a fit formula for the ejecta mass as a fraction of $M_{\mathrm{tot}}$ was proposed as a function of $\eta=1-4M_1M_2/(M_1+M_2)^2$ in~\citet{2012arXiv1206.2379K} and~\citet{2012arXiv1210.6549R}. Reviewing our data (even excluding the prompt-collapse cases) we find a more complicated behavior and we can neither confirm the validity of the suggested fit formula nor find a generalization of it. This is not unexpected in view of the quantitative and qualitative differences between Newtonian and relativistic simulations discussed above. \subsection{Folding with binary populations}\label{ssec:fold} The dependence of the ejecta mass on the binary parameters is essential to determine the total amount of ejecta produced by the binary population within a certain time and thus to estimate the average amount of ejecta per merger event. The properties of the NS binary population are provided by theoretical binary evolution models, which still carry considerable uncertainties related to many complexities of single-star evolution and binary interaction. Using the standard model of~\citet{2012ApJ...759...52D} the folding of our results with the binary population yields an average ejecta mass per merger event of about $3.6\times 10^{-3}~M_{\odot}$ for the NL3 EoS, $3.2\times 10^{-3}~M_{\odot}$ for the DD2 EoS, and $4.3\times 10^{-3}~M_{\odot}$ for the SFHO EoS. 
Therefore, the ejecta masses of the 1.35-1.35~$M_{\odot}$ binary mergers approximate the average amount of ejecta per merger event quite well for the three cases (to within 70 per cent for NL3, 3 per cent for DD2, and 11 per cent for SFHO). This finding is simply a consequence of the fact that the binary distribution is strongly peaked around nearly symmetric systems with $M_{\mathrm{tot}}\approx 2.5~M_{\odot}$, so that the average ejecta mass is not sensitive to the larger ejecta production of asymmetric systems in the suppressed wings of the binary distribution. \section{Nucleosynthesis}\label{sec:nucleo} \subsection{R-process abundances} The potential of NS mergers to produce heavy r-process elements in their ejecta has been demonstrated by several studies based on hydrodynamical simulations~\citep{1999ApJ...525L.121F,2010MNRAS.406.2650M,2011ApJ...736L..21R,2011ApJ...738L..32G,2012arXiv1206.2379K}. These investigations have considered only a few high-density EoSs (two EoSs were used in~\citet{2011ApJ...738L..32G}). Since the NS EoS sensitively affects the dynamics of NS mergers and thus the properties of the ejecta (amount, expansion velocity, electron fraction, temperature), we explore here the influence of the NS EoS on the r-process nucleosynthesis in a systematic way. For a selected, representative set of EoSs we extract the thermodynamical histories of fluid elements which become gravitationally unbound. For these trajectories nuclear network calculations were performed as in~\citet{2011ApJ...738L..32G}, where details on the reaction network, the temperature postprocessing and the density extrapolation beyond the end of the hydrodynamical simulations can be found. The reaction network includes all 5000 species from protons up to Z=110 lying between the valley of $\beta$-stability and the neutron-drip line. 
All fusion reactions on light elements, as well as radiative neutron captures, photodisintegrations, $\beta$-decays and fission processes are included. The corresponding rates are based on experimental data whenever available or on theoretical predictions otherwise, as prescribed in the BRUSLIB nuclear astrophysics library~\citep{2013AA.549.A110}. Figure~\ref{fig:abund135135} shows the final nuclear abundance patterns for the 1.35-1.35~$M_{\odot}$ mergers described by the NL3 (blue), DD2 (red) and SFHO (green) EoSs. For every model about 200 trajectories were processed, which correspond to about one tenth of the total ejecta. Comparing the final abundance distributions of the DD2 EoS for about 200 and the full set of 1000 fluid-element histories reveals a very good quantitative agreement, which proves that a properly chosen sample of about 200 trajectories is sufficient to be representative of the total amount of unbound matter. \begin{figure} \includegraphics[width=8.9cm]{f8.eps} \caption{\label{fig:abund135135}Nuclear abundance pattern for the 1.35-1.35~$M_{\odot}$ mergers with the NL3 (blue), DD2 (red) and SFHO (green) EoSs compared to the solar r-process abundance distribution (black).} \end{figure} \begin{figure} \includegraphics[width=8.9cm]{f9.eps} \caption{\label{fig:abund1215}Nuclear abundance pattern for the 1.2-1.5~$M_{\odot}$ mergers with the NL3 (blue), DD2 (red) and SFHO (green) EoSs compared to the solar r-process abundance distribution (black).} \end{figure} The scaled abundance patterns displayed in Fig.~\ref{fig:abund135135} closely match the solar r-process composition above mass number $A\approx 140$. In particular the third r-process peak around $A=195$ is robustly reproduced by all models. Above mass number $A\approx 100$ the results for the different NS EoSs hardly differ. 
For all three displayed models the peak around $A\approx 140$ is produced by fission recycling, which occurs when the nuclear flow reaches fissioning nuclei around $^{280}$No at the end of the neutron irradiation during the $\beta$-decay cascade. The exact shape and location of this peak are therefore strongly affected by the theoretical modeling of the fission processes (including in particular the fission fragment distribution of the fissioning nuclei) which are still subject to large uncertainties~\citep{go09}. Hence, the deviations from the solar abundance pattern between $A\approx 130$ and $A \approx 170$ are not unexpected, while the third r-process peak around $A=195$ is a consequence of the closed neutron shell at $N=126$, which is robustly predicted by theoretical models. Very similar results were obtained for NS merger models performed with the LS220 and Shen EoSs in~\citet{2011ApJ...738L..32G}. In Fig.~\ref{fig:abund1215} the normalized abundance patterns are shown for asymmetric 1.2-1.5~$M_{\odot}$ mergers employing the same representative EoSs as in Fig.~\ref{fig:abund135135}. Again a very good agreement between the solar r-process abundances and the calculated element distributions above $A\approx 130$ is found for all three high-density EoSs. This confirms earlier findings that the binary mass ratio has a negligible effect on the abundance yield distribution~\citep{2011ApJ...738L..32G}. It also confirms that the ejected abundance distribution is rather insensitive to the adopted EoS. Besides the three temperature-dependent EoSs considered above we conducted network calculations also for merger models computed with zero-temperature EoSs supplemented by an approximate treatment of thermal effects with a $\Gamma_{\mathrm{th}}=2$ ideal-gas component (BSk20, BSk21). In this case the temperature is estimated following a procedure described in~\citet{2008PhRvD..77h4002E}, which converts the specific internal energy to temperature values. 
In doing so, it is assumed that the energy of the thermal ideal-gas component is composed of the thermal energy of an ideal nucleon gas and a contribution from ultrarelativistic particles (photons and possibly electrons, positrons and neutrinos). \begin{figure} \includegraphics[width=8.9cm]{f10.eps} \caption{\label{fig:abundbsk}Nuclear abundance pattern for the 1.35-1.35~$M_{\odot}$ mergers with the BSk20 (red) and BSk21 (blue) EoSs compared to the solar r-process abundance distribution (black).} \end{figure} The network calculations for the BSk20 and BSk21 EoSs yield an abundance pattern above $A\approx 130$ very similar to that of the fully temperature-dependent EoSs (see Fig.~\ref{fig:abundbsk}). Differences between the fully consistent models and the simulations with approximate temperature treatment are found below mass number $A\approx 50$, where the calculations with the BSk EoSs yield a lower amount of elements with $5<A<50$ but a higher mass fraction of hydrogen, deuterium and helium. The reason is the higher temperatures found with the BSk EoSs at the beginning of the network calculations, which lead to a reduced recombination of nucleons and $\alpha$-particles and consequently a smaller production of heavier nuclei. In this respect the conditions in the outflows of these models resemble the situation in the neutrino-driven winds of core-collapse supernovae but for significantly higher neutron excesses. Overall, it is reassuring that the BSk models yield a similar abundance pattern of r-process elements, although these calculations rely on an approximate incorporation of thermal effects and a rough estimate of temperatures. It is an important finding of our work that r-process elements are robustly produced for a representative, diverse sample of high-density EoSs and that the outcome is insensitive to the exact initial temperature conditions and the binary setup. 
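The described conversion from specific internal energy to temperature can be sketched as follows. This is a hedged illustration of the stated energy split, not the actual implementation: for simplicity only the photon term represents the ultrarelativistic contribution, and the temperature is recovered by simple bisection.

```python
# Hedged sketch: thermal specific energy modeled as ideal nucleon gas plus
# a radiation-like term, inverted numerically to estimate the temperature.
K_B = 1.381e-16    # Boltzmann constant [erg/K]
M_B = 1.66e-24     # baryon mass [g]
A_RAD = 7.566e-15  # radiation constant [erg cm^-3 K^-4] (photons only here)

def eps_thermal(temp, rho):
    """Thermal specific internal energy [erg/g]: nucleon gas + radiation."""
    return 1.5 * K_B * temp / M_B + A_RAD * temp**4 / rho

def temperature(eps_th, rho, t_max=1.0e12):
    """Invert eps_thermal(T, rho) = eps_th by bisection (monotonic in T)."""
    lo, hi = 0.0, t_max
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if eps_thermal(mid, rho) < eps_th:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At low densities the radiation term dominates, so an approximate EoS that runs hotter at the start of the network calculation suppresses the recombination of nucleons into $\alpha$-particles, as described above for the BSk models.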
\subsection{Merger rates} The above-mentioned variations in the production of light elements are also reflected in the fraction of the ejecta which ends up as r-process elements. In Tab.~\ref{tab:nucleo} a clear difference is observed between the temperature-dependent EoSs (NL3, DD2, SFHO) and the BSk EoSs with approximate temperature treatment. While in the former cases about 96 to 99 per cent of the ejecta are converted to r-process elements, only 93 to 95 per cent of the ejecta are processed to r-process material in the latter models. This variation is a consequence of the temperature history affecting the production of light nuclei, as discussed above. Despite these differences, it is justified to assume that almost the total amount of ejecta is converted to heavy r-process elements, also considering the uncertainties in the determination of the exact ejected masses from simulations. Furthermore, in Sect.~\ref{sec:masses} it was argued that the total amount of ejecta, and thus the r-process material produced per event by the population of NS binaries, can be well represented by the yield of the 1.35-1.35~$M_{\odot}$ merger. This allows an important consistency check by comparing the theoretically expected production to the observed amount of r-process matter with $A\gtrsim 130$ in the Galaxy, which is estimated to be about $4\times 10^3~M_{\odot}$~\citep{2000ApJ...534L..67Q}. In order to produce this amount of heavy r-process elements within the Galactic history of about $10^{10}$~yr, one requires a merger rate of $4\times 10^{-4}/\mathrm{yr}$ if every coalescence ejects on average $10^{-3}~M_{\odot}$, which in our EoS survey corresponds to the lower bound on the ejecta mass of 1.35-1.35~$M_{\odot}$ mergers (see upper left panel of Fig.~\ref{fig:mejr135}). 
Similarly, assuming that NS mergers are the dominant source of heavy r-process elements, the upper bound of $10^{-2}~M_{\odot}$ for the ejected mass from 1.35-1.35~$M_{\odot}$ binaries (i.e. for EoSs with small $R_{1.35}$) would be compatible with a merger rate of $4\times 10^{-5}/\mathrm{yr}$. These rate estimates lie in the ballpark of theoretical predictions ranging from $10^{-6}$ to $10^{-3}$ per year~\citep{2010CQGra..27q3001A}. This implies that all EoSs of our survey are compatible with NS mergers being the dominant or a major source of r-process elements. This conclusion on the merger rate may be tested against future observations, in particular by multiple gravitational-wave detections or frequent observations of electromagnetic counterparts. Our work emphasizes that, in addition to a more accurate merger rate, information on the high-density EoS is needed to shed light on NS mergers as a major source of r-process elements. More specifically, a merger rate of about $4\times 10^{-5}/\mathrm{yr}$ may imply either that nearly all heavy r-process elements are made by NS mergers in the case of a soft high-density EoS with small NS radius $R_{1.35}$, or that only a tenth of the observed r-process material originates from NS binaries if a stiff NS EoS with large $R_{1.35}$ is confirmed. Conversely, considering the robustness of the r-process nucleosynthesis in NS mergers, one can infer that, assuming a minimal production of $\sim 10^{-3}~M_{\odot}$ of r-nuclei per event and a constant merger rate during the life of the Galaxy, this rate cannot be higher than roughly $4\times 10^{-4}/\mathrm{yr}$; otherwise more r-process material would have been produced than is presently observed in the Galaxy. This bound is comparable to the ``optimistic'' limit given in~\citet{2010CQGra..27q3001A}. Thus, the r-process element content in the Galaxy establishes further, independent evidence for an upper limit on the merger rate below $\sim 4\times 10^{-4}/\mathrm{yr}$. 
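These rate estimates follow from simple bookkeeping, which can be reproduced with the numbers quoted above (a Galactic heavy r-process inventory of about $4\times 10^3~M_{\odot}$ accumulated over roughly $10^{10}$~yr; the helper function is our own illustration):

```python
# Back-of-the-envelope check of the merger-rate bounds quoted in the text:
# the rate needed to build up the Galactic heavy r-process inventory for a
# given average ejecta mass per merger event.
M_RPROCESS_GALAXY = 4.0e3  # M_sun, observed heavy (A >~ 130) r-process mass
T_GALAXY = 1.0e10          # yr, approximate Galactic enrichment history

def required_rate(m_ej_per_event):
    """Merger rate [1/yr] needed to produce the Galactic r-process mass."""
    return M_RPROCESS_GALAXY / (T_GALAXY * m_ej_per_event)

print(required_rate(1.0e-3))  # lower bound on ejecta mass -> 4e-4 per yr
print(required_rate(1.0e-2))  # upper bound on ejecta mass -> 4e-5 per yr
```

The inverse scaling makes explicit why a soft EoS (large ejecta mass) is compatible with a low merger rate and a stiff EoS requires a correspondingly higher rate.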
A restriction to soft EoSs, e.g. from other physical constraints like~\citet{2010PhRvL.105p1102H}, would lead to a smaller upper limit for the event rate of NS mergers. \begin{table*} \begin{ruledtabular} \caption{\label{tab:nucleo}Nucleosynthesis calculations} \begin{tabular}{l c c c c c c c c } \tableline \tableline Model $M_1$-$M_2$ & $M_{\mathrm{ej}}$ & $t_{\mathrm{peak}}$ & $M_{\mathrm{r-process}}/M_{\mathrm{ej}}$ & $^{232}$Th/$^{238}$U & $^{232}$Th/$^{235}$U & Th/Eu & $E_{\mathrm{heat}}$ & $f$ \\ & $(10^{-3}~M_{\odot})$ & (d) & & & & & (MeV/A) & \\ \tableline NL3 1.35-1.35 & 2.09 & 0.171 & 0.989 & 1.644 & 1.072 & 0.695 & 3.34 & $1.8\times 10^{-6}$ \\ DD2 1.35-1.35 & 3.07 & 0.189 & 0.980 & 1.671 & 1.080 & 0.627 & 3.13 & $1.6\times 10^{-6}$ \\ SFHO 1.35-1.35 & 4.83 & 0.228 & 0.991 & 1.642 & 1.039 & 0.579 & 3.17 & $1.4\times 10^{-6}$ \\ NL3 1.2-1.5 & 7.95 & 0.338 & 0.964 & 1.670 & 1.112 & 0.714 & 3.36 & $1.3\times 10^{-6}$ \\ DD2 1.2-1.5 & 8.79 & 0.354 & 0.986 & 1.697 & 1.109 & 0.658 & 3.11 & $1.2\times 10^{-6}$ \\ SFHO 1.2-1.5 & 13.39 & 0.418 & 0.974 & 1.685 & 1.085 & 0.543 & 3.12 & $1.1\times 10^{-6}$ \\ BSk21 1.35-1.35 & 3.36 & 0.162 & 0.948 & 1.660 & 1.023 & 0.704 & 2.97 & $1.6\times 10^{-6}$ \\ BSk20 1.35-1.35 & 4.68 & 0.195 & 0.931 & 1.689 & 1.021 & 0.698 & 2.96 & $1.5\times 10^{-6}$ \\ \tableline \end{tabular} \tablecomments{Selected models for which nucleosynthesis calculations were performed. $M_{\mathrm{ej}}$ is the amount of unbound matter, whereas $t_{\mathrm{peak}}$ is the peak time of an optical transient associated with a NS merger (see Sect.~\ref{sec:opttrans}). The fourth column gives the fraction of the ejecta which is processed into r-process elements. The production ratios of certain elements and isotopes are provided in the fifth to seventh columns. $E_{\mathrm{heat}}$ denotes the total amount of energy released by radioactive decays (without neutrino energy). 
The factor $f$ approximates the radioactive heat generation around the time of the optical peak luminosity relative to the rest-mass energy of the ejecta (see text).} \end{ruledtabular} \end{table*} \subsection{Actinide production ratios and stellar chronometry} Some of the heaviest long-lived radioactive nuclei produced by the r-process can be used as nucleo-cosmochronometers. In particular the abundance ratios of thorium to europium and thorium to uranium have been proposed for estimating the age of the oldest stars in our Galaxy. More specifically, a simple comparison of the observed abundance ratio with the production ratio can provide an age estimate of the contaminated object~\citep{1987Natur.328..127B,1993A&A...274..821F,1999ApJ...521..194C,1999A&A...346..798G,2001A&A...379.1113G,2001Natur.409..691C,2007ApJ...660L.117F,2008ARA&A..46..241S}. In addition, if we consider low-metallicity stars polluted by a small number of nucleosynthetic events that took place just before the formation of the stars, the age of the star can be estimated without invoking a complex model of the chemical evolution of the Galaxy. The major difficulty of the methodology is therefore related to the theoretical estimate of the r-production ratio and the corresponding uncertainties of astrophysics and nuclear physics origin that may affect this prediction. In this respect, the ${}^{232}$Th to ${}^{238}$U chronometry has been shown to be relatively robust, in particular in comparison with the Th/Eu chronometry, i.e. to be less affected by the still large astrophysics and nuclear physics uncertainties affecting our understanding of the r-process nucleosynthesis~\citep{1999A&A...346..798G,2001Natur.409..691C,2007ApJ...660L.117F}. In Tab.~\ref{tab:nucleo} we provide the production ratios of the ${}^{232}$Th to ${}^{238}$U isotopes based on our NS merger simulations. 
Assuming that r-process enhanced metal-poor stars were enriched by one or a few NS merger events, we can derive ages within this scenario. From the observed ratio of $\log{\mathrm{(U/Th)}}_{\mathrm{obs}}=-0.94\pm 0.09$ for the metal-poor star CS31082-001~\citep{2001Natur.409..691C} (see also \citet{2001A&A...379.1113G} for updated values), we compute the age as $\Delta t=21.8~\mathrm{Gyr}\left[\log{\mathrm{(U/Th)}}_{0}-\log{\mathrm{(U/Th)}}_{\mathrm{obs}}\right]=15.7$~Gyr with the production ratio $\log{\mathrm{(U/Th)}}_{0}=-0.22\pm0.01$ (see Tab.~\ref{tab:nucleo}). The observational uncertainty of 0.09 dex dominates the error on the age estimate, since it amounts to about 2.0~Gyr, while the theoretical uncertainties associated with the different EoSs (Tab.~\ref{tab:nucleo}) give only a 0.2~Gyr error. Significant additional uncertainties stem from the nuclear physics aspects of the r-process nucleosynthesis; those will be studied within the NS merger model in a forthcoming paper. The derived age of this halo star lies within the ballpark of other age estimates~\citep{2001Natur.409..691C}. For the metal-poor star HE~1523-0901 ($\log{\mathrm{(U/Th)}}_{\mathrm{obs}}=-0.86\pm0.13$)~\citep{2007ApJ...660L.117F} our ${}^{232}$Th to ${}^{238}$U production ratio implies an age of about $14.0\pm2.8$~Gyr, which is also within the range of other calculations~\citep{2007ApJ...660L.117F}. Tab.~\ref{tab:nucleo} also lists the production ratios of ${}^{232}$Th to ${}^{235}$U as well as Th to Eu. For age estimates the Th/Eu chronometer has been widely used, although it remains highly sensitive to all types of uncertainties. In this case, the stellar age is derived from $\Delta t=46.7~\mathrm{Gyr}\left[\log{\mathrm{(Th/Eu)}}_{0}-\log{\mathrm{(Th/Eu)}}_{\mathrm{obs}}\right]$, so that a 25\% error on the production or observed Th/Eu ratio gives rise to an uncertainty of about 5~Gyr on the stellar age.
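The chronometer arithmetic can be reproduced in a few lines; this is a minimal sketch, and the half-life-based derivation of the 21.8~Gyr prefactor in the comment is our own annotation of the standard formula, not taken from the table:

```python
import math

# Th/U chronometer: Delta_t = 21.8 Gyr * [log(U/Th)_0 - log(U/Th)_obs].
# The prefactor is ln(10) / (lambda_U238 - lambda_Th232), with half-lives
# of 4.468 Gyr (238U) and 14.05 Gyr (232Th).
LAMBDA_U = math.log(2) / 4.468    # 1/Gyr
LAMBDA_TH = math.log(2) / 14.05   # 1/Gyr
PREFACTOR = math.log(10) / (LAMBDA_U - LAMBDA_TH)   # ~21.8 Gyr

def age_uth(log_prod, log_obs):
    """Stellar age in Gyr from the U/Th production and observed ratios."""
    return 21.8 * (log_prod - log_obs)

# CS31082-001: log(U/Th)_obs = -0.94 +/- 0.09; production ratio -0.22 +/- 0.01
print(round(age_uth(-0.22, -0.94), 1))   # 15.7 Gyr
print(round(21.8 * 0.09, 1))             # observational error: ~2.0 Gyr
print(round(21.8 * 0.01, 1))             # EoS-related error: ~0.2 Gyr

# Th/Eu: Delta_t = 46.7 Gyr * [log(Th/Eu)_0 - log(Th/Eu)_obs];
# a 25% (0.097 dex) ratio error already costs ~4.5 Gyr in age:
print(round(46.7 * math.log10(1.25), 1))
```

The script makes explicit why the observational 0.09~dex uncertainty dominates the error budget of the U/Th age.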
Special care should therefore be taken with the associated uncertainties when applying this chronometer pair~\citep{2001A&A...379.1113G}. Considering our production ratio of about $\log{\mathrm{(Th/Eu)}}_{0}=-0.20\pm 0.06$ (see Tab.~\ref{tab:nucleo}), we find an age of $\Delta t=17.7\pm2.8$~Gyr for HE~1523-0901 ($\log{\mathrm{(Th/Eu)}}_{\mathrm{obs}}=-0.58$, with an additional 4.8~Gyr uncertainty based on observation~\citep{2007ApJ...660L.117F}). We stress again that as long as the r-process site remains unidentified, corresponding uncertainties of the production ratios have to be taken into account, as well as uncertainties arising from the nuclear physics input in network calculations. We also point out the possibility of a measurable event-to-event variation in the production ratios, for instance in the case of NS mergers caused by the unknown binary configuration (see Tab.~\ref{tab:nucleo}); for other sites, too, a progenitor dependence cannot be excluded. The production ratios summarized in Tab.~\ref{tab:nucleo} are of particular importance since NS mergers may well be a major source of r-process elements and current supernova models cannot provide suitable conditions for the formation of the heaviest r-process elements~\citep{2008ApJ...676L.127H,2008A&A...485..199J,2010ApJ...722..954R,2010PhRvL.104y1101H,2010A&A...517A..80F,2011ApJ...726L..15W}. The relatively reliable age estimates from the ${}^{232}$Th to ${}^{238}$U ratio are compatible with the age of the universe, and thus NS mergers cannot be excluded as the source of the contamination of the considered metal-poor stars.
\subsection{Nuclear heating}\label{ssec:heat}
Another outcome of our nucleosynthesis calculations is the determination of the heating due to radioactive decays in the ejecta.
This is particularly important because the radioactive heating provides the energy source for an optical counterpart associated with NS mergers (see~\citet{1998ApJ...507L..59L},~\citet{2005astro.ph.10256K} and~\citet{2010MNRAS.406.2650M} and Sect.~\ref{sec:opttrans}). Our calculations allow us to check the robustness and general behavior of the heating rate. In all models the heating rate due to radioactive decays (beta-decays, fission and alpha-decays) is similar (see Fig.~3 in~\citet{2011ApJ...738L..32G}). For instance, at the time $t_{\mathrm{peak}}$, when the luminosity of the optical transient reaches its maximum (typically several hours; see Sect.~\ref{sec:opttrans}), the heating rate varies from $3\times 10^{10}$~erg/g/s to $1\times 10^{11}$~erg/g/s for the cases where detailed nucleosynthesis calculations were made. This implies that the heating efficiency $f\equiv \dot{Q}(t_{\mathrm{peak}})t_{\mathrm{peak}}/(M_{\mathrm{ej}}c^2)$, which enters the estimates of optical emission properties (see Sect.~\ref{sec:opttrans} and~\citet{1998ApJ...507L..59L} and~\citet{2010MNRAS.406.2650M}), varies between $1.1\times 10^{-6}$ and $1.8\times 10^{-6}$ (see Tab.~\ref{tab:nucleo}). One observes a moderate dependence of $f$ on the EoS and the mass ratio, caused by the longer duration $t_{\mathrm{peak}}$ of the emission peak for larger ejecta masses in models with soft EoSs or asymmetric binaries. Overall, $f\approx (1.5 \pm 0.3) \times 10^{-6}$ seems a fair approximation, which is half of the value suggested in~\citet{2010MNRAS.406.2650M} for a longer peak time of about one day. As detailed in Sect.~\ref{sec:opttrans}, our relativistic merger simulations suggest shorter peak times in the range of 2 to 7 hours for symmetric binaries. The total amount of energy released by radioactive decays (without neutrinos) is about $3.2\pm0.2$~MeV per nucleon (see Tab.~\ref{tab:nucleo}).
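For orientation, the definition of $f$ can be evaluated per unit ejecta mass, so that $M_{\mathrm{ej}}$ drops out; the numbers below are illustrative placeholders within the quoted ranges, not a specific table entry:

```python
# Order-of-magnitude check of the heating efficiency
# f = Qdot(t_peak) * t_peak / (M_ej * c^2).  Working per gram of ejecta,
# M_ej cancels and c^2 is the rest-mass energy per gram.
C2 = (2.998e10) ** 2   # (cm/s)^2, i.e. erg per gram of rest mass

def heating_efficiency(qdot, t_peak_hours):
    """qdot in erg/g/s at the peak; t_peak in hours."""
    return qdot * t_peak_hours * 3600.0 / C2

# e.g. Qdot = 5e10 erg/g/s and t_peak = 5 h, both within the quoted ranges:
f = heating_efficiency(5e10, 5.0)
print(f)   # ~1.0e-6, consistent with the tabulated 1.1e-6 .. 1.8e-6 ballpark
```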
\section{Optical counterparts and radio remnants}\label{sec:opttrans}
The radioactive decay of the synthesized r-process elements generates heat, which is deposited in the ejecta and powers an optical display~\citep{1998ApJ...507L..59L,2005astro.ph.10256K,2010MNRAS.406.2650M,2011ApJ...736L..21R,2011ApJ...738L..32G}. This electromagnetic counterpart of a NS merger is potentially observable with existing and upcoming optical surveys such as the Palomar Transient Factory, the Synoptic All Sky InfraRed Survey, the Panoramic Survey Telescope and Rapid Response System, and the Large Synoptic Survey Telescope (see e.g.~\citet{2009MNRAS.400.2070S} and~\citet{2012ApJ...746...48M} for a compilation of certain characteristics of these facilities). We also refer to~\citet{2011ApJ...734...96K} for a report on attempts to detect signatures of a radioactively powered transient in the light curve following a gamma-ray burst. The detection of such optical signals would provide valuable information on ejecta properties and the sky position of an event. As detailed in the next section, the peak luminosity, peak time, peak width, and effective temperature depend on the amount of ejecta and the expansion velocity. From an observation one might therefore derive these ejecta characteristics, which are otherwise only accessible by numerical simulations. Detecting radioactively powered emission and determining ejecta masses would consolidate the role of NS mergers in the enrichment of the Galaxy with heavy r-process elements. A precise localization of an unambiguously identified optical transient of a NS merger event would help to improve the sensitivity of gravitational-wave detections and would provide information about the host galaxy or environment. Moreover, if the distance scale of the events can be constrained, better observational limits on the merger rate will become available.
\subsection{Model}
The bolometric peak luminosity of an optical transient associated with a NS coalescence can be estimated by
\begin{equation}\label{eq:Lpeak}
L_{\mathrm{peak}}\approx 5\times 10^{41}~\mathrm{erg/s} \left(\frac{f}{10^{-6}} \right)\left(\frac{v}{0.1 c} \right)^{1/2}\left(\frac{M_{\mathrm{ej}}}{10^{-2}M_{\odot}} \right)^{1/2}
\end{equation}
with the average outflow velocity $v$, the ejecta mass $M_{\mathrm{ej}}$, and the heating efficiency $f$ already introduced in Sect.~\ref{ssec:heat}~\citep{1982ApJ...253..785A,1998ApJ...507L..59L,2010MNRAS.406.2650M}. The time of the peak luminosity and the effective temperature at the time of the maximum luminosity can also be expressed as functions of the outflow velocity, the ejecta mass, and $f$:
\begin{equation}
t_{\mathrm{peak}}\approx 0.5~\mathrm{d} \left(\frac{v}{0.1 c} \right)^{-1/2}\left(\frac{M_{\mathrm{ej}}}{10^{-2}M_{\odot}} \right)^{1/2},
\end{equation}
\begin{equation}\label{eq:Tpeak}
T_{\mathrm{peak}}\approx 1.4\times 10^4~\mathrm{K} \left(\frac{f}{10^{-6}} \right)^{1/4} \left(\frac{v}{0.1 c} \right)^{-1/8}\left(\frac{M_{\mathrm{ej}}}{10^{-2}M_{\odot}} \right)^{-1/8}
\end{equation}
(see~\citet{1998ApJ...507L..59L} and~\citet{2010MNRAS.406.2650M}). Within this model the width $\Delta t_{\mathrm{peak}}$ of the luminosity peak is proportional to $t_{\mathrm{peak}}$~\citep{bookArnett,2009ApJ...703.2205K}. On the basis of the one-zone model in~\citet{2011ApJ...738L..32G} we find that the full width at half maximum can be very well approximated as
\begin{equation}
\Delta t_{\mathrm{peak}} \approx 2.5\, t_{\mathrm{peak}}.
\end{equation}
The above formulas can be understood from some general considerations. For larger $M_{\mathrm{ej}}$ or smaller $v$ the ejecta need a longer time to become transparent. If the outflow gets optically thin at a later time, expansion cooling reduces the effective temperature at $L_{\mathrm{peak}}$.
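The scaling relations above are easily encoded; this sketch evaluates them for an illustrative parameter set (the chosen $f$, $v$, $M_{\mathrm{ej}}$ values are representative, not a specific model row) and verifies that the exponents are mutually consistent with the Stefan-Boltzmann relation $L\propto T^4 R^2$ for $R\approx v\,t_{\mathrm{peak}}$:

```python
from fractions import Fraction as F

# Scaling laws for the optical transient, with prefactors as quoted:
def l_peak(f, v, mej):
    """Bolometric peak luminosity [erg/s]; v in units of c, mej in Msun."""
    return 5e41 * (f / 1e-6) * (v / 0.1) ** 0.5 * (mej / 1e-2) ** 0.5

def t_peak(v, mej):
    """Peak time [days]."""
    return 0.5 * (v / 0.1) ** -0.5 * (mej / 1e-2) ** 0.5

def temp_peak(f, v, mej):
    """Effective temperature at peak [K]."""
    return 1.4e4 * (f / 1e-6) ** 0.25 * (v / 0.1) ** -0.125 * (mej / 1e-2) ** -0.125

# Illustrative symmetric-merger case: f = 1.5e-6, v = 0.3c, M_ej = 4e-3 Msun
print(l_peak(1.5e-6, 0.3, 4e-3))      # ~8.2e41 erg/s
print(t_peak(0.3, 4e-3) * 24)         # ~4.4 h
print(temp_peak(1.5e-6, 0.3, 4e-3))   # ~1.5e4 K

# Consistency of the exponents with L ~ T^4 * R^2, R = v * t_peak:
T_exp = {'f': F(1, 4), 'v': F(-1, 8), 'M': F(-1, 8)}  # exponents in T_peak
t_exp = {'f': F(0), 'v': F(-1, 2), 'M': F(1, 2)}      # exponents in t_peak
L_exp = {k: 4 * T_exp[k] + 2 * t_exp[k] for k in T_exp}
L_exp['v'] += 2                                       # explicit v^2 from R^2
print(L_exp)  # f exponent 1, v and M exponents 1/2 -> matches the L_peak law
```

The exponent check makes explicit why larger $M_{\mathrm{ej}}$ raises $L_{\mathrm{peak}}$ even though it lowers $T_{\mathrm{peak}}$.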
Since, however, the increase of the emission radius $R_{\mathrm{peak}}\approx v\times t_{\mathrm{peak}}$ dominates over the decrease of the temperature in the Stefan-Boltzmann law, $L_{\mathrm{peak}}\propto T_{\mathrm{peak}}^4 R_{\mathrm{peak}}^2$, the peak luminosity increases with larger $M_{\mathrm{ej}}$ and higher $v$. The simple scaling laws of Eqs.~\ref{eq:Lpeak} to~\ref{eq:Tpeak} for the properties of optical transients are confirmed by calculations in~\citet{2010MNRAS.406.2650M},~\citet{2011ApJ...736L..21R} and~\citet{2011ApJ...738L..32G}.
\subsection{EoS dependence}
The panels on the left side of Fig.~\ref{fig:lpeakr135} display the peak luminosity, the peak time, and the effective temperature of electromagnetic counterparts of 1.35-1.35~$M_{\odot}$ mergers for different EoSs, which are characterized by the radii of the corresponding 1.35~$M_{\odot}$ NSs. A clear dependence of the optical display on the compactness of the NSs can be seen, with soft EoSs yielding brighter transients, which peak on longer timescales with a lower effective temperature. These relatively clear relations are mainly a consequence of the strong EoS impact on $M_{\mathrm{ej}}$ (Fig.~\ref{fig:mejr135}), while the average outflow velocity varies only by a factor of 3 (see Fig.~\ref{fig:v135135}). The average expansion velocities tend to be higher for EoSs which yield smaller $R_{1.35}$ (see Fig.~\ref{fig:v135135}). This is consistent with the earlier reasoning that more compact NSs lead to more violent collisions. However, the relation between the average outflow velocity and $R_{1.35}$ is not very tight. For symmetric binaries the outflow velocities (measured 10~ms after the merging, at which time the asymptotic values are fairly well determined) vary, with larger scatter, between 0.16 and 0.45 times the speed of light.
\begin{figure*} \includegraphics[width=8.9cm]{f11a.eps}\includegraphics[width=8.9cm]{f11b.eps}\\ \includegraphics[width=8.9cm]{f11c.eps}\includegraphics[width=8.9cm]{f11d.eps}\\ \includegraphics[width=8.9cm]{f11e.eps}\includegraphics[width=8.9cm]{f11f.eps}\\ \caption{\label{fig:lpeakr135}Estimated properties of the optical transients for symmetric (1.35-1.35~$M_{\odot}$) mergers (left panels) and asymmetric (1.2-1.5~$M_{\odot}$) mergers (right panels) for different EoSs characterized by the NS radius $R_{1.35}$. The symbols have the same meanings as in Fig.~\ref{fig:mejr135}. The top panels show the bolometric peak luminosity, the middle panels the corresponding peak timescale, and the bottom panels the effective temperature at the time of the peak luminosity.} \end{figure*} \begin{figure} \includegraphics[width=8.9cm]{f12.eps} \caption{\label{fig:v135135}Average ejecta expansion velocity for 1.35-1.35~$M_{\odot}$ mergers (with symbols analogous to Fig.~\ref{fig:mejr135}) and for 1.2-1.5~$M_{\odot}$ mergers (red squares) for different EoSs characterized by the corresponding radius $R_{1.35}$ of a nonrotating NS with a mass of 1.35~$M_{\odot}$.} \end{figure} An EoS dependence of the optical counterpart properties is also observed for asymmetric 1.2-1.5~$M_{\odot}$ binaries (right side of Fig.~\ref{fig:lpeakr135}). As in the case of equal-mass mergers, EoSs with smaller $R_{1.35}$ lead to more mass ejection and therefore to more luminous events with longer $t_{\mathrm{peak}}$ and lower $T_{\mathrm{eff}}$. Compared to symmetric binaries, asymmetric setups generally produce brighter transients, which reach their peak luminosities on longer timescales and therefore at lower effective temperatures, because their ejecta masses are higher whereas the expansion velocities are comparable to those of symmetric systems described by the same EoS (see Fig.~\ref{fig:v135135}). For all considered quantities the asymmetric models exhibit a milder EoS dependence.
Plotting the peak luminosity as a function of the binary mass ratio and the total binary mass for the NL3, DD2 and SFHO EoSs reveals a qualitatively similar behavior as for the ejecta masses shown in Fig.~\ref{fig:binpara3eos}. The relations shown in Fig.~\ref{fig:lpeakr135} suggest the possibility of constraining NS radii, and thus the high-density EoS, from observations of optical transients associated with NS mergers. Optimally, such a detection could be supplemented by a gravitational-wave measurement, which provides the involved binary masses, the distance, and the merger time. But even without an associated gravitational-wave signal the observable features of a transient may have the potential to yield constraints on the NS EoS. The combinations of $L_{\mathrm{peak}}$, $t_{\mathrm{peak}}$, and $T_{\mathrm{eff}}$ vary systematically with the NS properties. For instance, a low peak luminosity, a small peak width, and a high effective temperature imply a large NS radius.
\subsection{Implications for observations}
Symmetric 1.35-1.35~$M_{\odot}$~binaries are predicted to be the most common configurations and thus are likely to be the ones first and most frequently observed. Unfortunately, the 1.35-1.35~$M_{\odot}$~systems yield the smallest ejecta masses and thus the lowest luminosities and shortest peak timescales. The peak widths are important for estimating the prospects of blind searches, whereas the peak time sets the scale for the response time after a gravitational-wave trigger. The nearly complete coverage of EoS possibilities by our survey allows us to determine the possible range of signal properties of optical transients associated with NS merger events. From our survey we find that the optical peak luminosity of a 1.35-1.35~$M_{\odot}$~NS merger should be expected to lie between about $3\times 10^{41}$~erg/s and $14\times 10^{41}$~erg/s, corresponding to absolute bolometric magnitudes of $M=-15.0$ and $M=-16.7$.
The peak times range from only 2~hours to 7~hours, and the duration of the emission is expected to be between 4.8~hours and 18~hours, depending on the high-density EoS. Note that our models generally yield fainter and shorter transients than typical estimates based on Newtonian models, which in general obtain higher ejecta masses and lower average expansion velocities~\citep{2011ApJ...736L..21R,2012arXiv1206.2379K,2012arXiv1204.6240R,2012arXiv1204.6242P,2012arXiv1210.6549R} (see also the discussion in Sect.~\ref{sec:masses}). While for the peak luminosity these differences lead to partially compensating effects, the timescales are more strongly affected. For symmetric binaries even the maximum peak time of about 7~hours found in our sample is well below most predictions based on Newtonian models~\citep{2010MNRAS.406.2650M,2011ApJ...736L..21R,2012arXiv1204.6242P,2012arXiv1204.6240R}. As shown in Fig.~\ref{fig:ejectageo}, during the early stages of the expansion the ejecta exhibit a fair asymmetry between polar and equatorial directions. The formulas used in this section for estimating the properties of optical counterparts assume spherically symmetric outflows. It remains to be explored whether the donut-like shape visible in Fig.~\ref{fig:ejectageo} persists at late times or whether the outflow becomes more symmetric by the time the peak of the optical display occurs. Multidimensional radiation transport calculations coupled to long-term hydrodynamical simulations are required to determine to what extent the simplified emission model provides reliable estimates of the observable properties of the electromagnetic transients in dependence on the observer direction.
\subsection{Radio flares}
Another potentially observable phenomenon connected with NS mergers is radio emission that is produced by the interaction of the outflowing material with the ambient medium~\citep{2011Natur.478...82N,2012arXiv1204.6242P,2012arXiv1204.6240R}.
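The EoS-related spread in radio-flare brightness quoted below follows from combining the kinetic-energy and velocity ranges of our models with the peak-flux scaling $F_{\mathrm{peak}}\propto E_{\mathrm{kin}} v^{2.5}$ of~\citet{2012arXiv1204.6242P}; a quick numeric sanity check:

```python
# Spread in radio peak flux, assuming peak flux ~ E_kin * v**2.5 and the
# model ranges E_kin = 6e49 .. 1e51 erg and v = 0.16c .. 0.45c (kinetic
# energy and velocity are correlated, so the extremes combine).
e_kin_lo, e_kin_hi = 6e49, 1e51      # erg
v_lo, v_hi = 0.16, 0.45              # in units of c

spread = (e_kin_hi / e_kin_lo) * (v_hi / v_lo) ** 2.5
print(round(spread))   # ~220, i.e. a factor of roughly 200
```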
The ejecta properties which determine the appearance of a radio remnant are the kinetic energy and the outflow velocity. The peak flux density was computed to be proportional to the total kinetic energy $E_{\mathrm{kin}}$ of the outflow and to the (initial) outflow velocity $v$ to a power of about 2.5~\citep{2012arXiv1204.6242P}. For symmetric mergers we find values for the kinetic energy between $6\times 10^{49}$~erg and $10^{51}$~erg (Fig.~\ref{fig:ekin}), whereas the average outflow velocities vary from 0.16 to 0.45 times the speed of light (Fig.~\ref{fig:v135135}). The kinetic energy scales well with the expansion velocity, i.e., the models with the highest outflow velocities also yield the highest kinetic energies, and configurations with smaller $v$ result in lower kinetic energies. This implies that the peak flux of radio remnants is uncertain by a factor of 200 because of the variations in the kinetic energy and expansion velocity associated with the incomplete knowledge of the NS EoS. Similar values are found for asymmetric mergers. \begin{figure} \includegraphics[width=8.9cm]{f13.eps} \caption{\label{fig:ekin}Kinetic energy of the ejecta for 1.35-1.35~$M_{\odot}$ mergers (with symbols analogous to Fig.~\ref{fig:mejr135}) and for 1.2-1.5~$M_{\odot}$ mergers (red squares) for different EoSs characterized by the corresponding radius $R_{1.35}$ of a nonrotating NS with a mass of 1.35~$M_{\odot}$.} \end{figure}
\section{Summary and conclusions}\label{sec:sum}
We have performed relativistic hydrodynamical simulations of NS mergers to investigate the mass ejection, the nucleosynthesis outcome, and the properties of associated optical transients. The main goal of this study was to explore the systematics of the EoS dependence of these aspects by employing a large set of candidate EoSs, while focusing mostly on binaries with two 1.35~$M_{\odot}$ NSs and on asymmetric 1.2-1.5~$M_{\odot}$~systems. The unbound ejecta mass is strongly affected by the adopted EoS.
We find that the NS compactness is the crucial EoS parameter determining the ejecta properties. Using the radius $R_{1.35}$ of a nonrotating 1.35~$M_{\odot}$ NS to characterize different EoSs, we find that ``softer'' EoSs which yield more compact NSs tend to produce more ejecta. The ejecta masses are between $10^{-3}~M_{\odot}$ and $1.5\times 10^{-2}~M_{\odot}$, depending on the EoS and binary mass ratio (Figs.~\ref{fig:mejr135} and~\ref{fig:binpara3eos}). Most of the unbound material originates from the contact interface between the colliding stars for symmetric as well as asymmetric binaries. In the latter case, about 25 per cent of the ejecta is shed from the outer end of the spiral-arm-like tail into which the lower-mass component is stretched during its final approach to collision with the more massive companion. A qualitative and quantitative agreement of our SPH simulations (employing the conformal flatness approximation) with fully relativistic grid-based simulations is found, whereas considerable differences compared to Newtonian models concerning the origin and the amount of ejecta are observed. The pronounced spiral arms, for example, which form in Newtonian simulations during the merging of symmetric binaries and whose mass stripping dominates the ejecta, are absent in relativistic mergers of equal-mass NSs. Newtonian models therefore tend to produce considerably higher ejecta masses. When temperature effects are mimicked by adding a thermal ideal-gas component with a constant ideal-gas index $\Gamma_{\mathrm{th}}$ to EoS models which are provided as zero-temperature barotropes, the best match of the ejecta masses with fully consistent calculations is achieved for a relatively low value of $\Gamma_{\mathrm{th}}=1.5$. This is in conflict with the values of 1.8 or 2, which have been widely used and work well for gravitational-wave determinations~\citep{2012PhRvD..86f3001B,2010PhRvD..82h4043B}.
This can be understood from the fact that the gravitational-wave signal is produced by the bulk of the merger mass in the high-density regime, whereas the ejecta depend on the thermodynamics of lower-density matter expanding away from the colliding stars and being accelerated by pressure forces. The binary parameters have qualitatively the same influence on the ejecta masses for all investigated EoSs. A larger binary mass asymmetry leads to a strong increase of the mass of unbound matter, whereas a higher total binary mass results in larger ejecta masses for asymmetric mergers but only a weak increase of $M_{\mathrm{ej}}$ for symmetric systems (Fig.~\ref{fig:binpara3eos}). The occurrence of a prompt collapse of the merger remnant is associated with a significant drop in the ejecta mass. For a given EoS (and no prompt collapse) the smallest amount of ejecta is, to a good approximation, produced by 1.35-1.35~$M_{\odot}$ binaries, while the ejecta mass can be up to a factor of ten higher for other binary configurations. For soft EoSs the ejecta masses show steeper gradients in the binary parameter space ($q,~M_{\mathrm{tot}}$). Nuclear network calculations show that r-process elements with mass numbers above 130 are robustly produced in the ejecta of NS mergers for all investigated EoSs. The vast majority of the ejected material is fission recycled, producing a final mass-integrated abundance pattern that closely resembles the solar composition of r-process elements. The robustness with respect to variations of the high-density EoS confirms that NS mergers are a very promising source of r-process elements. For some EoS models which do not provide the temperature dependence consistently, the hydrodynamical simulations are based on an approximate treatment of thermal effects and the nucleosynthesis calculations rely on an estimate of the temperature in the ejecta.
Also in these cases r-process elements are produced with a solar-like distribution, demonstrating the insensitivity to the exact temperature value at the beginning of the network calculations. Moreover, the abundance patterns are not affected by asymmetries in the binary setup. This remarkable insensitivity of the nucleosynthesis outcome can be understood as a consequence of the large neutron excess in the ejected inner-crust material of the merging NSs, which allows for fission recycling in essentially all considered conditions. Folding our results with the binary population, we identify the results of 1.35-1.35~$M_{\odot}$ mergers as a good approximation for the average ejecta mass per merger event. The main uncertainty in the average ejecta mass per merger is therefore associated with the incomplete knowledge of the NS EoS, which implies variations in the average ejecta mass of about a factor of 10. The observed abundance of r-process matter in the inventory of our Galaxy can be accounted for with a NS merger rate that is compatible with current predictions based on population synthesis and pulsar observations~\citep{2010CQGra..27q3001A}. Final conclusions, however, require a more precise determination of the merger rate, e.g. by gravitational-wave detections or observations of electromagnetic counterparts. Our work implies that, in addition to more accurate information on the merger rate, better constraints on the high-density EoS are needed to decide whether NS mergers are a major (or the dominant) source of r-process elements. Moreover, our simulations in combination with estimates of the Galactic r-process material provide independent evidence that the Galactic merger rate cannot be higher than approximately $4\times 10^{-4}$ events per year if NS mergers are not to overproduce heavy r-nuclei compared to observations.
The nucleosynthesis calculations of our survey also provide important information on the production ratios of certain isotopes which are used for nucleocosmochronometry. For instance, the ratio of ${}^{232}$Th to ${}^{238}$U is found to be about 1.65, with only small variations depending on the high-density EoS and the binary configuration (Tab.~\ref{tab:nucleo}). Using this result we derived ages of metal-poor stars which are consistent with other age estimates. This implies that NS mergers are not excluded as r-process element sources for metal-poor stars, and the production ratios provided here for the first time in the NS merger context should be taken into account in stellar age estimates as long as mergers cannot be excluded as r-process sites in the early Galactic history. Just as the ejecta masses of binary NS mergers exhibit a strong sensitivity to the properties of the nuclear EoS and thus to the radius $R_{1.35}$ of the merging stars, we also predict the optical transients powered by the radioactive energy release in the ejecta to depend on the compactness of the binary components. EoSs which lead to smaller NS radii produce more ejecta and therefore cause brighter optical counterparts, which peak on longer timescales with longer durations and lower effective temperatures. On the basis of our extensive survey of EoSs, which suggests clear correlations between observable features (luminosity, peak timescale, effective temperature) and NS radii, we propose that optical observations of transients associated with NS mergers could yield valuable constraints on the NS EoS. The very broad range of possibilities included in our EoS sample allows us to bracket the expected range of signal features of optical counterparts associated with NS mergers. Optical transients of 1.35-1.35~$M_{\odot}$ mergers should (at least) reach an absolute bolometric peak magnitude between $-15.0$ and $-16.7$ ($3 \times 10^{41}$~erg/s and $14 \times 10^{41}$~erg/s).
Depending on the high-density EoS the peak times vary from 2 to 7 hours, implying durations of about 4 to 18~hours, whereas effective temperatures between $1.3\times 10^4$~K and $1.9\times 10^4$~K can be expected. We emphasize that the peak luminosities, peak times, and peak widths of the optical counterparts are found to be considerably lower in our analysis than in earlier investigations based on Newtonian models~\citep{2010MNRAS.406.2650M,2011ApJ...736L..21R,2012arXiv1204.6242P,2012arXiv1204.6240R}. The reduction is a consequence of the smaller ejecta masses, especially for symmetric binaries. Because of the shorter peak time and duration of the optical transient suggested by relativistic merger results, we also find a smaller fraction $f$ of radioactive decay energy relative to the rest-mass energy of the ejecta. While Newtonian merger models yield $f\sim 3\times 10^{-6}$ at the time of the luminosity peak~\citep{2010MNRAS.406.2650M}, we obtain $f\sim 1.5\times 10^{-6}$ with little sensitivity to the EoS and the binary parameters. For different EoSs considerable differences are also found in the properties of radio remnants. Based on our sample of models we estimate an uncertainty of up to a factor of 200 in the theoretical predictions of the brightness of these events. Future work should address a variety of issues. The hydrodynamical models of NS mergers should include magnetic fields, and the effects of neutrino interactions should be explored. Our results also need to be confirmed by fully relativistic merger simulations. The properties of the emitted electromagnetic radiation should be computed for detailed multi-dimensional outflow models, including the corresponding nuclear network calculations to determine the composition and heating. Radiative transfer calculations will have to be performed, employing appropriate opacities of r-process elements, to study the observational appearance of the potentially anisotropic ejecta dependent on the viewing direction.
In this context it will also be important to determine the contribution of mass ejection from the secular evolution of the merger remnant (a black hole-torus system or hypermassive NS), which will lose mass through neutrino-driven and magnetohydrodynamical outflows. The corresponding matter will increase the ejecta mass and will ultimately have to be taken into account for reliable predictions of the properties of electromagnetic counterparts of NS mergers. The robustness of the nucleosynthesis outcome has to be explored with respect to variations connected to uncertainties of the nuclear reaction rates. Further work is also needed to address how NS mergers as r-process sources fit into chemical evolution scenarios of the Milky Way, which should explain the observations of r-element enhanced metal-poor stars. Finally, the capabilities of various observational facilities have to be evaluated in view of the bounds on the observable features set by our survey. \begin{acknowledgments} We thank Matthias Hempel for helpful discussions and for providing his EoS tables. This work was supported by the Sonderforschungsbereich Transregio 7 ``Gravitational Wave Astronomy'' and the Cluster of Excellence EXC 153 ``Origin and Structure of the Universe'' of the Deutsche Forschungsgemeinschaft, and by CompStar, a research networking programme of the European Science Foundation. S.G.\ is F.R.S.-FNRS Research Associate. Computing resources provided by the Rechenzentrum Garching of the Max-Planck-Gesellschaft and the Leibniz-Rechenzentrum Garching are acknowledged. \end{acknowledgments}
\section{Introduction} \label{sec1} The transverse collective flow of particles is an important characteristic of ultrarelativistic heavy-ion collisions because the flow is able to carry information about the early stage of the reaction. In particular, the collective flow is very sensitive to changes of the equation of state (EOS), e.g., during the quark-hadron phase transition. The azimuthal distribution of particles can be cast \cite{VoZh96,PoVo98} in the form of a Fourier series \begin{equation} \displaystyle E \frac{d^3 N}{d^3 p} = \frac{1}{\pi} \frac{d^2 N}{dp_t^2 dy} \left[ 1 + \sum_{n=1}^{\infty} 2 v_n \cos(n\phi) \right] . \label{eq1} \end{equation} Here $\phi$, $p_t$, and $y$ are the azimuthal angle, the transverse momentum, and the rapidity of a particle, respectively. The unity in the brackets represents the isotropic radial flow, whereas the sum of harmonics refers to the anisotropic flow. The first two harmonics of the anisotropic flow, dubbed directed flow $v_1$ and elliptic flow $v_2$, have been extensively studied both experimentally and theoretically in the last 15 years (see, e.g., \cite{VPS10} and references therein), while the systematic study of higher harmonics began quite recently \cite{qm11_phenix,qm11_cms,qm11_atlas,qm11_alice}. In the present paper we investigate the ratio $R = v_4/v_2^2$ in heavy-ion collisions at energies of the Relativistic Heavy Ion Collider (RHIC) ($\sqrt{s} = 200${\it A}~GeV) and the Large Hadron Collider (LHC) ($\sqrt{s} = 2.76${\it A}~TeV). Interest in this study was raised by the obvious discrepancy between the theoretical estimates and the experimental measurements. On the one hand, the exact theoretical result for hydrodynamics is $v_4/v_2^2 = 0.5$ for a thermal freeze-out distribution \cite{BO06}. On the other hand, it was soon found in RHIC experiments \cite{v4v2_star,v4v2_phenix} that the measured ratio $R$ exceeded the theoretically predicted one by a factor of 2.
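Eq.~(\ref{eq1}) implies $v_n=\langle\cos(n\phi)\rangle$ by orthogonality of the harmonics. A minimal sketch of extracting $v_2$ and $v_4$ from sampled angles follows; it assumes a known, fixed reaction plane at $\phi=0$ (experiments instead use event-plane or cumulant methods) and illustrative input values:

```python
import math
import random

random.seed(1)

def sample_phi(v2, v4):
    """Draw phi from dN/dphi ~ 1 + 2 v2 cos(2 phi) + 2 v4 cos(4 phi)
    by rejection sampling (reaction plane fixed at phi = 0)."""
    fmax = 1.0 + 2.0 * v2 + 2.0 * v4
    while True:
        phi = random.uniform(-math.pi, math.pi)
        f = 1.0 + 2.0 * v2 * math.cos(2.0 * phi) + 2.0 * v4 * math.cos(4.0 * phi)
        if random.uniform(0.0, fmax) < f:
            return phi

# orthogonality of the harmonics gives v_n = <cos(n phi)>:
phis = [sample_phi(0.10, 0.005) for _ in range(200_000)]
v2 = sum(math.cos(2.0 * p) for p in phis) / len(phis)
v4 = sum(math.cos(4.0 * p) for p in phis) / len(phis)
print(v2, v4)   # close to the input values 0.10 and 0.005
```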
Both the STAR and the PHENIX Collaborations have reported that $R$ is rather close to unity for all identified particles in broad ranges of centrality, $10\% \leq \sigma/\sigma_{geo} \leq 70\%$, and transverse momentum, $p_T \geq 0.5$\,GeV/$c$. For smaller $p_T$ the ratio seems to exceed the value of 1. Note also that the PHENIX data are about 10$-$15\% below the STAR ones. In Ref.~\cite{GO10} it was argued that the experimentally measured $R$ can be larger than 0.5 even if the ratio $v_4/v_2^2$ was exactly equal to 0.5 in each event. Such a distortion can be caused by event-by-event fluctuations. Namely, if the ratio $v_4/v_2^2$ is estimated not on an event-by-event basis but rather by averaging both $v_2$ and $v_4$ over the whole statistics, the event-by-event fluctuations will significantly increase the extracted value of the ratio. Calculations of $R$ at RHIC energies within both ideal and viscous hydrodynamics with different initial conditions \cite{LGO10} revealed that the ideal hydrodynamics provided better agreement with the data, although the STAR results remained slightly underpredicted. For the LHC the hydrodynamic calculations have predicted similar behavior with a slight increase at small transverse momenta \cite{LGO10}. The preliminary results obtained in Pb + Pb collisions at $\sqrt{s} = 2.76${\it A}~TeV favor a further increase of the $v_4/v_2^2$ ratio \cite{qm11_cms,qm11_atlas}. Moreover, this ratio is not a constant at $p_T \geq 0.5$\,GeV/$c$ but increases with rising transverse momentum. The first aim of the present paper is to study to what extent the hard processes, i.e., jets, can affect the ratio $R$ predicted by the hydrodynamic calculations. The second aim of the paper is the investigation of the fulfillment of the so-called number-of-constituent-quark (NCQ) scaling, observed initially for the partial elliptic flow of mesons and baryons at RHIC \cite{ncq_star,ncq_phen}.
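The inflation of the extracted ratio by event-by-event fluctuations can be demonstrated with a toy Monte Carlo (a Python sketch; the Gaussian mean and width of $v_2$ are illustrative assumptions, not fitted values):

```python
import numpy as np

rng = np.random.default_rng(7)

# Let v2 fluctuate from event to event, and assume the ideal-hydro relation
# v4 = 0.5 * v2**2 holds exactly in every single event.
v2 = rng.normal(0.06, 0.02, size=100_000)  # illustrative mean and spread
v4 = 0.5 * v2**2

R_event = np.mean(v4 / v2**2)            # event-by-event ratio: exactly 0.5
R_sample = np.mean(v4) / np.mean(v2)**2  # separate averaging before the ratio

print(R_event, R_sample)
# R_sample = 0.5 * (1 + sigma^2/mu^2) ~ 0.5 * (1 + 0.02^2/0.06^2) ~ 0.56 > 0.5
```

Since $\langle v_2^2\rangle = \langle v_2\rangle^2 + \sigma^2$, averaging the harmonics separately before taking the ratio inflates it by the relative variance of the fluctuations, exactly the mechanism argued in Ref.~\cite{GO10}.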
Despite the general expectations, the measurements show that the NCQ scaling is broken at LHC energies \cite{ncq_alice}. Thus, it would be interesting to elucidate the role of jets in the scaling violation. For these purposes we employ the {\small HYDJET}++ model \cite{hydjet++}, which couples parametrized hydrodynamics to jets. The soft part of the {\small HYDJET}++ simulated event represents the thermalized hadronic state where particle multiplicities are determined under the assumption of thermal equilibrium. Hadrons are produced on the hypersurface, represented by a parametrization of relativistic hydrodynamics with given freeze-out conditions. At the freeze-out stage the system breaks up into hadrons and their resonances. The table of baryon and meson resonances implemented in the model is quite extensive. This allows for better accounting of the influence of final-state interactions on the generated spectra. The hard part of the model accounts for the jet quenching effect, i.e., radiative and collisional losses of partons traversing the hot and dense medium. The contribution of soft and hard processes to the total multiplicity of secondaries depends on both the centrality of the collision and its energy and is tuned by model parameters to RHIC and LHC data. The paper is organized as follows. A brief description of the {\small HYDJET}++ is given in Sec.~\ref{sec2}. Section~\ref{sec3} presents the results of calculations of both $v_2$ and $v_4$ for charged particles in both considered reactions. The even components of the anisotropic flow and their ratio $R = v_4/v_2^2$ are studied in the interval $10\% \leq \sigma/\sigma_{geo} \leq 50\%$ in four centrality bins. In Sec.~\ref{sec4} the interplay between jets and decays of resonances is discussed, together with the role of resonance decays in the restoration of the number-of-constituent-quark scaling and that of jets in its violation. Conclusions are drawn in Sec.~\ref{sec5}.
\section{The HYDJET++ event generator} \label{sec2} The Monte Carlo event generator {\small HYDJET}++ \cite{hydjet++} was developed for fast but realistic simulation of hadron spectra in both central and non-central heavy-ion collisions at ultrarelativistic energies. It consists of two parts. The {\small FASTMC} \cite{fastmc1,fastmc2} event generator deals with the hydrodynamic evolution of the fireball. It describes the soft part of particle spectra with transverse momenta $p_T \leq 2$\,GeV/$c$. The hard processes are simulated by the {\small HYDJET} model \cite{hydjet}, which propagates jets through the hot and dense partonic medium. Both parts of the {\small HYDJET}++ generate particles independently. To allow for really fast generation of the spectra, the {\small FASTMC} employs parametrized hydrodynamics with a Bjorken-like or Hubble-like freeze-out surface parametrization. Since at ultrarelativistic energies the particle densities at the stage of chemical freeze-out are quite high, a separation of the chemical and thermal freeze-out is also implemented. The mean number of participating nucleons $N_{part}$ at a given impact parameter $b$ is calculated from the Glauber model of independent inelastic nucleon-nucleon collisions. After that, the value of the effective volume of the fireball $V_{eff}$, which is directly proportional to $N_{part}$, is generated. When the effective volume of the source is known, the mean multiplicity of secondaries produced at the spacelike freeze-out hypersurface is calculated. Parametrizations of the odd harmonics of the anisotropic flow are not implemented in the present version of {\small HYDJET}++, whereas the elliptic flow is generated by means of a hydro-inspired parametrization that depends on the momentum and spatial anisotropy of the emitting source. The model utilizes a very extensive table of ca.
360 baryon and meson resonances and their antiparticles, together with the decay modes and branching ratios taken from the {\small SHARE} particle decay table \cite{share}. After proper tuning of the free parameters, the {\small HYDJET}++ simultaneously reproduces the main characteristics of heavy-ion collisions at RHIC and at the LHC, such as hadron spectra and ratios, radial and elliptic flow, and femtoscopic momentum correlations. The multiple scattering of hard partons in the quark-gluon plasma (QGP) is generated by means of the {\small HYDJET} model. This approach takes into account the accumulated radiative (gluon radiation) and collisional energy losses experienced by a parton traversing the QGP. The shadowing effect \cite{Tyw_07} is implemented in the model as well. The {\small PYQUEN} routine \cite{pyquen} generates a single hard $NN$ collision. The simulation procedure includes the generation of the initial parton spectra with {\small PYTHIA} \cite{pythia} and production vertices at a given impact parameter, rescattering-by-rescattering simulation of the parton path length in a dense medium, radiative and collisional energy losses, and final hadronization for hard partons and in-medium emitted gluons according to the Lund string model \cite{lund}. Then, the full hard part of the event includes {\small PYQUEN} multi-jets, the number of which is generated around its mean value according to the binomial distribution. The mean number of jets produced in {\it A + A} events is the product of the number of binary $NN$ sub-collisions at a given impact parameter and the integral cross section of the hard process in $NN$ collisions with the minimal transverse momentum transfer, $p_T^{\rm min}$. Further details of the model can be found in Refs.~\cite{hydjet++,fastmc1,fastmc2,hydjet}. It is worth mentioning a recent important modification of the {\small HYDJET}++.
After the measurement of particle spectra in $pp$ collisions at the LHC it became clear that the set of model parameters employed by the {\small PYTHIA}~6.4 version had to be tuned. Several modifications have been proposed \cite{p_perugia,p_atlas}. The application of the standard {\small PYTHIA}~6.4 in the {\small HYDJET}++ led to a too early suppression of the elliptic flow of charged particles at intermediate transverse momenta in lead-lead collisions and, therefore, to the prediction of a weaker $v_2$ \cite{v2_prc09,sqm09} compared to the data. Recently, the {\small HYDJET}++ was modified \cite{hydjet_12} to implement the {\small Pro-Q20} tune of {\small PYTHIA}. In contrast to the calculations of elliptic flow presented in \cite{v2_prc09,sqm09,sqm11}, all simulations of Pb + Pb reactions at LHC energies in the present paper are performed with the upgraded {\small HYDJET}++. \section{$v_2$ and $v_4$ from hydrodynamics and from jets} \label{sec3} For the investigation of the second and the fourth flow harmonics, ca. 60\,000 gold-gold and ca. 50\,000 lead-lead minimum bias collisions have been generated at $\sqrt{s} = 200${\it A}~GeV and $\sqrt{s} = 2.76${\it A}~TeV, respectively. The transverse momentum dependences of $v_2$ and $v_4$ obtained for the centrality 20$-$30\% are shown in Fig.~\ref{fig1} for RHIC and in Fig.~\ref{fig2} for LHC energies. \begin{figure} \resizebox{\linewidth}{!}{ \includegraphics[scale=0.60]{v4_v2_Fig1.eps} } \caption{(Color online) Transverse momentum dependences (triangles) of (a) $v_2$ and (b) $v_4$ of charged hadrons calculated within the {\small HYDJET}++ for Au + Au collisions at $\sqrt{s} = 200${\it A}~GeV at centrality $\sigma/\sigma_{\rm geo} = 20 - 30\%$. Histograms show the flow of directly produced particles in hydro-calculations (dashed lines), the total hydrodynamic flow (solid lines), and the flow produced by jets (dotted lines).
\label{fig1} } \end{figure} \begin{figure} \resizebox{\linewidth}{!}{ \includegraphics[scale=0.60]{v4_v2_Fig2.eps} } \caption{(Color online) The same as Fig.~\protect\ref{fig1} but for Pb + Pb collisions at $\sqrt{s} = 2.76${\it A}~TeV. \label{fig2} } \end{figure} Together with the resulting distributions for $v_2(p_T)$ and $v_4(p_T)$ we present the separate contributions coming from (i) hadrons directly produced at the freeze-out hypersurface in the hydrodynamic part, (ii) direct and secondary hadrons created after the decays of resonances, and (iii) hadrons produced in the course of jet fragmentation. Let us briefly recall the main features of the $v_2(p_T)$ behavior in {\small HYDJET}++. The elliptic flow rises up to its maximum at intermediate $p_T$ around 2.5$-$3\,GeV/$c$ and then rapidly drops. This falloff is observed in experimental data as well. In the model its origin is traced to the interplay between the soft hydrolike processes and hard jets, as was studied in detail in \cite{v2_prc09,sqm09}. The ideal hydrodynamics demonstrates a continuous increase of the elliptic flow with rising transverse momentum. Because of the jet quenching, the jets also develop an azimuthal anisotropy that increases with $p_T$ too; however, this effect is quite weak and does not exceed a few percent. The particle yield as a function of the transverse momentum drops more rapidly for hydroproduced hadrons than for hadrons from jets. Therefore, above a certain $p_T$ threshold jet particles start to dominate the particle spectrum, thus leading to a weakening of the combined elliptic flow. A similar tendency is observed in Fig.~\ref{fig1} and Fig.~\ref{fig2} for the $v_4$ as well but, because of the quite weak signal in the hydrodynamic part, the falloff of the $v_4$ is not as pronounced as that of the elliptic flow.
As shown in Fig.~\ref{fig1}, decays of resonances can change the elliptic flow of directly produced hadrons with $p_T \leq 3$\,GeV/$c$ by 1$-$2\% at RHIC and by less than 1\% at the LHC; see Fig.~\ref{fig2}. For the $v_4$ the difference between the two histograms is negligible; i.e., resonance decays play a minor role for the soft parts of both $v_2(p_T)$ and $v_4(p_T)$ distributions. At $p_T \approx 2.5$\,GeV/$c$ jets come into play and dramatically change the shapes of the elliptic and hexadecapole flows. \begin{figure} \resizebox{\linewidth}{!}{ \includegraphics[scale=0.65]{v4_v2_Fig3.eps} } \caption{(Color online) $v_2(p_T)$ (full triangles) and $v_4(p_T)$ (full circles) for charged particles in {\small HYDJET}++ calculations of Au + Au collisions at $\sqrt{s} = 200${\it A}~GeV at centrality $\sigma/\sigma_{\rm geo}$ (a) 10$-$20\%, (b) 20$-$30\%, (c) 30$-$40\%, and (d) 40$-$50\%, respectively. Dashed lines show the hydrodynamic part of the calculations. Data from \cite{v4v2_phenix} are shown by open triangles ($v_2$) and open squares ($v_4$). \label{fig3} } \end{figure} \begin{figure} \resizebox{\linewidth}{!}{ \includegraphics[scale=0.65]{v4_v2_Fig4.eps} } \caption{(Color online) The same as Fig.~\protect\ref{fig3} but for Pb + Pb collisions at $\sqrt{s} = 2.76${\it A}~TeV. Experimental data are taken from \cite{alice_flow}. \label{fig4} } \end{figure} It is worth discussing here the details concerning the determination of the flow components in the experiment and in the model. In the {\small HYDJET}++ simulations the elliptic flow is connected to the eccentricity of the overlap volume of the colliding nuclei. No fluctuations in the location of nucleons within the overlap zone are considered. Therefore, the flow is determined with respect to the position of the true reaction plane. The next even component, $v_4$, is not parametrized in the present version of the model; i.e., the hexadecapole flow comes out here merely due to the elliptic flow.
Thus, it should also be settled by the position of the true reaction plane. Because of the absence of fluctuations and non-flow effects, the ratio $v_4/v_2^2$ obtained on an event-by-event basis equals that extracted by separate averaging of $v_4$ and $v_2$ over the whole simulated statistics. In the experiment the situation is more complex. For instance, in the standard event plane (EP) method the event flow vector $\vec{Q_n}$ for the $n$th harmonic is defined as (see \cite{VPS10} for details) \begin{eqnarray} \displaystyle \nonumber \vec{Q_n} &=& (Q_{n,x} , Q_{n,y}) = \left( \sum \limits ^{}_{i} w_i \cos{(n \phi_i)} , \sum \limits ^{}_{i} w_i \sin{(n \phi_i)} \right) \\ \label{eq2} &=& \left( Q_n \cos{(n \Psi_n)} , Q_n \sin{(n \Psi_n)} \right). \end{eqnarray} The quantities $w_i$ and $\phi_i$ are the weight and the azimuthal angle in the laboratory frame for the $i$th particle, respectively. From Eq.~(\ref{eq2}) it follows that the event plane angle $\Psi_n$ can be expressed via the {\it arctan2} function, which takes into account the signs of both vector components to place the angle in the correct quadrant, \begin{equation} \displaystyle \Psi_n = \arctan2(Q_{n,y} , Q_{n,x})/n \ . \label{eq3} \end{equation} The $n$th harmonic $v_n$ of the anisotropic flow at a given rapidity $y$, transverse momentum $p_T$, and centrality $\sigma/\sigma_{geo}$ is determined with respect to the $\Psi_n$ angle \begin{equation} \displaystyle v_n(y, p_T, \sigma/\sigma_{geo}) = \langle \cos{[n(\phi_i - \Psi_n)]} \rangle \label{eq4} \end{equation} by averaging $\langle \ldots \rangle$ over all particles in all measured events. It is easy to see that the event plane angle for the elliptic flow, $\Psi_2$, does not necessarily coincide with that for the hexadecapole flow, $\Psi_4$. To compare our model results with the experimental ones we need, therefore, data where the fourth harmonic is extracted with respect to the $\Psi_2$ rather than the $\Psi_4$ event plane angle.
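A minimal sketch of the event-plane procedure of Eqs.~(\ref{eq2})--(\ref{eq4}) — in Python, with equal weights $w_i = 1$ and a toy event whose reaction-plane angle and $v_2$ are made-up values — might look as follows:

```python
import numpy as np

rng = np.random.default_rng(5)

def event_plane_angle(phi, n, w=None):
    """Event-plane angle Psi_n from the flow vector Q_n (unit weights by default)."""
    w = np.ones_like(phi) if w is None else w
    q_x = np.sum(w * np.cos(n * phi))
    q_y = np.sum(w * np.sin(n * phi))
    # arctan2 keeps track of the signs of both components (correct quadrant)
    return np.arctan2(q_y, q_x) / n

# Toy event with a known reaction-plane angle and elliptic flow (illustrative values)
psi_true, v2_in = 0.7, 0.1
grid = np.linspace(-np.pi, np.pi, 10001)
pdf = 1 + 2 * v2_in * np.cos(2 * (grid - psi_true))
phi = rng.choice(grid, size=50_000, p=pdf / pdf.sum())

psi2 = event_plane_angle(phi, 2)                 # estimate of the event plane
v2_out = np.mean(np.cos(2 * (phi - psi2)))       # harmonic measured w.r.t. Psi_2
print(psi2, v2_out)  # close to 0.7 and 0.1 for this high-multiplicity toy event
```

For a real finite-multiplicity event the estimated $\Psi_n$ fluctuates around the true plane, and the measured harmonic has to be corrected for the event-plane resolution; this sketch uses a large multiplicity so that the resolution correction is negligible.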
To demonstrate the development of both $v_2$ and $v_4$ at different centralities, we display the flow harmonics for charged particles in heavy-ion collisions at RHIC and LHC energies in Figs.~\ref{fig3} and \ref{fig4}, respectively. The experimental data of the PHENIX (RHIC) and the ALICE (LHC) Collaborations are plotted on top of the simulations as well. One can see here that the {\small HYDJET}++ overestimates the elliptic flow of charged hadrons with transverse momenta $2\,{\rm GeV}/c \leq p_T \leq 4$\,GeV/$c$ in both reactions considered. This indicates that the simplified combination of ideal hydrodynamics and jets is probably sufficient to simulate the first two even harmonics of the anisotropic flow at $p_T \leq 2$\,GeV/$c$, whereas at higher transverse momenta other mechanisms, e.g., coalescence, should be taken into account for a better quantitative description of the flow behavior. The elliptic flow produced by the jet hadrons with $p_T \leq 2$~GeV/$c$ is almost zero. Because of the jet quenching, the flow increases to 3$-$5\% with rising transverse momentum; however, the jets alone cannot provide a strong flow signal, say $v_2 \approx 10\%$, even at LHC energies. Since the $v_4$ created by jets is also very small, it would be instructive to study how the admixture of jet hadrons can alter the $v_4/v_2^2$ ratio. \begin{figure} \resizebox{\linewidth}{!}{ \includegraphics[scale=0.65]{v4_v2_Fig5.eps} } \caption{(Color online) Ratio $v_4/(v_2)^2$ vs $p_T$ for charged particles in {\small HYDJET}++ calculations of Au + Au collisions at $\sqrt{s} = 200${\it A}~GeV at centrality $\sigma/\sigma_{\rm geo}$ (a) 10$-$20\%, (b) 20$-$30\%, (c) 30$-$40\%, and (d) 40$-$50\%, respectively. Full circles denote the hydro+jet calculations, open circles show only the hydro-part, and open squares indicate the rescaled experimental data (see text for details).
\label{fig5} } \end{figure} \begin{figure} \resizebox{\linewidth}{!}{ \includegraphics[scale=0.65]{v4_v2_Fig6.eps} } \caption{(Color online) The same as Fig.~\protect\ref{fig5} but for Pb + Pb collisions at $\sqrt{s} = 2.76${\it A}~TeV. \label{fig6} } \end{figure} The ratio $R = v_4/v_2^2$ as a function of transverse momentum is presented in Figs.~\ref{fig5} and \ref{fig6} for four different centralities in Au + Au collisions at RHIC and in Pb + Pb collisions at the LHC, respectively. The final result is compared here to the ratio obtained merely for hydro-like processes and to the experimental data. As was mentioned in \cite{GO10}, the measured ratio should be noticeably larger than 0.5. There are event-by-event fluctuations that increase $R$ even if both flow harmonics are determined by means of the $\Psi_2$ event plane angle. The increase occurs because of the averaging of both $v_2$ and $v_4$ over the whole event sample before taking the ratio. These fluctuations are absent in the {\small HYDJET}++; therefore, the data used for the comparison are properly reduced. See \cite{GO10,LGO10} for details. It is seen that the parametrized hydrodynamics with the extended table of resonances already provides $v_4/v_2^2 \approx 0.6$, which is higher than the theoretical value of $R = 0.5$. Jet particles increase this ratio further, to $R \approx 0.65$ at RHIC and $R \approx 0.7$ at the LHC. While the ratio $R$ is insensitive to the transverse momentum at $0.1\,{\rm GeV}/c \leq p_T \leq 3$\,GeV/$c$, at higher $p_T$ it increases with rising transverse momentum both in the model simulations and in the experiment, although the RHIC data favor a weaker dependence. A thorough study of this problem within the hydrodynamic model indicates \cite{LGO10} that neither the initial conditions nor the shear viscosity can account for the rise of the high-$p_T$ tail of the distribution. It appears that this rise can be attributed solely to the jet phenomenon.
At LHC energy the increase of $R$ with rising transverse momentum at $p_T \geq 3$\,GeV/$c$ is quite distinct. The difference between the model results and the data visible for semiperipheral collisions at $40\% \leq \sigma/\sigma_{geo} \leq 50\%$ can be partly explained by the imperfect description of the elliptic flow at $p_T \geq 2.5$\,GeV/$c$; see Fig.~\ref{fig4}. Also, the STAR results for the $v_2$ are about 15$-$20\% higher than the PHENIX data, and the {\small HYDJET}++ model is tuned to the averaged values provided by these two RHIC experiments. Nevertheless, the effect of hard processes is clear: The hydrodynamic part of the code yields a rather flat ratio $v_4/v_2^2$, whereas the jets provide the rise of the high-$p_T$ tail. \section{Number-of-constituent-quark scaling} \label{sec4} The number-of-constituent-quark (NCQ) scaling in the development of elliptic flow was first observed in Au + Au collisions at RHIC \cite{ncq_star,ncq_phen}. If the elliptic flow, $v_2$, and the transverse kinetic energy, $KE_T \equiv m_T - m_0$, of any hadron species are divided by the number of constituent quarks, i.e., $n_q = 3$ for a baryon and $n_q = 2$ for a meson, then the scaling in $v_2(KE_T)$ holds up to $KE_T/n_q \approx 1$\,GeV \cite{PHENIX}. The observation of the NCQ scaling seems to favor the idea that the elliptic flow is formed already on the partonic level. For instance, as pointed out in \cite{ncq_break}, the scaling is broken if hadrons are produced in the course of string fragmentation, whereas the process of quark coalescence leads to the emergence of the scaling. On the other hand, as was shown in Refs.~\cite{v2_prc09,sqm09}, the fulfillment of the NCQ scaling at ultrarelativistic energies depends strongly on the interplay between the decays of resonances and jets. Note that the breaking of the NCQ scaling at the LHC was observed experimentally in \cite{ncq_alice,ncq_alice2}.
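The construction of the scaling variables can be illustrated with a short sketch (Python; the $v_2$ values below are hypothetical numbers chosen to satisfy the scaling exactly, and the masses are in GeV):

```python
import math

def scaled(pt, m0, v2, n_q):
    """Return the NCQ-scaled coordinates (KE_T/n_q, v2/n_q) for one hadron species."""
    ke_t = math.sqrt(pt * pt + m0 * m0) - m0  # transverse kinetic energy m_T - m_0
    return ke_t / n_q, v2 / n_q

# Pion (meson, n_q = 2) and proton (baryon, n_q = 3) at illustrative pt points;
# the v2 inputs are made up so that v2/n_q coincides for both species.
x_pi, y_pi = scaled(pt=1.0, m0=0.140, v2=0.12, n_q=2)
x_p,  y_p  = scaled(pt=1.5, m0=0.938, v2=0.18, n_q=3)
print(x_pi, y_pi, x_p, y_p)
# both scaled flows equal 0.06, although the unscaled v2 and KE_T differ
```

If the scaling holds, mesons and baryons fall on a single curve in the $(KE_T/n_q,\,v_2/n_q)$ plane; the normalization to the proton flow used below then yields a ratio close to unity for all species.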
\begin{figure} \resizebox{\linewidth}{!}{ \includegraphics[width=0.75\textwidth]{v4_v2_Fig7.eps} } \caption{(Color online) Upper row: The $KE_T/n_q$ dependence of the elliptic flow for (a) direct hadrons, (b) hadrons produced both directly and from resonance decays, and (c) all hadrons produced in the {\small HYDJET}++ model for Au + Au collisions at $\sqrt{s} = 200${\it A}~GeV with centrality 20$-$30\%. Bottom row: The $KE_T/n_q$ dependence of the ratios $(v_2/n_q)\left/(v_2^p/3) \right.$ for (d) direct hadrons, (e) direct hadrons plus hadrons from the decays, and (f) all hadrons. \label{fig7} } \end{figure} \begin{figure} \resizebox{\linewidth}{!}{ \includegraphics[width=0.75\textwidth]{v4_v2_Fig8.eps} } \caption{(Color online) The same as Fig.~\protect\ref{fig7} but for Pb + Pb collisions at $\sqrt{s}=2.76${\it A}~TeV. \label{fig8} } \end{figure} To demonstrate the importance of both resonance decays and jets for the formation of the NCQ scaling, we plot the reduced functions $v_2^h/n_q (KE_T/n_q)$ for several hadronic species obtained in {\small HYDJET}++ simulations of heavy-ion collisions at RHIC (Fig.~\ref{fig7}) and at LHC (Fig.~\ref{fig8}) energies in the centrality bin 20$-$30\%. These distributions are then also normalized to the flow of protons, $v_2^h/n_q : v_2^p/3$, to see explicitly the degree of the scaling fulfillment. The study is subdivided into three steps. The flow of hadrons straight after the thermal freeze-out in the hydrodynamic calculations is displayed in the left windows. The central windows present this flow modified by the final-state interactions, i.e., decays of resonances. Finally, the right windows show the resulting flow of hadrons coming from all processes. At RHIC energy, at the given centrality the direct pions, protons, and kaons appear to be produced already obeying the scaling within the 5$-$10\% accuracy limit; see Figs.~\ref{fig7}(a) and \ref{fig7}(d).
The scaling also holds after decays of resonances, as demonstrated in Figs.~\ref{fig7}(b) and \ref{fig7}(e). Its fulfillment becomes slightly worse when hadrons from jets are taken into account; however, the NCQ scaling remains valid within 10\% accuracy at least for the three main hadron species. The situation changes drastically for the collisions at the LHC. Here the spectra of directly produced particles do not possess any scaling properties, as one can see in Figs.~\ref{fig8}(a) and \ref{fig8}(d). After final-state interactions the scaling conditions for hadrons in the hydrodynamic simulations are restored, as displayed in Figs.~\ref{fig8}(b) and \ref{fig8}(e). Even $\phi$ mesons follow the unique trend. Why? The spectra of many light hadrons, especially pions and protons, receive feed-down from heavy resonances, whereas the spectrum of the $\phi$ remains unchanged. The resonance boost makes the elliptic flows of light hadrons harder. As a result, the NCQ scaling is fulfilled in a broad range of $KE_T/n_q$ in the hydro sector of the model. In contrast, hard processes cause significant distortions of particle spectra and lead to violation of the scaling conditions; see Figs.~\ref{fig8}(c) and \ref{fig8}(f), in accordance with experimental observations \cite{ncq_alice,ncq_alice2}. \section{Conclusions} \label{sec5} The formation of the elliptic $v_2$ and hexadecapole $v_4$ flows of hadrons in Au + Au collisions at $\sqrt{s} = 200${\it A}~GeV and in Pb + Pb collisions at $\sqrt{s} = 2.76${\it A}~TeV has been studied within the {\small HYDJET}++ model. This model combines parametrized hydrodynamics with hard processes (jets). Therefore, the main aim was to investigate the role of the interplay between soft and hard processes in the development of flow. Several features have been observed. First, the jets are found to increase the ratio $R = v_4 / v_2^2$ for both considered heavy-ion reactions. Second, jets lead to the rise of the high-$p_T$ tail of the ratio $R$.
Such behavior is observed experimentally but cannot be reproduced by conventional hydro models relying on ideal or viscous hydrodynamics. Third, the resonance feed-down significantly enhances the flow of light hadrons and modifies their spectra toward the fulfillment of the number-of-constituent-quark scaling. The flow of particles produced in jet fragmentation is quite weak; thus, jets work against the scaling. Due to the interplay of the resonance and jet contributions, the NCQ scaling works well only at certain energies, where jets are not abundant. Because the jet influence increases with rising collision energy, only approximate NCQ scaling is observed at the LHC, despite the fact that the scaling holds for the pure hydrodynamic part of the hadron spectra. At higher collision energies the fulfillment of the scaling should become even worse. \begin{acknowledgments} Fruitful discussions with I.~Lokhtin, L.~Malinina, I.~Mishustin, and A.~Snigirev are gratefully acknowledged. We are thankful to J.-Y.~Ollitrault and K.~Redlich for bringing the $v_4/v_2^2$ problem to our attention. This work was supported in part by the QUOTA Program and the Norwegian Research Council (NFR) under Contract No. 185664/V30. \end{acknowledgments}
\section{Introduction} A theory of gravity has been proposed in \cite{Bueno:2016xff} which is the most general up-to-cubic-order-in-curvature theory of gravity that shares its graviton spectrum with the Einstein theory on a constant curvature background. This Einsteinian cubic theory of gravity is nontrivial in four dimensions and has therefore recently attracted considerable interest \cite{Quiros:2020uhr,Marciu:2020ysf,KordZangeneh:2020qeg,Burger:2019wkq,Jiang:2019kks,Emond:2019crr,Erices:2019mkd,Mehdizadeh:2019qvc,Li:2019auk,Arciniega:2018fxj,Bueno:2018xqc,Bueno:2017sui,Hennigar:2017ego,Bueno:2017qce}. The numerical solution representing the asymptotically flat black hole in this theory was obtained in \cite{Hennigar:2016gkm,Bueno:2016lrh}, and an analytical approximation of the black hole metric was obtained in \cite{Hennigar:2018hza} using the general parametrization for spherically symmetric metrics suggested in \cite{Rezzolla:2014mua}. Further properties of this black hole, such as gravitational lensing and particle motion, were studied in \cite{Poshteh:2018wqy,Hennigar:2018hza}. Theories with higher curvature corrections form an important class of theories which also appear in the low-energy limit of string theory, and, therefore, black holes have been extensively investigated in such theories of gravity (see, for example, \cite{Kanti:1995vq} and references therein). One of the most important characteristics of the black hole geometry is its quasinormal spectrum \cite{Konoplya:2011qq}. Quasinormal modes dominate in the late-time (ringdown) phase of the black hole's response to external perturbations. They are currently observed when detecting gravitational waves from astrophysical black holes \cite{Abbott:2016blz,TheLIGOScientific:2016src}.
At the same time, the current uncertainty in measurements of the mass and angular momentum of black holes leaves considerable room for alternative theories of gravity \cite{alternative}, and the study of quasinormal spectra of black holes in various alternative theories of gravity is a necessary tool for further constraining these theories. Another characteristic, essential for primordial and sufficiently small black holes, is Hawking radiation in the vicinity of the black hole horizon \cite{Hawking:1974sw}. Higher curvature corrections could represent quantum corrections to the black hole geometry and are, therefore, important in the regime of intensive Hawking evaporation. As was shown for black holes with quadratic corrections in curvature, Hawking radiation is considerably affected by higher curvature corrections \cite{Konoplya:2020cbv,Konoplya:2019ppy,Zhang:2020qam,Li:2019bwg}, even when the deformation of the geometry is relatively small \cite{Konoplya:2010vz,Konoplya:2019hml}. In particular, for higher-dimensional Einstein-Gauss-Bonnet black holes \cite{Rizzo,Konoplya:2010vz} the intensity of Hawking radiation of a black hole whose spacetime is only slightly deformed from the Tangherlini geometry may differ by a few orders of magnitude. Therefore, it is tempting to learn whether the intensity of Hawking radiation is such a sensitive characteristic in Einsteinian cubic gravity as well. Finally, the analysis of various radiation phenomena for the analytical approximation of the numerical black hole solution obtained in \cite{Hennigar:2018hza} at different orders of this approximation is interesting, because it allows us to test the accuracy of the analytical approximation in the context of the recent statement that spherically symmetric and asymptotically flat black holes can be very well described by only three parameters within this parametrization \cite{Konoplya:2020hyk}.
Thus, looking at the quasinormal modes of the above black hole with cubic curvature corrections when the metric is represented with various orders of accuracy, that is, with a larger or smaller number of parameters, we can obtain another test of this statement \cite{Konoplya:2020hyk}. Having all the above motivations in mind, we will study quasinormal modes of scalar, electromagnetic, and Dirac fields in the background of the four-dimensional spherically symmetric and asymptotically flat black hole in the Einsteinian cubic theory of gravity. We will also calculate grey-body factors of test fields for this case and estimate the intensity of Hawking radiation. It will be shown that both the real and imaginary parts of the quasinormal modes, representing respectively the real oscillation frequency and the damping rate, are suppressed due to the cubic corrections. The intensity of Hawking radiation is also considerably decreased by the cubic corrections. The paper is organized as follows. In Sec. II we summarize the basic information on the Einsteinian cubic gravity and the analytical approximation for the black hole metric obtained in \cite{Hennigar:2018hza}. Section III is devoted to calculations of quasinormal modes. In Sec. IV we calculate grey-body factors for test fields, while in Sec. V we find the energy emission rate and the lifetime of the black hole under consideration. Finally, in the Discussion we summarize the obtained results and discuss open problems. \section{The black hole metric} The action for the Einsteinian cubic gravity (ECG) has the form \cite{Bueno:2016xff}, \begin{equation} S=\frac{1}{16 \pi}\int \! d^{4}x \, \sqrt{-g} \left[R-\frac{\lambda}{6} \mathcal{P} \right], \end{equation} where $R$ is the usual Ricci scalar and \begin{align} \mathcal{P} =& \, 12 R_a{}^b{}_c{}^d R_b{}^e{}_d{}^f R_e{}^a{}_f{}^c + R_{ab}^{cd}R_{cd}^{ef}R_{ef}^{ab} \nonumber\\ &- 12 R_{abcd}R^{ac}R^{bd} + 8 R_a^b R_b^c R_c^a \, .
\end{align} Here $\lambda$ is the coupling constant, representing ``the weight'' of the cubic term. The static, spherically symmetric solution was numerically obtained in \cite{Hennigar:2016gkm,Bueno:2016lrh} and has the following form: \begin{equation} ds^2 = -f dt^2+\frac{1}{f}dr^2+r^2d\Omega_{(2)}^2. \end{equation} The field equation for the metric function $f(r)$ is: \begin{align} & 2M = -(f-1) r-\lambda [\frac{f'^{3}}{3}+\frac{f'^{2}}{r}-\frac{2}{r^2} f(f-1)f' \\ &\hspace{1.2cm} -\frac{1}{r} ff'' (r f'-2(f-1))]. \nonumber \end{align} The mass and the Hawking temperature of the black hole are given by the following relations \cite{Hennigar:2016gkm,Bueno:2016lrh}: \begin{subequations} \begin{eqnarray} M=\frac{r_0^3}{12 {\lambda}^2} \left[r_0^6+(2 \lambda-r_0^4) \sqrt{r_0^4+4 \lambda}\right],\\ T=\frac{r_0}{8 \pi \lambda} \left[\sqrt{r_0^4+4 \lambda}-r_0^2\right], \end{eqnarray} \end{subequations} where $r_{0}$ is the radius of the event horizon. Following \cite{Rezzolla:2014mua}, the metric function can be represented in the following form \cite{Hennigar:2018hza}: \begin{equation} f(x) = x\left[1-\varepsilon (1-x)+(b_0-\varepsilon)(1-x)^2+\widetilde{B}(x)(1-x)^3\right], \end{equation} where $x$ is a new compact coordinate, \begin{equation} x =1-\frac{r_0}{r}, \end{equation} and \begin{equation} \widetilde{B}(x) =\frac{b_1}{1+\frac{b_2 x}{1+\frac{b_3 x}{1+\cdots}}}. \end{equation} The above expressions represent an approximation of the numerical metric function in the whole space from the event horizon to infinity. This kind of representation was used to approximate numerical black hole solutions in a number of other theories, for example, in the Einstein-dilaton-Gauss-Bonnet \cite{Kokkotas:2017ymc}, Einstein-scalar-Gauss-Bonnet \cite{Konoplya:2019fpy}, Einstein-Weyl \cite{Kokkotas:2017zwt}, scalar-Maxwell \cite{Konoplya:2019goy}, and quartic \cite{Khodabakhshi:2020hny} theories of gravity.
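As a quick numerical sanity check of the mass and temperature relations above (a Python sketch in units $G = c = 1$, with $r_0 = 1$ and a small illustrative coupling), both expressions reduce to the Schwarzschild values $M = r_0/2$ and $T = 1/(4\pi r_0)$ in the limit $\lambda \to 0$:

```python
import math

def mass(r0, lam):
    """Black-hole mass as a function of the horizon radius r0 and coupling lambda."""
    s = math.sqrt(r0**4 + 4 * lam)
    return r0**3 / (12 * lam**2) * (r0**6 + (2 * lam - r0**4) * s)

def temperature(r0, lam):
    """Hawking temperature for the same solution."""
    return r0 / (8 * math.pi * lam) * (math.sqrt(r0**4 + 4 * lam) - r0**2)

r0, lam = 1.0, 1e-4  # small coupling: expect near-Schwarzschild values
print(mass(r0, lam), temperature(r0, lam))
# approaches M = r0/2 = 0.5 and T = 1/(4*pi*r0) ~ 0.0796 as lam -> 0
```

Expanding the square root for small $\lambda$ indeed gives $M = r_0/2 - 2\lambda/(3 r_0) + \mathcal{O}(\lambda^2)$ and $T = (1 - \lambda/r_0^4 + \ldots)/(4\pi r_0)$, so both the mass and the temperature are slightly decreased by the cubic coupling at fixed horizon radius.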
This parametrization has also been extended to axially symmetric spacetimes \cite{Konoplya:2016jvv,Younsi:2016azx}, representing rotating black holes. The advantage of this continued-fraction expansion is its superior convergence, which usually provides a compact analytical form approximating the numerical metric with sufficient accuracy. \begin{figure} \resizebox{\linewidth}{!}{\includegraphics*{ParameterA.eps}} \caption{The parameter $a$ as a function of the coupling constant $\lambda$ in the units $M=1$.}\label{fig:a} \end{figure} The parameter $\varepsilon$ determines the deviation of the radius of the event horizon from the Schwarzschild radius: \begin{equation} \varepsilon=\frac{2 M}{r_0}-1, \end{equation} while, in order to match the current values of the post-Newtonian parameters, one must have \begin{equation} b_0 =0. \end{equation} The remaining coefficients $b_1$, $b_2$, etc. are fixed by the behavior of the metric near the event horizon and can be expressed in terms of $T$ and $M$ as follows: \begin{equation} b_1 =4 \pi r_0 T+\frac{4 M}{r_0}-3, \end{equation} \begin{equation} b_2 =-\frac{r_0^3 a +16 \pi r_0^2 T+6 (M-r_0)}{4 \pi r_0^2 T+4 M-3 r_0}. \end{equation} Here, for small and moderate values of the coupling constant $\lambda$, the coefficient $a$ can be approximated by Eq. 16 of \cite{Hennigar:2018hza}, while in the general case it can be found only numerically; we plot the values of the parameter $a$ as a function of $\lambda$ in Fig. \ref{fig:a}. The higher order correction is given by the nonzero parameter $b_3$, whose explicit form can be found in the Appendix of \cite{Hennigar:2018hza}. However, as we will see, even the first order expansion given by the nonzero $b_1$ is sufficiently accurate, so that the second coefficient $b_2$ only slightly corrects the observable quantities at sufficiently large values of the coupling constant $\lambda$.
Thus, there is no practical sense in using the third order expansion for the metric. With the above equations at hand, we can analyze quasinormal modes and Hawking radiation for this black hole metric. \section{Quasinormal modes of scalar, Dirac and electromagnetic fields} In this section we will study quasinormal modes of scalar, Dirac and electromagnetic fields. The reduction of the perturbation equations to the master wavelike form for gravitational perturbations is a highly nontrivial problem for the above theory and deserves separate consideration. However, in plenty of cases the behavior of the quasinormal spectrum for test and gravitational fields is qualitatively the same and approaches the universal regime, independent of the spin of the field, in the high frequency (eikonal) limit. The eikonal quasinormal modes of test fields are known to be dual to certain characteristics of null geodesics \cite{Cardoso:2008bp,Konoplya:2017wot}. Moreover, already at sufficiently small values of $\ell$ the quasinormal modes for gravitational and test fields do not differ considerably. The general covariant equations for massless scalar, Dirac and electromagnetic fields have the forms \begin{equation}\label{KGg} \frac{1}{\sqrt{-g}}\partial_\mu \left(\sqrt{-g}g^{\mu \nu}\partial_\nu\Phi\right)=0, \end{equation} \begin{equation}\label{dirac} \gamma^{\alpha} \left( \frac{\partial}{\partial x^{\alpha}} - \Gamma_{\alpha} \right) \Psi=0, \end{equation} \begin{equation}\label{EmagEq} \frac{1}{\sqrt{-g}}\partial_\mu \left(F_{\rho\sigma}g^{\rho \nu}g^{\sigma \mu}\sqrt{-g}\right)=0. \end{equation} Here $F_{\rho\sigma}=\partial_\rho A_{\sigma}-\partial_\sigma A_{\rho}$, $A_\mu$ is the vector potential, $\gamma^{\alpha}$ are the noncommutative gamma matrices and $\Gamma_{\alpha}$ are the spin connections in the tetrad formalism. After separation of variables, Eqs.
(\ref{KGg}), (\ref{dirac}), (\ref{EmagEq}) can be reduced to a second order wavelike differential equation, \begin{equation}\label{wave-equation} \frac{d^2\Psi_s}{dr_*^2}+\left(\omega^2-V_{s}(r)\right)\Psi_s=0, \end{equation} where $s=0$ corresponds to the scalar field, $s=1/2$ to the Dirac field and $s=1$ to the electromagnetic field. The ``tortoise coordinate'' $r_*$ is defined by the relation $$ dr_*=\frac{dr}{f(r)},$$ and the effective potentials are \begin{equation}\label{scalarpotential} V_{0}(r) = f(r)\left(\frac{\ell(\ell+1)}{r^2}+\frac{1}{r}\frac{d f(r)}{dr}\right), \end{equation} \begin{equation} V_{\pm1/2}(r) = \frac{\ell+\frac{1}{2}}{r}\left(\frac{f(r) (\ell+\frac{1}{2})}{r}\mp\frac{\sqrt{f(r)}}{r}\pm\frac{d \sqrt{f(r)}}{dr}\right), \end{equation} \begin{equation}\label{empotential} V_{1}(r) = f(r)\frac{\ell(\ell+1)}{r^2}. \end{equation} The effective potentials for the scalar and electromagnetic fields have the form of a positive definite potential barrier with a single maximum. The effective potential for the Dirac field with the minus sign in front of the derivative of $f(r)$ has a negative gap near the event horizon. However, the potential with the opposite chirality is positive definite, and according to \cite{Zinhailo:2019rwd} stability immediately follows for spherically symmetric black holes due to the isospectrality of the two effective potentials. Quasinormal modes $\omega_{n}$ ($n$ is the overtone number) correspond to solutions of the master wave equation (\ref{wave-equation}) satisfying the requirement of purely outgoing waves at infinity and purely incoming waves at the event horizon: \begin{equation} \Psi_{s} \sim \pm e^{\pm i \omega r^{*}}, \quad r^{*} \rightarrow \pm \infty.
\end{equation} \begin{table} \begin{tabular}{p{1.4cm}cccc} \hline $\lambda$ & Sixth order WKB ($\tilde{m} =5$) & Time domain \\ \hline 0.1 & $0.109907-0.103986 i$ & $0.110381-0.106662 i$ \\ 5.1 & $0.098043-0.094181 i$ & $0.096954-0.094142 i$ \\ 10.1 & $0.093432-0.087633 i$ & $0.090291-0.089237 i$ \\ 15.1 & $0.089158-0.083783 i$ & $0.086726-0.086373 i$ \\ 20.1 & $0.085870-0.081412 i$ & $0.084032-0.084297 i$ \\ 25.1 & $0.083325-0.079730 i$ & $0.081824-0.082673 i$ \\ 30.1 & $0.081282-0.078419 i$ & $0.080154-0.081257 i$ \\ 35.1 & $0.079586-0.077339 i$ & $0.078650-0.080080 i$ \\ 40.1 & $0.078144-0.076416 i$ & $0.077387-0.078936 i$ \\ 45.1 & $0.076891-0.075608 i$ & $0.076269-0.078012 i$ \\ 49.6 & $0.075892-0.074956 i$ & $0.075326-0.077278 i$ \\ \hline \end{tabular} \caption{The fundamental quasinormal mode of the scalar field ($\ell=0$, $n=0$, $M =1$) as a function of $\lambda$. }\label{tab1} \end{table} \begin{table} \begin{tabular}{p{1.4cm}cccc} \hline $\lambda$ & Sixth order WKB ($\tilde{m} =5$) & Time domain \\ \hline 0.1 & $0.181420-0.096074 i$ & $0.181519-0.096383 i$ \\ 5.1 & $0.153629-0.084293 i$ & $0.153441-0.085286 i$ \\ 10.1 & $0.142867-0.079712 i$ & $0.142915-0.081136 i$ \\ 15.1 & $0.136024-0.077057 i$ & $0.136435-0.078456 i$ \\ 20.1 & $0.131134-0.075223 i$ & $0.131714-0.076507 i$ \\ 25.1 & $0.127369-0.073806 i$ & $0.128046-0.074954 i$ \\ 30.1 & $0.124324-0.072642 i$ & $0.125025-0.073694 i$ \\ 35.1 & $0.121773-0.071650 i$ & $0.121744-0.073107 i$ \\ 40.1 & $0.119583-0.070786 i$ & $0.120082-0.071668 i$ \\ 45.1 & $0.117667-0.070018 i$ & $0.118185-0.070931 i$ \\ 49.6 & $0.116128-0.069394 i$ & $0.116588-0.070183 i$ \\ \hline \end{tabular} \caption{The fundamental quasinormal mode of the Dirac field ($\ell=1/2$, $n=0$, $M =1$) as a function of $\lambda$. 
}\label{tab2} \end{table} \begin{table} \begin{tabular}{p{1.4cm}cccc} \hline $\lambda$ & Sixth order WKB ($\tilde{m} =5$) & Time domain \\ \hline 0.1 & $0.246431-0.091973 i$ & $0.246416-0.092013 i$ \\ 5.1 & $0.210903-0.081067 i$ & $0.210776-0.081191 i$ \\ 10.1 & $0.197552-0.076924 i$ & $0.197448-0.077013 i$ \\ 15.1 & $0.189197-0.074298 i$ & $0.189093-0.074385 i$ \\ 20.1 & $0.183132-0.072372 i$ & $0.181134-0.075830 i$ \\ 25.1 & $0.178386-0.070850 i$ & $0.178269-0.070923 i$ \\ 30.1 & $0.174496-0.069594 i$ & $0.174529-0.069749 i$ \\ 35.1 & $0.171208-0.068525 i$ & $0.171089-0.068585 i$ \\ 40.1 & $0.168364-0.067595 i$ & $0.168279-0.067650 i$ \\ 45.1 & $0.165861-0.066772 i$ & $0.165754-0.066772 i$ \\ 49.6 & $0.163842-0.066105 i$ & $0.163741-0.066129 i$ \\ \hline \end{tabular} \caption{The fundamental quasinormal mode of the electromagnetic field ($\ell=1$, $n=0$, $M =1$) as a function of $\lambda$. }\label{tab3} \end{table} \begin{figure} \centerline{\resizebox{\linewidth}{!}{\includegraphics*{WKBautoCubics0_b1_b2_Re.eps}}} \centerline{\resizebox{\linewidth}{!}{\includegraphics*{WKBautoCubics0_b1_b2_Im.eps}}} \caption{The fundamental ($n=0$) quasinormal mode computed by the sixth order WKB approach ($\tilde{m} =5$) for $\ell=0$ scalar perturbations as a function of $\lambda$, $M =1$; the blue line corresponds to the first order approximation ($b_1 \neq 0$, $b_2 =b_3 =...=0$); the red line corresponds to the second order approximation for the metric when $b_1 \neq 0$ and $b_2 \neq 0$.}\label{fig1} \end{figure} \begin{figure} \centerline{\resizebox{\linewidth}{!}{\includegraphics*{WKBautoCubics05_b1_b2_Re.eps}}} \centerline{\resizebox{\linewidth}{!}{\includegraphics*{WKBautoCubics05_b1_b2_Im.eps}}} \caption{The fundamental ($n=0$) quasinormal mode computed by the sixth order WKB approach ($\tilde{m} =5$) for $\ell=1/2$ Dirac perturbations as a function of $\lambda$, $M =1$; the blue line corresponds to the first order approximation ($b_1 \neq 0$, $b_2 =b_3 =...=0$); the
red line corresponds to the second order approximation for the metric when $b_1 \neq 0$ and $b_2 \neq 0$.}\label{fig2} \end{figure} \begin{figure} \centerline{\resizebox{\linewidth}{!}{\includegraphics*{WKBautoCubics1_b1_b2_Re.eps}}} \centerline{\resizebox{\linewidth}{!}{\includegraphics*{WKBautoCubics1_b1_b2_Im.eps}}} \caption{The fundamental ($n=0$) quasinormal mode computed by the sixth order WKB approach ($\tilde{m} =5$) for $\ell=1$ electromagnetic perturbations as a function of $\lambda$, $M =1$; the blue line corresponds to the first order approximation ($b_1 \neq 0$, $b_2 =b_3 =...=0$); the red line corresponds to the second order approximation for the metric when $b_1 \neq 0$ and $b_2 \neq 0$.}\label{fig3} \end{figure} \begin{figure} \resizebox{\linewidth}{!}{\includegraphics*{TimaDomains1.eps}} \caption{Time-domain profile for the electromagnetic field for the multipole number $\ell=1$, $\lambda=0.1$ in the units $M=1$.}\label{fig:TD} \end{figure} To find the low-lying quasinormal modes we will use two independent methods: \begin{enumerate} \item Integration of the wave equation (before the introduction of the stationary ansatz, that is, keeping the second derivative in time instead of $\omega^2$ in the wave equation) in the time domain at a given point in space \cite{Gundlach:1993tp}. We will integrate the wavelike equation rewritten in terms of the light-cone variables $u=t-r_*$ and $v=t+r_*$. The appropriate discretization scheme was proposed in \cite{Gundlach:1993tp}, $$ \Psi\left(N\right)=\Psi\left(W\right)+\Psi\left(E\right)-\Psi\left(S\right)- $$ \begin{equation}\label{Discretization} -\Delta^2\frac{V\left(W\right)\Psi\left(W\right)+V\left(E\right)\Psi\left(E\right)}{8}+{\cal O}\left(\Delta^4\right)\,, \end{equation} where we used the following notation for the points: $N=\left(u+\Delta,v+\Delta\right)$, $W=\left(u+\Delta,v\right)$, $E=\left(u,v+\Delta\right)$ and $S=\left(u,v\right)$.
The initial data are given on the null surfaces $u=u_0$ and $v=v_0$. This method was used in a great number of works (see, for example, \cite{Konoplya:2008au,Churilova:2020bql,Konoplya:2019xmn,Konoplya:2014lha,Zhidenko:2008fp,Turimov:2019afv,Lin:2019fte,Dias:2020ncd} and references therein) and proved its efficiency for testing (in)stability \cite{Konoplya:2008au,Konoplya:2014lha,Dias:2020ncd}, because it takes into consideration the contribution of all overtones for a given multipole number $\ell$. \item In the frequency domain we will use the WKB method of Will and Schutz \cite{Schutz:1985zz}, which was extended to higher orders in \cite{Iyer:1986np,Konoplya:2003ii,Matyjasek:2017psv} and made even more accurate by the usage of Padé approximants in \cite{Matyjasek:2017psv,Hatsuda:2019eoj}. The higher-order WKB formula has the form \cite{Konoplya:2019hlu}, $$ \omega^2=V_0+A_2(\K^2)+A_4(\K^2)+A_6(\K^2)+\ldots - $$ \begin{equation}\nonumber \imo \K\sqrt{-2V_2}\left(1+A_3(\K^2)+A_5(\K^2)+A_7(\K^2)+\ldots\right), \end{equation} where $\K$ takes half-integer values. The corrections $A_k(\K^2)$ of order $k$ to the eikonal formula are polynomials in $\K^2$ with rational coefficients and depend on the values of higher derivatives of the potential $V(r)$ at its maximum. In order to increase the accuracy of the WKB formula, we follow Matyjasek and Opala \cite{Matyjasek:2017psv} and use Padé approximants. Here we will use the sixth order WKB method with $\tilde{m} =5$, where $\tilde{m}$ is defined in \cite{Matyjasek:2017psv,Konoplya:2019hlu}, because this choice provides the best accuracy in the Schwarzschild limit. \end{enumerate} Since both methods (the WKB method and time-domain integration) are very well known (see the reviews \cite{Konoplya:2019hlu,Konoplya:2011qq}), we will not describe them here in more detail, but will simply show that they are in very good agreement in their common range of applicability.
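The time-domain integration of item 1 can be sketched as follows; the Pöschl-Teller potential, the grid size and the Gaussian initial pulse are illustrative stand-ins for the actual effective potentials and initial data, not the setup used in our computations.

```python
import math

def potential(rstar):
    """Toy Poeschl-Teller barrier standing in for the actual V_s(r)."""
    return 0.5 / math.cosh(rstar) ** 2

def evolve(n=200, delta=0.1):
    """Integrate on the null grid; psi[i][j] ~ Psi(u = i*delta, v = j*delta)."""
    psi = [[0.0] * n for _ in range(n)]
    for j in range(n):                     # Gaussian pulse on the surface u = 0
        psi[0][j] = math.exp(-(j * delta - 5.0) ** 2)
    # psi[i][0] = 0 is the initial data on the surface v = 0
    for i in range(1, n):
        for j in range(1, n):
            u, v = i * delta, j * delta            # coordinates of the point N
            VW = potential((v - delta - u) / 2.0)  # r* = (v - u)/2 at W = (u, v - delta)
            VE = potential((v - u + delta) / 2.0)  # r* at E = (u - delta, v)
            psi[i][j] = (psi[i][j - 1] + psi[i - 1][j] - psi[i - 1][j - 1]
                         - delta ** 2 * (VW * psi[i][j - 1] + VE * psi[i - 1][j]) / 8.0)
    return psi

psi = evolve()
# The ringdown profile at fixed r* = 0 is read off along the diagonal u = v.
signal = [psi[k][k] for k in range(len(psi))]
```

In practice the quasinormal frequencies are then extracted from such a signal, e.g. by the Prony method mentioned below.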
The first question we would like to address is how much the quasinormal modes for the first order approximation of the metric (that is, when $b_2 = b_3 =...=0$) differ from those for the second ($b_3 =b_4=...=0$) and higher orders. In other words, which order of the metric approximation is sufficient for the description of the black hole geometry. From Figs. \ref{fig1}-\ref{fig3} one can see that already the first order approximation, which is provided by only two parameters $\varepsilon$ and $b_1$, gives sufficient accuracy: adding the next correction changes the quasinormal modes by a small fraction of $1 \%$. This happens because the metric function changes relatively softly in the region near the black hole, approaching the asymptotic regime relatively slowly. This class of black hole metrics was called ``moderate'' in \cite{Konoplya:2020hyk} and is very well approximated by only a few parameters. We also observe that the damping rate given by the imaginary part of the quasinormal frequency decreases when the coupling constant $\lambda$ is increased, which means longer-lived modes once the cubic correction is turned on. The real oscillation frequency decreases as well when $\lambda$ grows. The results obtained with the help of the WKB method, although known to be sufficiently accurate when the Padé summation is applied, still need an additional check, which was performed by the time-domain integration. The results presented in Tables I, II, III show that there is very good agreement between the two methods: the difference between the results obtained by the two methods is much smaller than the effect, that is, the deviation of the quasinormal frequency from its Schwarzschild value. The typical time-domain profile is shown in Fig. \ref{fig:TD}; it has a power-law tail at the end of the ringdown phase.
Let us notice that the worst situation as to the WKB accuracy and the comparison with the time-domain data is the scalar $\ell=0$ mode, for which, on the one hand, the WKB approach is less accurate than for $\ell>n$ modes, and, on the other hand, there are usually only a few damped oscillations in the signal before the domination of the asymptotic power-law tails. Here, fortunately, the power-law tails begin at sufficiently late times and several oscillations occur even for the lowest $\ell=0$ multipole, so that the Prony method allows one to extract the value of the quasinormal frequency with sufficient accuracy. The prolonged period of quasinormal ringing which we observe in the time domain is a phenomenon that may depend on the initial wave packet rather than on the gravitational theory. Unlike quasinormal frequencies, this characteristic depends not only on the parameters of the black hole, but also on the initial conditions. Therefore, apparently, by looking at different initial conditions mimicking real astrophysical processes, we could learn whether the prolonged period of quasinormal oscillations is an objective fact and not an artifact of the integration scheme and initial conditions.
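For a single damped mode, the Prony-type fit mentioned above reduces to a ratio of consecutive samples; the toy illustration below (with an arbitrarily chosen complex frequency) shows the idea, while real ringdown profiles require the multi-mode version of the fit.

```python
import cmath

# Minimal illustration of Prony-type frequency extraction: for one damped
# mode psi(t) = A exp(-i omega t), the ratio of two equally spaced samples
# yields the complex quasinormal frequency directly.

def extract_single_mode(samples, dt):
    """Recover complex omega from uniform samples of A*exp(-i*omega*t)."""
    z = samples[1] / samples[0]           # z = exp(-i*omega*dt)
    return 1j * cmath.log(z) / dt         # omega = i*ln(z)/dt

omega_true = 0.2464 - 0.0920j             # illustrative value, M = 1 units
dt = 0.5
samples = [cmath.exp(-1j * omega_true * n * dt) for n in range(4)]
omega_fit = extract_single_mode(samples, dt)
```

The recovered `omega_fit` coincides with `omega_true` up to rounding, provided $|\omega|\,\Delta t$ stays within the principal branch of the logarithm.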
\section{Grey-body factors} \begin{figure} \resizebox{\linewidth}{!}{\includegraphics*{CubicGBFDirac.eps}} \caption{Grey-body factors for the Dirac field, $k=1$; $\lambda=0.01$ (blue), $15$ (green), $30$ (red), $40$ (yellow), $50$ (light blue); $M=1$.}\label{fig:GBDirac} \end{figure} \begin{figure} \resizebox{\linewidth}{!}{\includegraphics*{CubicGBFMaxwell.eps}} \caption{Grey-body factors for the Maxwell field, $\ell=1$; $\lambda=0.01$ (blue), $15$ (green), $30$ (red), $40$ (yellow), $50$ (light blue); $M=1$.}\label{fig:GBMaxwell} \end{figure} In order to calculate the fraction of particles scattered back to the event horizon by the effective potential, and to determine the flux of particles that reaches a distant observer, we need to solve the spectral problem with boundary conditions different from the quasinormal ones. We will study the wave equation (\ref{wave-equation}) with boundary conditions allowing for incoming waves from infinity. Owing to the symmetry of the scattering properties, this is identical to the scattering of a wave coming from the horizon, which is natural if one wants to know the fraction of particles reflected back to the horizon. Thus, the scattering boundary conditions for Eq. (\ref{wave-equation}) have the form \begin{equation}\label{BC} \begin{array}{ccll} \Psi_{\ell} &=& e^{-i\omega r_*} + R_{\ell} e^{i\omega r_*},& r_* \rightarrow +\infty, \\ \Psi_{\ell} &=& T_{\ell} e^{-i\omega r_*},& r_* \rightarrow -\infty, \\ \end{array} \end{equation} where $R_{\ell}$ and $T_{\ell}$ are the reflection and transmission coefficients (for a given multipole number $\ell$), so that one has \begin{equation}\label{1} \left|T_{\ell}\right|^2 + \left|R_{\ell}\right|^2 = 1. \end{equation} Once the reflection coefficient is found, we can calculate the transmission coefficient for each $\ell$, \begin{equation} \left|A_{\ell}\right|^2=1-\left|R_{\ell}\right|^2=\left|T_{\ell}\right|^2.
\end{equation} The reflection coefficient can be found within the WKB approach as \begin{equation}\label{moderate-omega-wkb} R = (1 + e^{- 2 i \pi K})^{-\frac{1}{2}}, \end{equation} where $K$ can be found from the following equation: \begin{equation} K - i \frac{(\omega^2 - V_{0})}{\sqrt{-2 V_{0}^{\prime \prime}}} - \sum_{i=2}^{6} \Lambda_{i}(K) =0. \end{equation} Here $V_0$ is the maximum of the effective potential, $V_{0}^{\prime \prime}$ is the second derivative of the effective potential at its maximum with respect to the tortoise coordinate $r_{*}$, and $\Lambda_i$ are the higher order WKB corrections, which depend on $K$ and on up to $2i$th order derivatives of the effective potential at its maximum \cite{Schutz:1985zz,Iyer:1986np,Konoplya:2003ii,Matyjasek:2017psv,Hatsuda:2019eoj}. This approach at the sixth WKB order was used for finding the transmission/reflection coefficients of various black holes and wormholes in \cite{Konoplya:2019ppy,Konoplya:2019hml}, and the WKB results for the energy emission rate of the Schwarzschild black hole obtained in \cite{Konoplya:2019ppy} are in excellent agreement with the numerical calculations of the well-known work by Don Page \cite{Page:1976df}. Here we will mostly use the sixth order WKB formula of \cite{Konoplya:2003ii} and, sometimes, apply lower orders when small frequencies and lower multipoles are under consideration. Fortunately, the WKB method works poorly only for small frequencies, that is, in the region where the reflection is almost total and the grey-body factors are close to zero. Therefore, this inaccuracy of the WKB approach at small frequencies does not affect our estimations of the energy emission rates. From Figs. \ref{fig:GBDirac}, \ref{fig:GBMaxwell} one can see that the grey-body factors at a given value of the real frequency $\omega$ for both the Dirac and Maxwell fields are considerably increased when the coupling constant $\lambda$ is turned on.
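At the lowest WKB order, where the corrections $\Lambda_i$ are dropped, the transmission coefficient follows from the barrier peak $V_0$ and its second derivative alone; the sketch below uses illustrative barrier parameters rather than the actual effective potentials.

```python
import math

# First-order sketch of the WKB grey-body factor: K = i*(omega^2 - V0)/
# sqrt(-2 V0''), and |T|^2 = 1 - |R|^2 with R = (1 + exp(-2*i*pi*K))^(-1/2),
# which reduces to the real expression below.  Barrier values are illustrative.

def transmission(omega, V0, d2V0):
    """|T|^2 at first WKB order; d2V0 = second r*-derivative at the peak (< 0)."""
    kappa = (omega ** 2 - V0) / math.sqrt(-2.0 * d2V0)   # K = i*kappa
    return 1.0 / (1.0 + math.exp(-2.0 * math.pi * kappa))

# Transmission rises from ~0 to ~1 across omega^2 ~ V0, with |T|^2 = 1/2
# exactly at the top of the barrier.
for omega in (0.1, math.sqrt(0.1), 0.6):
    print(transmission(omega, 0.1, -0.02))
```

The limits behave as expected: for $\omega^2 \gg V_0$ the factor tends to one (full transmission), while for $\omega^2 \ll V_0$ it tends to zero (almost total reflection), the regime where, as noted above, the WKB accuracy is anyway irrelevant for the emission rates.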
This means that the height of the potential barrier surrounding the black hole decreases with increasing $\lambda$, which allows a greater number of particles to penetrate the barrier. Thus, the grey-body factors work toward enhancing the Hawking radiation. However, the total effect usually depends more on the temperature of the black hole than on the grey-body factors, and this aspect will be studied in the next section, where we calculate the corresponding energy emission rates. \begin{figure} \resizebox{\linewidth}{!}{\includegraphics*{CubicTemperature.eps}} \caption{Hawking temperature as a function of $\lambda$, $M=1$.}\label{figTemperature} \end{figure} \begin{figure*} \resizebox{\linewidth}{!}{\includegraphics*{CubicDistributionMaxwell.eps}\includegraphics*{CubicDistributionDirac.eps}} \caption{Energy emission rate $\partial^{2} E/\partial t \partial \omega $ for the Maxwell (left) and Dirac (right) fields as a function of $\omega$, $M=1$, $\lambda=2.5$. Blue is for $\ell=1$ ($k=1$ for Dirac); red is for $\ell=2$ ($k=2$ for Dirac). The contribution of the third multipole is very small, but it is still included in the calculations. }\label{figDistribution} \end{figure*} \begin{figure*} \resizebox{\linewidth}{!}{\includegraphics*{CubicFluxMaxwell.eps}\includegraphics*{CubicFluxDirac.eps}} \caption{Total emission $dE/dt$ for the Maxwell (left) and Dirac (right) fields as a function of $\lambda$, $M=1$.}\label{fig:TotalDirac} \end{figure*} \begin{figure} \resizebox{\linewidth}{!}{\includegraphics*{CubicLifeTime.eps}} \caption{Lifetime of the black hole $\tau$ as a function of $\lambda$, $M=1$ in the usual (top) and ultrarelativistic (bottom) regimes.}\label{figLifeTime} \end{figure} \section{Intensity of Hawking radiation and the black hole lifetime} We will assume that the black hole is in thermal equilibrium with its environment in the following sense: the temperature of the black hole does not change between the emissions of two consecutive particles.
This implies that the system can be described by the canonical ensemble \cite{Hawking:1974sw}. Therefore, the energy emission rate for Hawking radiation is calculated by the formula \cite{Hawking:1974sw}, \begin{equation}\label{energy-emission-rate} \frac{\text{d}E}{\text{d} t} = \sum_{\ell} N_{\ell} \left| A_{\ell} \right|^2 \frac{\omega}{\exp\left(\omega/T \right)\pm1} \frac{\text{d} \omega}{2 \pi}, \end{equation} where $T$ is the Hawking temperature, $A_{\ell}$ are the grey-body factors, and $N_{\ell}$ are the multiplicities, which only depend on the space-time dimension and $\ell$. The Hawking temperature for a spherically symmetric black hole is \begin{equation} T = \frac{1}{4 \pi} f'(r) \bigg|_{r=r_{0}}. \end{equation} The multiplicity factors for four-dimensional spherically symmetric black holes consist of the number of degenerate $m$-modes ($m = -\ell, -\ell+1, \ldots, -1, 0, 1, \ldots, \ell$, that is, $2 \ell +1$ modes) multiplied by the number of species of particles, which also depends on the number of polarizations and helicities of the particles. Therefore, we have \begin{equation} N_{\ell} = 2 (2 \ell+1) \qquad \text{(Maxwell)}, \end{equation} \begin{equation} N_{\ell} = 8 k \qquad \text{(Dirac)}. \end{equation} Here $k=\ell + 1/2$ for the Dirac field. The multiplicity factor for the Dirac field is fixed taking into account both the ``plus'' and ``minus'' potentials, which are related by the Darboux transformation; this leads to an isospectral problem and the same grey-body factors for both chiralities. We will use here the ``minus'' potential, because the WKB results are more accurate for that case in the Schwarzschild limit. From Fig. \ref{figTemperature} one can see that the Hawking temperature monotonically decays when the coupling constant $\lambda$ is increased. Therefore, the temperature factor, unlike the transmission coefficients calculated in the previous section, works toward suppressing the Hawking radiation.
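A minimal numerical sketch of the emission-rate formula above is given below; the trapezoid-style quadrature, the step-like placeholder grey-body factor and the chosen temperatures are illustrative, not the actual WKB inputs used in our computations.

```python
import math

# Sketch of dE/dt = sum_l N_l * int |A_l|^2 * omega/(exp(omega/T) -+ 1)
# domega/(2*pi), with a crude quadrature and a placeholder grey-body factor.

def emission_rate(T, greybody, multiplicities, fermion=False,
                  omega_max=2.0, n=2000):
    """Return dE/dt; multiplicities maps ell -> N_ell."""
    sign = 1.0 if fermion else -1.0        # +1 Fermi-Dirac, -1 Bose-Einstein
    d = omega_max / n
    total = 0.0
    for ell, N in multiplicities.items():
        s = 0.0
        for k in range(1, n):               # integrand vanishes at both ends
            w = k * d
            s += greybody(w, ell) * w / (math.exp(w / T) + sign)
        total += N * s * d / (2.0 * math.pi)
    return total

# Illustrative inputs: Maxwell-type multiplicities N_l = 2(2l+1) and a crude
# grey-body factor switching on near an l-dependent barrier height.
gb = lambda w, ell: 1.0 / (1.0 + math.exp(-(w - 0.25 * ell) / 0.02))
rate = emission_rate(0.04, gb, {1: 6, 2: 10})
```

Lowering the temperature in this sketch lowers the rate, which is exactly the competition between the temperature factor and the grey-body factors discussed in the text.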
We can see that, as a result, the exponential temperature factor becomes more influential: the total energy emission rates for both the electromagnetic and Dirac fields are monotonically decreasing (see Fig. \ref{fig:TotalDirac}). Notice that although intuitively one would expect that a colder black hole provides a smaller flux of radiation to a distant observer, this is not always so, as shown, for example, in the Einstein-Weyl theory \cite{Konoplya:2019ppy}. There are two different regimes of emission of particles \cite{Page:1976df}: a) when the black hole mass is sufficiently large, so that the radiation of massive particles can be neglected and the flux consists mainly of massless electron and muon neutrinos, photons, and gravitons; and b) when the black hole mass is sufficiently small, so that the emission of electrons and positrons occurs ultrarelativistically. In the second case, the radiation of electrons and positrons can be approximated by the massless Dirac field and the emission rate of all the Dirac particles must be doubled. Supposing that the peak in the Dirac particles' spectrum $\partial^{2} E/\partial t \partial \omega$ occurs at some $\omega \approx \xi M^{-1}$, we can see from Fig. \ref{figDistribution} that this peak almost does not change when $\lambda$ is increased. The same is true even for sufficiently large $\lambda$. Therefore, the range of energies in which the ultrarelativistic regime of radiation takes place is roughly the same in the Einsteinian cubic gravity as for the Schwarzschild black hole, that is, at \begin{equation}\nonumber m_{e} = 4.19 \times 10^{-23} m_{p} \ll \xi M^{-1} \ll m_{\mu} = 8.65 \times 10^{-21} m_{p}. \end{equation} The energy emitted causes the black hole mass to decrease at the following rate \cite{Page:1976df} \begin{equation} \frac{d M}{d t} = -\frac{\hbar c^4}{G^2} \frac{\alpha_{0}}{M^2}, \end{equation} where we have restored the dimensional constants.
Here $\alpha_{0} = d E/d t$ is taken for a given initial mass $M_{0}$. Since the black hole spends most of its time near its original state $M_{0}$, integrating the above equation gives the lifetime of the black hole, \begin{equation} \tau = \frac{G^2}{\hbar c^4} \frac{M_{0}^3}{3 \alpha_{0}}. \end{equation} From Fig. \ref{figLifeTime} one can see that the lifetime of the black hole is increased by almost one order of magnitude in comparison with the Schwarzschild limit (for which we reproduce the results of \cite{Page:1976df}). The ultrarelativistic emission is characterized by a more intensive evaporation process (lower line). At large values of the coupling constant $\lambda$ the lifetime $\tau$ is roughly proportional to $\lambda$, \begin{equation} \tau \approx 8.7\times 10^{-18} (1+ 0.36 \lambda), \end{equation} and for the ultrarelativistic regime we have \begin{equation} \tau \approx 4.8\times 10^{-18} (1+ 0.36 \lambda). \end{equation} Here we did not consider the emission of gravitons. However, as is known from a number of papers, in four-dimensional theories the contribution of gravitons to the total energy emission is usually very small: it constitutes about $1\% -2\%$ of the total emission for the Schwarzschild black hole \cite{Page:1976df} and for the 4D Einstein-Gauss-Bonnet black holes \cite{Konoplya:2020cbv}. As the effect, that is, the deviation of the energy emission rate from its Schwarzschild value, exceeds $100 \%$, we can safely neglect the contribution of gravitons for a qualitative understanding of the Hawking radiation. Thermodynamic properties of a more general class of black holes with higher curvature corrections have been recently studied in \cite{Bueno:2017qce}. There are no data which could be used for a direct comparison with our results, because here we concentrate on the cubic theory and on calculations of the intensity of Hawking radiation and grey-body factors.
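The quoted lifetime follows from integrating $dM/dt = -\alpha_0/M^2$ at fixed $\alpha_0$; the sketch below checks the closed form $\tau = M_0^3/(3\alpha_0)$ numerically in geometric units ($G = \hbar = c = 1$), with an arbitrary illustrative emission rate $\alpha_0$.

```python
# Sketch: forward-Euler integration of dM/dt = -alpha0 / M^2 reproduces the
# lifetime tau = M0^3 / (3 alpha0).  alpha0 is an illustrative value only.

def lifetime_numeric(M0, alpha0, steps=100000):
    M, t = M0, 0.0
    dt = M0 ** 3 / (3.0 * alpha0) / steps
    while M > 0.05 * M0:                  # stop before the breakdown at M -> 0
        M -= alpha0 / M ** 2 * dt
        t += dt
    return t

M0, alpha0 = 1.0, 1.0e-4
tau_closed = M0 ** 3 / (3.0 * alpha0)
tau_num = lifetime_numeric(M0, alpha0)
```

Because $M^3$ enters the closed form, the final 5% of the mass contributes only about $10^{-2}\,\%$ of the lifetime, so the cutoff barely affects the comparison.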
Nevertheless, there are two important conclusions made in \cite{Bueno:2017qce} which also support our results. The first concerns the duration of the semiclassical regime. In Table I of \cite{Bueno:2017qce} it was noticed that the breakdown of the semiclassical regime occurs at a somewhat different minimal mass, which depends on the new energy scale; this means that the semiclassical regime is clearly determined and the appropriate standard approaches can be used for calculations of the intensity of Hawking radiation. In addition, it is noticed in \cite{Bueno:2017qce} that the temperature of the higher-curvature-corrected black hole is usually smaller, and the estimation of the order of the evaporation time (without taking into consideration the grey-body factors) shows that the lifetime is many orders of magnitude longer when the higher curvature corrections are turned on. This qualitatively agrees with our calculations, which were limited to moderate values of the coupling constant and, nevertheless, detected a strong suppression of Hawking radiation. \section{Discussion} In this work, for the first time, we have calculated quasinormal modes of the scalar, Dirac and electromagnetic fields in the background of the four-dimensional black hole in the Einsteinian cubic gravity. We have also computed the grey-body factors for fields representing emission of photons, electrons, positrons and neutrinos. We have shown that: \begin{itemize} \item When the coupling constant $\lambda$, representing the cubic correction to the Einstein term, is increased, both the damping rate and the real oscillation frequencies are suppressed. \item The grey-body factors are larger for nonzero values of $\lambda$, which works toward increasing the amount of radiation that reaches the observer.
\item Despite this behavior of the grey-body factors, the temperature falls when $\lambda$ is turned on, and the total energy emission rate for all the considered types of particles is decreased, which leads to slower evaporation of the black hole. \item At moderate and large values of the coupling constant $\lambda$ the lifetime of the black hole is roughly proportional to $\lambda$. \end{itemize} There are a number of open questions which were beyond the scope of this publication. First of all, this concerns gravitational perturbations, which are important not only for estimating the constraints on higher curvature corrections from the observation of gravitational waves \cite{Blazquez-Salcedo:2016enn,Ayzenberg:2013wua,deRham:2020ejn} but also because they allow us to test the stability of the black hole \cite{Konoplya:2017lhs,Takahashi:2010ye,Cuyubamba:2016cug,Takahashi:2012np,Konoplya:2020juj}. The stability region is essential when higher curvature corrections are included, as we know from the example of various quadratic theories of gravity. Finally, the slowly rotating black hole which was announced recently in \cite{Adair:2020vso} deserves an analysis of its quasinormal spectrum and Hawking radiation. However, the separation of variables will most probably be impossible in this case. \acknowledgments{ R. K. and A. Z. would like to thank Robert Mann for useful discussions and Robie Hennigar for sharing with us the numerical data shown in Fig. \ref{fig:a}. This work was supported by the 19-03950S GAČR grant and the ``RUDN University Program 5-100''. A. F. Z. acknowledges the SU grant SGS/12/2019 and the 19-03950S GAČR grant.}
\section{Truly multi-dimensional SL algorithm.} \label{sec2} \setcounter{equation}{0} \setcounter{figure}{0} \setcounter{table}{0} \subsection{Algorithm framework} \label{sec2.1} Our goal is to design a high order SL finite difference scheme for the VP system without operator splitting. Consider the VP system \eqref{eq: vlasov} with 1-D in $x$ and 1-D in $v$. The 2-D ${x-v}$ plane is discretized into uniformly spaced rectangular meshes, \[ x_{\frac12} < x_{1+\frac12} < \cdots < x_{i+\frac12}<\cdots < x_{n_x+ \frac12}, \] \[ v_{\frac12} < v_{1+\frac12} < \cdots < v_{j+\frac12}<\cdots < v_{n_v+ \frac12}. \] The center of each rectangular cell $[x_{i-\frac12}, x_{i+\frac12}] \times [v_{j-\frac12}, v_{j+\frac12}]$ is denoted as $(x_i, v_j)$. We consider evolving the numerical solution $f^n_{i, j}$, $i = 1, \cdots, n_x$, $j =1, \cdots, n_v$, where $f^n_{i, j}$ denotes the numerical solution at $(x_i, v_j)$ at the time level $t^n$. The proposed SL algorithm for updating the solution $f^{n+1}_{i, j}$ consists of the following steps. \begin{enumerate} \item Characteristics are traced backward in time to $t^n$. Let the foot of the characteristic at the time level $t^n$ emanating from $(x_i, v_j)$ at $t^{n+1}$ be denoted as $(x^{\star}_i, v^{\star}_j)$. It is approximated by numerically solving the following final value problem \begin{equation} \label{eq: char} \left \{ \begin{array}{l} \frac{d{x}(t)}{dt} = {v(t)}, \\[2mm] \frac{d{v(t)}}{dt} = {E(x(t), t)},\\[2mm] x(t^{n+1}) = x_i, \\[2mm] v(t^{n+1}) = v_j. \end{array} \right. \end{equation} Here, we remark that solving \eqref{eq: char} with high order temporal accuracy is non-trivial. In particular, the electric field $E$ depends on the unknown function $f$ via the Poisson's equation \eqref{eq: poisson} in a global rather than local fashion. Moreover, since \eqref{eq: char} is a final value problem, the electric field $E$ is initially known only at the time level $t^n$.
In Section~\ref{sec2.2}, we discuss the proposed high order (up to third order) way of tracing characteristics in time. \item The solution is updated as \begin{equation} \label{eq: update} f^{n+1}_{i, j} = f(x^{n, (l)}_i, v^{n, (l)}_j, t^n) \approx f(x^{\star}_i, v^{\star}_j, t^n). \end{equation} We propose to recover $f(x^{n, (l)}_i, v^{n, (l)}_j, t^n)$ by a high order (up to sixth order) WENO interpolation from $f^n_{i, j}$, $i = 1, \cdots, n_x$, $j =1, \cdots, n_v$. The procedures are discussed in Section~\ref{sec2.3}. \end{enumerate} \subsection{Tracing characteristics with high order temporal accuracy} \label{sec2.2} It is numerically challenging to design a one-step method to locate the foot of characteristics with high order accuracy in time. The electric field $E$ is not explicitly known; it is induced by the unknown function $f$ via the Poisson equation \eqref{eq: poisson}. Since it is difficult to evaluate the electric field $E$ (r.h.s. of equation \eqref{eq: char}) at intermediate time stages in $[t^n, t^{n+1}]$, Runge-Kutta methods cannot be used directly. Below we describe our proposed predictor-corrector procedure for locating the foot of characteristics. We will first describe a first order scheme for tracing characteristics; the second order scheme is built upon the first order prediction; and the proposed third order scheme is built upon the second order prediction. In our notation, the superscript $^n$ denotes the time level, the subscripts $i$ and $j$ denote the locations $x_i$ and $v_j$ in the $x$ and $v$ directions respectively, and the superscript $^{(l)}$ denotes the formal order of approximation. For example, in equation \eqref{eq: x_v_1} below, $x^{n, (1)}_{i}$ (or $v^{n, (1)}_{j}$) approximates $x_i^\star$ (or $v_j^\star$) to first order, and $E^n_i = E(x_i, t^n)$. $\frac{d}{dt} = \frac{\partial}{\partial t} + \frac{dx}{dt}\frac{\partial}{\partial x}$ denotes the material derivative along characteristics.
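The two-step update of Section~\ref{sec2.1} can be illustrated with a minimal sketch for the special case of free streaming ($E=0$), where the foot of the characteristic is exact, $x^\star_i = x_i - v_j \Delta t$, and only the interpolation step remains. Periodic linear interpolation stands in here for the WENO interpolation; this is an illustrative simplification, not the proposed scheme:

```python
import numpy as np

def sl_step_free_streaming(f, x, v, dt, L):
    """One SL step for f_t + v f_x = 0: trace feet back, then interpolate.

    f has shape (n_x, n_v); periodic linear interpolation in x stands in
    for the high order WENO interpolation of the full scheme."""
    f_new = np.empty_like(f)
    for j, vj in enumerate(v):
        feet = (x - vj * dt) % L                              # step 1: trace back
        f_new[:, j] = np.interp(feet, x, f[:, j], period=L)   # step 2: interpolate
    return f_new

L_dom = 2.0 * np.pi
nx, nv = 64, 8
x = np.arange(nx) * L_dom / nx
v = np.linspace(-1.0, 1.0, nv)
f0 = np.sin(x)[:, None] * np.ones(nv)[None, :]
f1 = sl_step_free_streaming(f0, x, v, 0.5, L_dom)
# exact solution of free streaming: f(x, v, t) = f(x - v t, v, 0)
err = np.max(np.abs(f1 - np.sin(x[:, None] - 0.5 * v[None, :])))
```

For this smooth test the error is just the $\mathcal{O}(\Delta x^2)$ linear interpolation error; the full scheme replaces the linear interpolation with the sixth order WENO interpolation of Section~\ref{sec2.3} and traces the feet by the predictor-corrector procedure of Section~\ref{sec2.2}.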
The order of approximation mentioned in this subsection refers to temporal accuracy. We propose to use a spectrally accurate fast Fourier transform (FFT) to solve the Poisson equation \eqref{eq: poisson}, whose r.h.s. function $\rho(x, t) = \int f(x, v, t)dv$ is evaluated numerically by a mid-point rule. The mid-point rule is of spectral accuracy provided the integrand is either periodic or compactly supported \cite{boyd2001caf}. \bigskip \noindent \underline{\em \bf First order scheme.} We let \begin{equation} \label{eq: x_v_1} x^{n, (1)}_i = x_i - v_j \Delta t; \quad v^{n, (1)}_j = v_j - E^n_i \Delta t, \end{equation} which are first order approximations to $x_i^\star$ and $v_j^\star$, see Proposition~\ref{prop: order1} below. Let \begin{equation} \label{eq: f_1} f^{n+1, (1)}_{i, j} = f(x^{n, (1)}_i, v^{n, (1)}_j, t^n), \end{equation} which is a first order in time approximation to $f^{n+1}_{i, j}$. Note that the spatial approximation in equation \eqref{eq: f_1} (and in other similar equations in this subsection) is performed via the high order WENO interpolation discussed in Section~\ref{sec2.3}. Based on $\{f^{n+1, (1)}_{i, j}\}$, we compute \[ \rho^{n+1, (1)}_i, \quad E^{n+1, (1)}_i \] by using the mid-point rule and the FFT-based solver for the Poisson equation \eqref{eq: poisson}. Note that $\rho^{n+1, (1)}_i$ and $E^{n+1, (1)}_i$ also approximate $\rho^{n+1}_i$ and $E^{n+1}_i$ with first order temporal accuracy. \begin{prop} \label{prop: order1} $x^{n, (1)}_i$ and $v^{n, (1)}_j$ constructed in equation \eqref{eq: x_v_1} are first order approximations to $x_i^\star$ and $v_j^\star$ in time.
\end{prop} \noindent {\em Proof.} By Taylor expansion, \begin{eqnarray} x_i^\star &=& x_i - \frac{d x_i}{dt}(x_i, v_j, {t^{n+1}}) \Delta t + \mathcal{O}(\Delta t^2) \nonumber\\ &=& x_i - v_j \Delta t + \mathcal{O}(\Delta t^2) \nonumber\\ &\stackrel{\eqref{eq: x_v_1}}{=}& x^{n, (1)}_i + \mathcal{O}(\Delta t^2), \nonumber \end{eqnarray} \begin{eqnarray} v_j^\star &=& v_j - \frac{d v_j}{dt}|_{t^{n+1}} \Delta t + \mathcal{O}(\Delta t^2) \nonumber\\ &=& v_j - E^{n+1}_i \Delta t + \mathcal{O}(\Delta t^2) \nonumber\\ &=& v_j - (E^{n}_i + \mathcal{O}(\Delta t)) \Delta t + \mathcal{O}(\Delta t^2) \nonumber\\ &\stackrel{\eqref{eq: x_v_1}}{=} & v^{n, (1)}_j + \mathcal{O}(\Delta t^2). \nonumber \end{eqnarray} Hence $x^{n, (1)}_i$ and $v^{n, (1)}_j$ are second order approximations to $x_i^\star$ and $v_j^\star$ locally in time for a single time step; the approximation is of first order in time globally. We remark that the proposed first order scheme is similar to, but different from, the standard forward Euler or backward Euler integrator; it is specially tailored to the system \eqref{eq: char}. $\mbox{ }\rule[0pt]{1.5ex}{1.5ex}$ \bigskip \noindent \underline{\em \bf Second order scheme.} We let \begin{equation} \label{eq: x_v_2} x^{n, (2)}_i = x_i - \frac12 (v_j + v_j^{n, (1)}) \Delta t, \quad v^{n, (2)}_j = v_j - \frac12 (E(x_i^{n, (1)}, t^n) + E^{n+1, (1)}_i)\Delta t, \end{equation} which are second order approximations to $x_i^\star$ and $v_j^\star$, see Proposition~\ref{prop: order2} below. Note that $E(x_i^{n, (1)}, t^n)$ in equation \eqref{eq: x_v_2} can be approximated by WENO interpolation from $\{E^n_i\}_{i=1}^{n_x}$. Let $ f^{n+1, (2)}_{i, j} = f(x^{n, (2)}_i, v^{n, (2)}_j, t^n), $ which approximates $f^{n+1}_{i, j}$ to second order in time. Based on $\{f^{n+1, (2)}_{i, j}\}$, we compute $ \rho^{n+1, (2)}_i, \quad E^{n+1, (2)}_i $ approximating $\rho^{n+1}_i$ and $E^{n+1}_i$ with second order temporal accuracy.
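The nesting of the first and second order foot-tracing formulas can be sketched as follows, for a prescribed, $x$-independent field $E(t) = \sin t$. This assumption is made only so that the exact characteristics are available for testing; in the VP solver the value at $t^{n+1}$ would instead be the predicted $E^{n+1,(1)}$:

```python
import math

def foot_order1(x1, v1, E, tn, dt):
    # analog of the first order formulas: x* ~ x1 - v1*dt, v* ~ v1 - E(t^n)*dt
    return x1 - v1 * dt, v1 - E(tn) * dt

def foot_order2(x1, v1, E, tn, dt):
    # corrector built on the first order prediction (trapezoid-type averages)
    t1 = tn + dt
    _, v_pred = foot_order1(x1, v1, E, tn, dt)
    x2 = x1 - 0.5 * (v1 + v_pred) * dt
    v2 = v1 - 0.5 * (E(tn) + E(t1)) * dt   # E(t1) stands in for E^{n+1,(1)}
    return x2, v2

def exact_foot(x1, v1, tn, dt):
    # exact backward solution of x' = v, v' = sin(t), final data at t1 = tn + dt
    t1 = tn + dt
    v_star = v1 - (math.cos(tn) - math.cos(t1))
    x_star = x1 - (v1 * dt - (math.sin(t1) - math.sin(tn)) + math.cos(t1) * dt)
    return x_star, v_star

def one_step_error(foot, dt, x1=1.0, v1=0.5, tn=0.3):
    xs, vs = exact_foot(x1, v1, tn, dt)
    xa, va = foot(x1, v1, math.sin, tn, dt)
    return abs(xa - xs) + abs(va - vs)

e1, e1_half = one_step_error(foot_order1, 0.1), one_step_error(foot_order1, 0.05)
e2, e2_half = one_step_error(foot_order2, 0.1), one_step_error(foot_order2, 0.05)
```

Halving $\Delta t$ reduces the one-step error by a factor of roughly $4$ for the first order formulas and roughly $8$ for the corrector, consistent with local truncation errors of $\mathcal{O}(\Delta t^2)$ and $\mathcal{O}(\Delta t^3)$.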
\begin{prop} \label{prop: order2} $x^{n, (2)}_i$ and $v^{n, (2)}_j$ constructed in equation \eqref{eq: x_v_2} are second order approximations to $x_i^\star$ and $v_j^\star$ in time. \end{prop} \noindent {\em Proof.} It can be checked by Taylor expansion \begin{eqnarray} x_i^\star &=& x_i - \left(\frac{d x}{dt}(x_i, v_j, {t^{n+1}}) + \frac{d x}{dt}(x_i^\star, v_j^\star, t^{n})\right) \frac{\Delta t}{2} + \mathcal{O}(\Delta t^3) \nonumber\\ &=& x_i - \left(v_j^\star + v_j \right) \frac{\Delta t}{2} + \mathcal{O}(\Delta t^3) \nonumber\\ &\stackrel{Prop. \ref{prop: order1}}{=}& x_i - \left(v_j^{n, (1)} + \mathcal{O}(\Delta t^2) + v_j \right) \frac{\Delta t}{2} + \mathcal{O}(\Delta t^3) \nonumber\\ &=& x_i - \left(v_j^{n, (1)} + v_j \right) \frac{\Delta t}{2} + \mathcal{O}(\Delta t^3) \nonumber\\ &\stackrel{\eqref{eq: x_v_2}}{=}& x^{n, (2)}_i + \mathcal{O}(\Delta t^3). \nonumber \end{eqnarray} Similarly, \begin{eqnarray} v_j^\star &=& v_j - \left(E^{n+1}_i + E(x_i^\star, t^n)\right) \frac{\Delta t}{2} + \mathcal{O}(\Delta t^3) \nonumber\\ &\stackrel{Prop. \ref{prop: order1}}{=}& v_j - \left(E^{n+1, (1)}_i + E(x^{n, (1)}_i, t^n) + \mathcal{O}(\Delta t^2)\right)\frac{\Delta t}2 + \mathcal{O}(\Delta t^3) \nonumber\\ &\stackrel{\eqref{eq: x_v_2}}{=} & v^{n, (2)}_j + \mathcal{O}(\Delta t^3). \nonumber \end{eqnarray} Hence $x^{n, (2)}_i$ and $v^{n, (2)}_j$ are third order approximations to $x_i^\star$ and $v_j^\star$ locally in time for a time step; the approximation is of second order in time globally. Again the proposed second order scheme tailored to the system \eqref{eq: char} is similar to, but slightly different from, the second order Runge-Kutta integrator based on the trapezoid rule. 
$\mbox{ }\rule[0pt]{1.5ex}{1.5ex}$ \bigskip \noindent \underline{\em \bf Third order scheme.} We let \begin{equation} \label{eq: x_3} x^{n, (3)}_i = x_i - v_j \Delta t + \frac{\Delta t^2}2 (\frac23 E^{n+1, (2)}_i + \frac13 E(x_i^{n, (2)}, t^n)), \end{equation} \begin{eqnarray} \label{eq: v_3} v^{n, (3)}_j &=& v_j - E^{n+1, (2)}_i \Delta t + \frac{\Delta t^2}2 \left( \frac23 (\frac{d}{dt}E(x_i, t^{n+1}))^{(2)} + \frac13 \frac{d}{dt}E(x_i^{n, (2)}, t^n) \right), \end{eqnarray} which are third order approximations to $x_i^\star$ and $v_j^\star$, see Proposition~\ref{prop: order3} below. Note that the $\frac{d}{dt}E$ terms on the r.h.s. of equation \eqref{eq: v_3} will be obtained by using the macro-equations described below. Let $ f^{n+1, (3)}_{i, j} = f(x^{n, (3)}_i, v^{n, (3)}_j, t^n), $ which approximates $f^{n+1}_{i, j}$ to third order in time. Based on $\{f^{n+1, (3)}_{i, j}\}$, we compute $ \rho^{n+1, (3)}_i, \quad E^{n+1, (3)}_i $ approximating $\rho^{n+1}_i$ and $E^{n+1}_i$ with third order temporal accuracy. \begin{rem}We note that the mechanism used to build this third order scheme is different from that of Runge-Kutta methods, where intermediate stage solutions are constructed. It has some similarity in spirit to the Taylor-series (Lax-Wendroff type) method, where higher order time derivatives are recursively transformed into spatial derivatives. The difference with the Lax-Wendroff type time integration is that the Lax-Wendroff method uses spatial derivatives at only one time level, while the proposed method uses the spatial derivatives (or their high order approximations) at both $t^n$ and $t^{n+1}$ via a predictor-corrector procedure. In a sense, the proposed method is a two-stage multi-derivative method.
\end{rem} With $ \frac{\partial E}{\partial x} = \rho-1$ from the Poisson equation \eqref{eq: poisson}, to compute the Lagrangian time derivative along characteristics $\frac{d}{dt}E = (\frac{\partial}{\partial t} + v \frac{\partial}{\partial x})E$, we only need to numerically approximate $\frac{\partial E}{\partial t}$. Notice that if we integrate the Vlasov equation \eqref{eq: vlasov} over $v$, we have \begin{equation} \label{eq: moment0} \rho_t + J_x = 0, \end{equation} where $\rho(x, t)$ is the charge density and $J(x, t) = \int f v dv$ is the current density. With the Poisson equation \eqref{eq: poisson}, and from eq.~\eqref{eq: moment0}, we have $ \frac{\partial}{\partial x} (E_t + J) =0, $ that is, $E_t + J$ is independent of the spatial variable $x$. Thus \[ E_t + J = \frac1L \int (E_t + J(x, t)) dx = \frac1L \int J(x, t) dx, \] where the last equality is due to the periodic boundary condition of the problem. It can be shown, by multiplying the Vlasov equation \eqref{eq: vlasov} by $v$ and performing integration in both the $x$- and $v$- directions, that \[ \frac{\partial}{\partial t} \int J(x, t) dx = 0, \] therefore \begin{equation} \frac{\partial}{\partial t} E(x, t) + J = \frac1L \int J(x, t=0) dx \doteq \bar{J^0}, \nonumber \end{equation} where $\bar{\cdot}$ denotes the spatial average. Hence, \begin{equation} \label{eq: dt_E} \frac{d}{dt} E = (\frac{\partial}{\partial t} + v \frac{\partial}{\partial x}) E = \bar{J^0} - J(x, t) + v (\rho-1). \end{equation} Specifically, in equation \eqref{eq: v_3} \begin{eqnarray} (\frac{d}{dt}E(x_i, t^{n+1}))^{(2)} &=& \bar{J^0} - J^{n+1, (2)}_i + v_j (\rho^{n+1, (2)}_i-1), \nonumber\\ \frac{d}{dt}E(x_i^{n, (2)}, t^n) &=& \bar {J^0} - J (x_i^{n, (2)}, t^n) + v_j^{n, (2)} (\rho(x_i^{n, (2)}, t^n)-1).
\nonumber \end{eqnarray} Note that $J^{n+1, (2)}_i$ and $J^{n}_i$ can be evaluated by mid-point rule from $\{f^{n+1, (2)}_{i, j} \}$ and $\{f^{n}_{i, j} \}$ respectively with spectral accuracy in space; while $J (x_i^{n, (2)}, t^n)$ can be numerically approximated by WENO interpolation from $J^n_i$. \begin{prop} \label{prop: order3} $x^{n, (3)}_i$ and $v^{n, (3)}_j$ constructed in equation \eqref{eq: x_3}-\eqref{eq: v_3} are third order approximations to $x_i^\star$ and $v_j^\star$ in time. \end{prop} \noindent {\em Proof.} It can be checked by Taylor expansion \begin{eqnarray} x_i^\star &=& x_i - \frac{d x}{dt}(x_i, v_j, {t^{n+1}}) {\Delta t} + \left(\frac23 \frac{d^2 x_i}{dt^2}(x_i, v_j, {t^{n+1}}) + \frac13 \frac{d^2 x_i}{dt^2}(x_i^\star, v_j^\star, {t^{n}})\right) \frac{\Delta t^2}{2} +\mathcal{O}(\Delta t^4) \nonumber\\ &=& x_i - v_j {\Delta t} + \left(\frac23 E^{n+1}_i + \frac13 E(x^\star_i, t^n)\right) \frac{\Delta t^2}{2} + \mathcal{O}(\Delta t^4) \nonumber\\ &\stackrel{Prop. \ref{prop: order2}}{=}& x_i - v_j {\Delta t} + \left(\frac23 E^{n+1, (2)}_i + \frac13 E(x_i^{n, (2)}, t^n) + \mathcal{O}(\Delta t^3)\right) \frac{\Delta t^2}{2} +\mathcal{O}(\Delta t^4) \nonumber\\ &\stackrel{\eqref{eq: x_3}}{=}& x^{n, (3)}_i + \mathcal{O}(\Delta t^4). \nonumber \end{eqnarray} Similarly, \begin{eqnarray} v_j^\star =&& v_j - E^{n+1}_i {\Delta t} + \left(\frac23 \frac{d E}{dt}(x_i, {t^{n+1}}) + \frac13 \frac{dE}{dt}(x_i^\star, {t^{n}})\right) \frac{\Delta t^2}{2} +\mathcal{O}(\Delta t^4) \nonumber\\ \stackrel{Prop. \ref{prop: order2}}{=}&& v_j - (E^{n+1, (2)}_i + \mathcal{O}(\Delta t^3)) {\Delta t} \nonumber\\ && + \left(\frac23 (\frac{d E}{dt}(x_i, {t^{n+1}}))^{(2)} + \frac13 \frac{dE}{dt}(x_i^{n, (2)}, {t^{n}})+ \mathcal{O}(\Delta t^3) \right) \frac{\Delta t^2}{2} + \mathcal{O}(\Delta t^4) \nonumber\\ \stackrel{\eqref{eq: v_3}}{=} && v^{n, (3)}_j + \mathcal{O}(\Delta t^4). 
\nonumber \end{eqnarray} Hence $x^{n, (3)}_i$ and $v^{n, (3)}_j$ are fourth order approximations to $x_i^\star$ and $v_j^\star$ locally in time for a single time step; the approximation is of third order in time globally. $\mbox{ }\rule[0pt]{1.5ex}{1.5ex}$ \bigskip \noindent \underline{\em \bf Higher order extensions.} The procedures proposed above for locating the foot of characteristics can be extended to schemes with higher order temporal accuracy by using higher order versions of the Taylor expansion, e.g. as in equations~\eqref{eq: x_3} and \eqref{eq: v_3}. As higher order material derivatives, e.g. $\frac{d^2}{dt^2}E$, are involved, a set of macro-equations derived from the Vlasov equation is needed. Specifically, we propose to multiply the Vlasov equation \eqref{eq: vlasov} by $v^k$, integrate over $v$ and obtain \[ \frac{\partial }{\partial t} M_k + \frac{\partial }{\partial x} M_{k+1} - k E M_{k-1} = 0, \] where $M_k(x, t) = \int f(x, v, t) v^k dv$. In particular, $M_0 = \rho(x, t)$ is the charge density and $M_1 = J(x, t)$ is the current density. When $k=0$, we recover equation \eqref{eq: moment0}; when $k=1$, we have \begin{equation} \label{eq: moment1} \frac{\partial }{\partial t} J + \frac{\partial }{\partial x} M_2 - E \rho = 0. \end{equation} With these, we have \begin{eqnarray} \frac{d^2E}{dt^2} &\stackrel{\eqref{eq: dt_E}}{=}& (\frac{\partial }{\partial t} + v \frac{\partial }{\partial x} + E \frac{\partial }{\partial v}) (\bar{J^0} - J(x, t) + v (\rho-1)) \nonumber\\ \label{eq: dEdt2} &\stackrel{\eqref{eq: moment1}}{=}& v^2 \pad{\rho}{x} + \pad{M_2}{x} -2v\pad{J}{x} - E, \end{eqnarray} where the $\frac{\partial}{\partial v}$ term enters because $v$ varies along the characteristics ($\frac{dv}{dt}=E$), and where the spatial derivative terms can be evaluated by high order WENO interpolations or reconstructions. \subsection{High order WENO interpolations.} \label{sec2.3} In this subsection, we discuss the procedures of spatial interpolation to recover information among grid points, e.g. to update the numerical solution by equation \eqref{eq: update}, and of spatial reconstruction to recover function derivatives at grid points, e.g.
in computing spatial derivatives in equation~\eqref{eq: dEdt2}. There is a variety of interpolation choices, such as the piecewise parabolic method (PPM) \cite{colella1984piecewise}, spline interpolation \cite{crouseilles2007hermite}, cubic interpolation propagation (CIP) \cite{yabe2001cip}, and ENO/WENO interpolation \cite{carrillo2007nim, Qiu_Shu2}. In our work we adopt WENO interpolations. \bigskip \noindent \underline{\em \bf WENO interpolations.} High order accuracy is achieved by using several points in the neighborhood: the number of points used in the interpolation determines the order of interpolation. WENO \cite{Shu_book, carrillo2007nim, Qiu_Shu2}, short for `weighted essentially non-oscillatory', is a well-developed adaptive procedure to overcome the Gibbs phenomenon when the solution is under-resolved or contains discontinuities. Specifically, when the solution is smooth, the WENO interpolation reduces to the linear interpolation, achieving very high order accuracy; when the solution is under-resolved, the WENO interpolation automatically assigns more weight to smoother stencils. The smoothness of a stencil is measured by the divided differences of the numerical solution. Below we provide formulas for the sixth order WENO interpolation, which is what we use in our simulations.
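The blending step common to all WENO interpolations can be sketched generically as follows (a schematic sketch only; the concrete stencil polynomials $P_m$, linear weights $\gamma_m$ and smoothness indicators $\beta_m$ for the sixth order case are given below):

```python
def weno_blend(p_vals, gammas, betas, eps=1e-6):
    """Blend candidate-stencil interpolant values P_m using nonlinear
    weights w_m ~ gamma_m / (eps + beta_m)^2, normalized to sum to one."""
    w_tilde = [g / (eps + b) ** 2 for g, b in zip(gammas, betas)]
    s = sum(w_tilde)
    return sum(w / s * p for w, p in zip(w_tilde, p_vals))

# smooth data: comparable beta's reduce the weights to the linear gamma's
q_smooth = weno_blend([1.0, 2.0, 3.0], [0.2, 0.5, 0.3], [0.1, 0.1, 0.1])
# one badly non-smooth stencil (huge beta) is effectively switched off
q_shock = weno_blend([1.0, 2.0, 100.0], [1 / 3, 1 / 3, 1 / 3], [1e-3, 1e-3, 1e6])
```

When all smoothness indicators are comparable, the nonlinear weights fall back to the linear weights $\gamma_m$ and the full high order linear interpolation is recovered; a large $\beta_m$ essentially removes the corresponding stencil from the combination.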
The sixth order WENO interpolation at a position $x\in [x_{i-1}, x_{i}]$ (or $\xi \doteq \frac{x-x_i}{\Delta x} \in [-1, 0]$) is obtained by \[ Q(\xi) = \omega_1 P_1(\xi) + \omega_2 P_2(\xi) + \omega_3 P_3(\xi), \] where \[ P_1(\xi) = (f_{i-3}, f_{i-2}, f_{i-1}, f_i) \, \left ( \begin{array}{llll} 0&-1/3&-1/2&-1/6\\ 0 & 3/2 & 2&1/2 \\ 0&-3&-5/2&-1/2\\ 1&11/6&1&1/6\\ \end{array} \right ) \, \left ( \begin{array}{l} 1\\ \xi\\ \xi^2\\ \xi^3 \end{array} \right ), \] \[ P_2(\xi) = (f_{i-2}, f_{i-1}, f_i, f_{i+1}) \, \left ( \begin{array}{llll} 0&1/6&0&-1/6\\ 0 & -1 & 1/2&1/2 \\ 1&1/2&-1&-1/2\\ 0&1/3&1/2&1/6 \end{array} \right ) \, \left ( \begin{array}{l} 1\\ \xi\\ \xi^2\\ \xi^3 \end{array} \right ), \] \[ P_3(\xi) = (f_{i-1}, f_i, f_{i+1}, f_{i+2}) \, \left ( \begin{array}{llll} 0&-1/3&1/2&-1/6\\ 1 & -1/2 & -1&1/2 \\ 0&1&1/2&-1/2\\ 0&-1/6&0&1/6 \end{array} \right ) \, \left ( \begin{array}{l} 1\\ \xi\\ \xi^2\\ \xi^3 \end{array} \right ). \] Linear weights \[ \gamma_1(\xi) = \frac{1}{20}(\xi-1)(\xi-2) , \quad \gamma_2(\xi) = -\frac{1}{10}(\xi+3)(\xi-2), \quad \gamma_3(\xi) = \frac{1}{20}(\xi+3)(\xi+2) . 
\] Nonlinear weights are chosen to be $$ \omega_m = \frac {\tilde{\omega}_m} {\sum_{l=1}^3 \tilde{\omega}_l},\qquad \mbox{with} \quad \tilde{\omega}_l = \frac {\gamma_l}{(\varepsilon + \beta_l)^2} , \quad l = 1, 2, 3, $$ where $\varepsilon=10^{-6}$, and the smoothness indicators are \begin{eqnarray} \beta_1 = -9\,f_{{i-3}}f_{{i-2}}+4/3\,{f_{{i-3}}}^{2}-11/3\,f_{{i-3}}f_{{i}}+10\,f_{{i-3}} f_{{i-1}}+14\,f_{{i-2}}f_{{i}}\nonumber\\ +22\,{f_{{i-1}}}^{2}-17\,f_{{i-1}}f_{{i}}+10/3\, {f_{{i}}}^{2}+16\,{f_{{i-2}}}^{2}-37\,f_{{i-2}}f_{{i-1}},\nonumber \end{eqnarray} \begin{eqnarray} \beta_2 = -7\,f_{{i-2}}f_{{i-1}}+4/3\,{f_{{i-2}}}^{2}-5/3\,f_{{i-2}}f_{{i+1}}+6\,f_{{i-2}}f_{{i}}+6\,f_{{i-1}}f_{{i+1}}\nonumber\\ +10\,{f_{{i}}}^{2}-7\,f_{{i}}f_{{i+1}}+4/3\,{f_{{i+1}}}^{2}+10\,{f_{{i-1}}}^{2}-19\,f_{{i-1}}f_{{i}},\nonumber \end{eqnarray} \begin{eqnarray} \beta_3 = -17\,f_{{i-1}}f_{{i}}+10/3\,{f_{{i-1}}}^{2}-11/3\,f_{{i-1}}f_{{i+2}}+14\,f_{{i-1 }}f_{{i+1}}+10\,f_{{i}}f_{{i+2}}\nonumber\\ +16\,{f_{{i+1}}}^{2}-9\,f_{{i+1}}f_{{i+2}}+4/3\, {f_{{i+2}}}^{2}+22\,{f_{{i}}}^{2}-37\,f_{{i}}f_{{i+1}}.\nonumber \end{eqnarray} \subsection{Computational cost and savings} One of the procedures in the proposed algorithm that takes up much computational time is tracing the feet of characteristics. Assuming $N = n_x = n_v$, the scheme involves solving the Poisson equation via FFT at a cost on the order of $N \log(N)$, and a high order 2-D WENO interpolation at a cost on the order of $C N^2$, where the constant $C$ is larger when the order of interpolation is higher. Since the 2-D WENO interpolation (compared with the 1-D Poisson solver) is the procedure that takes up most of the computational time, we will use the number of 2-D WENO interpolations as a measure of computational cost. For the first order scheme \eqref{eq: x_v_1}, one high order 2-D WENO interpolation is involved.
The proposed second order scheme \eqref{eq: x_v_2} is based on the first order prediction: two high order 2-D WENO interpolations are involved. This leads to twice the computational cost of the first order scheme. The third order scheme \eqref{eq: x_3} - \eqref{eq: v_3} is based on the second order prediction: three high order 2-D WENO interpolations are involved. We claim that the proposed high order procedures are computationally efficient: the computational cost grows roughly linearly with the order of approximation. To further save computational cost, we propose to use lower order 2-D WENO interpolations in the prediction steps. Specifically, in the third order scheme \eqref{eq: x_3} - \eqref{eq: v_3}, we propose to use a second order 2-D WENO interpolation in the first order prediction, a fourth order 2-D WENO interpolation in the second order prediction, and a sixth order 2-D WENO interpolation in the final updating step. \section{Conclusion} \label{sec5} \setcounter{equation}{0} \setcounter{figure}{0} \setcounter{table}{0} In this paper, we propose a systematic way of tracing characteristics with high order temporal accuracy for a Vlasov-Poisson system that is one-dimensional in space and one-dimensional in velocity. Based on this mechanism, a finite difference grid-based semi-Lagrangian approach coupled with WENO interpolation is proposed to evolve the system. It is numerically demonstrated that schemes with higher orders of temporal accuracy perform better in many respects than the first order one. Designing semi-Lagrangian schemes that are mass conservative, yet not subject to time step constraints, is challenging and is left for future research. \subsection{Discussion on mass conservative correction and stability} \label{sec3} \setcounter{equation}{0} \setcounter{figure}{0} \setcounter{table}{0} The proposed scheme is not mass conservative.
One possible remedy is a conservative correction procedure, which allows the construction of a conservative scheme starting from a non-conservative one. This approach was first introduced in the context of the BGK model of rarefied gas dynamics by P.~Santagati in his PhD thesis \cite{Santagati07}, and illustrated in a preprint \cite{Russo-Santagati-BGK-11}. Take, for example, a simple linear convection equation in one space dimension, \begin{equation} \pad{f}{t} + \pad{f}{x} = 0, \quad f(x,0) = f^0(x), \label{eq:scalar2} \end{equation} with periodic boundary conditions. \rf{eq:scalar2} is discretized on a spatial grid, $x_i = i\Delta x$, $i=1,\ldots,n$. Following Osher and Shu \cite{ShuOsherEfficient}, we impose that the pointwise value $f_i^n\approx f(x_i,t^n)$ satisfies the equation \[ f^{n+1}_i - f^n_i = -\frac{\hat{F}_{i+1/2}-\hat{F}_{i-1/2}}{\Delta x}, \] where the function $\hat{F}$ is reconstructed at the cell edges from the pointwise values of $F(x_i) = \int_{t^n}^{t^{n+1}} f(x_i, \tau)d\tau$, in the same way that pointwise values of a function $u(x\pm\Delta x)$ can be reconstructed from cell averages $\bar{u}_i$; see \cite{Jiang_Shu} for a detailed description of the WENO reconstruction procedure. Let $(c_\ell,b_\ell)$, $\ell = 1,\ldots,s$ be the nodes and weights of an accurate quadrature formula on the interval $[0,1]$. To approximate $F(x_i)$, one can use the quadrature rule \[ F(x_i) \approx \Delta t \sum_{\ell=1}^{s} b_\ell f(x_i, t^n + c_\ell \Delta t), \] where $f(x_i, t^n + c_\ell \Delta t)$ can be obtained by the characteristics tracing and WENO interpolation described earlier in this section. Such a procedure can be directly extended to two-dimensional problems, including the VP system, where the non-conservative semi-Lagrangian method previously proposed can be used to get the solution at the quadrature points.
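A minimal sketch of this correction for \rf{eq:scalar2}, under simplifying assumptions: the SL solution at the quadrature times is represented by the exact solution $f^0(x-t)$, the 2-point Gauss-Legendre rule on $[0,1]$ (weights $\frac12,\frac12$, so that they sum to one) is used for $F$, and a second order central average stands in for the WENO reconstruction of $\hat{F}$ at the cell edges. The point of the sketch is that the flux-difference form conserves the total mass by telescoping, regardless of the reconstruction used:

```python
import numpy as np

n, L = 128, 2.0 * np.pi
dx = L / n
x = np.arange(n) * dx
f0 = lambda y: np.exp(np.sin(y))     # smooth periodic initial data
dt = 0.05

# 2-point Gauss-Legendre rule on [0, 1]
b = [0.5, 0.5]
c = [0.5 - 0.5 / np.sqrt(3.0), 0.5 + 0.5 / np.sqrt(3.0)]

# F(x_i) = int_{t^n}^{t^{n+1}} f(x_i, tau) dtau, with t^n = 0 and
# f(x_i, tau) = f0(x_i - tau) standing in for the SL/WENO evaluation
F = dt * sum(bl * f0(x - cl * dt) for bl, cl in zip(b, c))

F_hat = 0.5 * (F + np.roll(F, -1))               # edge value at x_{i+1/2}
f1 = f0(x) - (F_hat - np.roll(F_hat, 1)) / dx    # conservative update
```

Here `np.roll` implements the periodic indexing of $\hat{F}_{i\pm 1/2}$; summing the update over $i$, the flux differences telescope and the total mass is preserved to round-off.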
The 2-point Gauss-Legendre quadrature formula, with $b_1 = b_2 = \frac12$ and $c_{1,2}= \frac{1}{2}\pm\frac{1}{2\sqrt{3}}$, is found to be a good choice with good stability properties. On the other hand, such a conservative correction is subject to a time step constraint related to the spatial mesh size, similar to that of the Eulerian approach, arising from the spatial interpolation and reconstruction procedures. As a result, the advantage of using larger time steps in a SL method is lost. Investigating and improving this stability constraint is left for future research. \section{Introduction} \label{sec1} \setcounter{equation}{0} \setcounter{figure}{0} \setcounter{table}{0} This paper focuses on a high order truly multi-dimensional semi-Lagrangian (SL) approach for Vlasov-Poisson (VP) simulations. Arising from collisionless plasma applications, the VP system, \begin{equation} \frac{\partial f}{\partial t} + {\bf v} \cdot \nabla_{\bf x} f + \mathbf{E}({\bf x},t) \cdot \nabla_{\bf v} f = 0, \label{eq: vlasov} \end{equation} and \begin{equation} \mathbf{E}(\mathbf{x},t)=-\nabla_{\bf x}\phi(\mathbf{x},t),\quad -\Delta_{\bf x}\phi(\mathbf{x},t)=\rho(\mathbf{x},t)-1,\label{eq: poisson} \end{equation} describes the temporal evolution of the particle distribution function in six-dimensional phase space. Here $f( {\bf x},{\bf v},t)$ is the probability distribution function, which describes the probability of finding a particle with velocity $\bf{v}$ at position $\bf{x}$ at time $t$, $\bf{E}$ is the electric field, and $\phi$ is the self-consistent electrostatic potential. The probability distribution function couples to the long range fields via the charge density, $\rho({\bf x},t) = \int_{\mathbb{R}^3} f({\bf x},{\bf v},t)d{\bf v}$, where we take the limit of uniformly distributed infinitely massive ions in the background. In this paper, we consider the VP system with 1-D in ${\bf x}$ and 1-D in ${\bf v}$. Many different approaches have been proposed for VP simulations.
There are the Lagrangian particle-in-cell (PIC) methods, which have been very popular in practical high dimensional simulations due to their relatively low computational cost \cite{friedman1991multi, jacobs2009implicit, heikkinen2008full}. However, the Lagrangian particle approach is known to suffer from statistical noise of order $1/\sqrt{N}$, where $N$ is the number of particles in a simulation. There are very high order Eulerian finite difference \cite{zhou2001numerical}, finite volume \cite{banks2010new}, and finite element discontinuous Galerkin methods \cite{heath2010discontinuous, cheng2011positivity}. Eulerian methods can be designed to be highly accurate in both space and time, and are thus able to resolve complicated solution structures efficiently on a relatively coarse numerical mesh. However, they are subject to CFL time step restrictions. There is the dimensionally split SL approach originally proposed in \cite{cheng}, and further developed in the finite volume \cite{FilbetSB, sonnendruecker, begue1999two, besse2003semi, crouseilles2010conservative}, finite difference \cite{carrillo2007nim, Qiu_Christlieb, Qiu_Shu2}, and finite element discontinuous Galerkin frameworks \cite{Qiu_Shu_DG,rossmanith2011positivity}, as well as in a hybrid finite difference-finite element framework \cite{Guo_Qiu}. The semi-Lagrangian framework allows for extra large numerical time steps compared with the Eulerian approach, leading to savings in computational cost. The dimensional splitting allows for a very simple implementation of the characteristics tracing; however, it introduces a second order operator splitting error in time. For convergence estimates of semi-Lagrangian methods for VP simulations, we refer to \cite{charles2013enhanced}. If the splitting is not performed properly, numerical instabilities are observed \cite{huot2003instability}.
In \cite{Guo_Qiu2}, an integral deferred correction method is proposed for the dimensionally split SL approach to reduce the splitting error. In this paper, we propose a high order truly multi-dimensional SL finite difference approach for solving the VP system. Here `truly multi-dimensional' means that no operator splitting is involved. The difficulty lies in tracing characteristics with high order temporal accuracy within a time step. In particular, the evolution of the characteristics is driven by the electric field induced by the unknown particle distribution function $f$ in the Vlasov equation \eqref{eq: vlasov}. A high order two-stage multi-derivative predictor-corrector algorithm is proposed to build a high order characteristic-tracing algorithm upon lower order ones, with the help of moment equations of the VP system. A high order WENO interpolation is proposed to recover information among grid points. The proposed algorithm is of high order accuracy in both space and time. However, it does not conserve mass; we discuss this issue as well as the computational cost of the proposed algorithm. The paper is organized as follows. Section~\ref{sec2} describes the high order SL finite difference approach without operator splitting. High order ways of tracing characteristics are proposed and analyzed, and issues related to computational cost and mass conservation are discussed. Section~\ref{sec4} presents numerical simulation results. Finally, the conclusion is given in Section~\ref{sec5}. \section{Numerical tests: the Vlasov-Poisson system} \label{sec4} In this section, we examine the performance of the proposed truly multi-dimensional semi-Lagrangian method for the VP system. A periodic boundary condition is imposed in the $x$-direction, while a zero boundary condition is imposed in the $v$-direction. We recall several norms for the VP system below, which should remain constant in time.
\begin{enumerate} \item $L^p$ norm, $1\leq p<\infty$: \begin{equation} \|f\|_p=\left(\int_v\int_x|f(x,v,t)|^pdxdv\right)^\frac1p. \end{equation} \item Energy: \begin{equation} \text{Energy}=\int_v\int_xf(x,v,t)v^2dxdv + \int_xE^2(x,t)dx, \end{equation} where $E(x,t)$ is the electric field. \item Entropy: \begin{equation} \text{Entropy}=\int_v\int_xf(x,v,t)\log(f(x,v,t))dxdv. \end{equation} \end{enumerate} Tracking the relative deviations of these quantities numerically is a good measure of the quality of a numerical scheme. The relative deviation is defined to be the deviation away from the corresponding initial value divided by the magnitude of the initial value. In our numerical tests, we let the time step size $\Delta t = CFL \cdot \min(\Delta x/v_{max}, \Delta v/\max(E))$, where $CFL$ is specified for different runs; and we let $v_{max} = 6$ to minimize the error from truncating the domain in the $v$-direction. We first present the example of the two stream instability. In this example, we will demonstrate (1) the high order spatial and temporal accuracy of the proposed schemes; (2) the time evolution of the overall mass and other theoretically conserved physical norms for the proposed method; (3) the performance of the proposed method in resolving solution structures. \begin{exa} Consider the two stream instability \cite{FilbetS}, with an unstable initial distribution function: \begin{equation} f(x,v,t=0)=\frac{2}{7\sqrt{2\pi}}(1+5v^2)(1+\alpha((\cos(2kx)+\cos(3kx))/1.2+ \cos(kx)))\exp(-\frac{v^2}{2}) \end{equation} with $\alpha=0.01$, $k=0.5$; the length of the domain in the $x$ direction is $L=\frac{2\pi}{k}$, and the background ion distribution function is fixed, uniform and chosen so that the total net charge density of the system is zero. \end{exa} We test both the spatial and the temporal convergence of the proposed truly multi-dimensional semi-Lagrangian method.
We first test the spatial convergence by using a sequence of meshes with $n_x = n_v = \{210, 126, 90, 70\}$. The meshes are designed so that the coarse mesh grids coincide with part of the reference fine mesh grid ($n_x=n_v=630$). We set $CFL=0.01$ so that the spatial error is the dominant error. Table~\ref{tab: spa_2stream} is the spatial convergence table for the proposed scheme with sixth order WENO interpolation. The expected fifth order convergence globally in time is observed. We then test the temporal convergence of the proposed first, second and third order schemes. Table~\ref{tab: tem_2stream} provides the temporal convergence rates for the schemes with first to third order temporal accuracy. We use the sixth order WENO interpolation and a spatial mesh of $Nx = Nv=160$, so that the temporal error is the dominant error. The expected first, second and third order temporal accuracy is observed. In Table~\ref{tab: tem_2stream}, the time step size is about $6$ to $10$ times that of an Eulerian method, yet highly accurate numerical results are achieved. To compare the performance of schemes with different temporal orders, we numerically track the time evolution of the physically conserved quantities of the system. In our runs, we let $n_x=n_v=128$, $CFL=5$. In Figure~\ref{fig: 2stream_norm}, the time evolution of the numerical $L^1$ norm, $L^2$ norm, energy and entropy for schemes with different orders of temporal accuracy is plotted. In general, higher order temporal accuracy leads to better preservation of those physically conserved norms. The $L^1$ norm is not conserved since our scheme is neither mass conservative nor positivity preserving. In Figure \ref{fig: 2stream}, we show the contour plot of the numerical solution of the proposed SL WENO method with third order temporal accuracy at around $T=53$. The plot is comparable to our earlier work reported in \cite{Qiu_Christlieb, Qiu_Shu2}.
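The `order' columns in Tables~\ref{tab: spa_2stream} and \ref{tab: tem_2stream} are obtained in the standard way, as $\log(e_{\mathrm{coarse}}/e_{\mathrm{fine}})/\log(h_{\mathrm{coarse}}/h_{\mathrm{fine}})$ with $h \propto 1/n_x$; e.g., for the spatial study:

```python
import math

def observed_orders(ns, errors):
    """Observed convergence order between successive resolutions,
    with mesh size h proportional to 1/n."""
    return [math.log(e0 / e1) / math.log(n1 / n0)
            for (n0, e0), (n1, e1) in zip(zip(ns, errors),
                                          zip(ns[1:], errors[1:]))]

# L^1 errors from the spatial convergence study (two stream instability)
ns = [70, 90, 126, 210]
errs = [7.01e-7, 2.06e-7, 3.96e-8, 3.20e-9]
orders = observed_orders(ns, errs)   # close to the tabulated 4.88, 4.89, 4.95
```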
\begin{table}[htb] \begin{center} \bigskip \begin{tabular}{|c | c c|} \hline \cline{1-3} $Nx \times Nv$ &$L^1$ error & order \\ \hline {$70\times 70$} &7.01E-7 & -- \\ \hline {$90\times 90$} &2.06E-7&4.88\\ \hline {$126\times 126$} &3.96E-8&4.89\\ \hline {$210\times 210$} &3.20E-9&4.95\\ \hline \end{tabular} \end{center} \caption{Order of accuracy in space for the SL WENO schemes: two stream instability. The scheme uses sixth order WENO interpolation and third order temporal accuracy in tracing characteristics. $T=1$ and $CFL=0.01$.} \label{tab: spa_2stream} \end{table} \begin{table}[htb] \begin{center} \bigskip \begin{tabular}{|c | c c|c c|c c|} \hline \cline{1-7} &\multicolumn{2}{c|}{first order} &\multicolumn{2}{c|}{second order} &\multicolumn{2}{c|}{third order} \\ \hline \cline{1-7} $CFL$& $L^1$ error&order& $L^1$ error&order& $L^1$ error&order\\ \hline 6 & 1.17E-4& -- &2.40E-6 & -- & 1.13E-7&--\\ \hline 7 & 1.40E-4&1.13 & 2.80E-6 & 2.04 & 1.79E-7&3.02\\ \hline 8 & 1.63E-4& 1.16 & 3.69E-6 &2.07 & 2.69E-7&3.02\\ \hline 9 & 1.87E-4& 1.16 & 4.69E-6 &2.04 & 3.84E-7&3.03\\ \hline 10 & 2.12E-4& 1.20 & 5.84E-6 &2.08 & 5.31E-7&3.06\\ \hline \end{tabular} \end{center} \caption{Order of accuracy in time for the SL WENO schemes with sixth order WENO interpolation and various orders of temporal accuracy. Two stream instability. $Nx = Nv=160$ and $T=5$.} \label{tab: tem_2stream} \end{table} \begin{figure}[htb] \begin{center} \includegraphics[height=2.2in,width=3.0in]{./2stream_l1} \includegraphics[height=2.2in,width=3.0in]{./2stream_l2}\\ \includegraphics[height=2.2in,width=3.0in]{./2stream_energy} \includegraphics[height=2.2in,width=3.0in]{./2stream_entropy} \end{center} \caption{Two stream instability. The SL WENO scheme with sixth order WENO interpolation in space and various orders of temporal accuracy. 
Time evolution of the relative deviations of discrete $L^1$ norms (upper left), $L^2$ norms, kinetic energy norms (lower left) and entropy (lower right).} \label{fig: 2stream_norm} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[height=3.2in,width=4.0in]{./contour_f_T53_order3} \end{center} \caption{Two stream instability: $T=53$. The SL WENO scheme with the sixth order WENO interpolation and a third order temporal accuracy. The spatial mesh is $128 \times 128$ and $CFL=5$.} \label{fig: 2stream} \end{figure} \begin{exa} Consider weak Landau damping for the Vlasov-Poisson system with initial condition: \begin{equation} \label{landau} f(x,v,t=0)=\frac{1}{\sqrt{2\pi}}(1+\alpha\cos(kx))\exp(-\frac{v^2}{2}), \end{equation} where $\alpha=0.01$. For such a small perturbation magnitude, the VP system can be approximated by linearization around the Maxwellian equilibrium $f^0(v)=\frac{1}{\sqrt{2\pi}}e^{-\frac{v^2}{2}}$. The analytical damping rate of the electric field can be derived accordingly \cite{fried1961plasma}. We compare the numerical damping rates with the theoretical values. We only present the case of $k=0.5$. The spatial computational grid has $n_x=n_v=128$ and $CFL=5$. For the scheme with first, second and third order accuracy in time and sixth order WENO interpolation in space, we plot the evolution of the electric field in the $L^2$ norm, benchmarked against the theoretical values (solid black lines in the figure), in Figure \ref{fig403}. A better match with the theoretical decay rate of the electric field is observed for schemes with second and third order temporal accuracy. The time evolution of discrete $L^1$ norm, $L^2$ norm, kinetic energy and entropy of schemes with different temporal orders is reported in Figure \ref{fig404}. $L^1$ and $L^2$ norms are better preserved by schemes with higher order temporal accuracy. Note that the mass is not exactly preserved. 
Energy and entropy are better preserved by schemes with second and third order accuracy than by the scheme with first order accuracy. \begin{figure} \begin{center} \includegraphics[height=2.2in,width=3.0in]{./weak_damping} \end{center} \caption{Weak Landau damping. Time evolution of the electric field in the $L^2$ norm.} \label{fig403} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[height=2.2in,width=3.0in]{./weak_damping_l1} \includegraphics[height=2.2in,width=3.0in]{./weak_damping_l2}\\ \includegraphics[height=2.2in,width=3.0in]{./weak_damping_energy} \includegraphics[height=2.2in,width=3.0in]{./weak_damping_entropy} \end{center} \caption{Weak Landau damping. The proposed SL WENO scheme with first, second and third order accuracy in time and sixth order WENO interpolation in space. Time evolution of the relative deviations of discrete $L^1$ norms (upper left), $L^2$ norms, kinetic energy norms (lower left) and entropy (lower right).} \label{fig404} \end{figure} \end{exa} \begin{exa} Consider strong Landau damping. The initial condition is equation \eqref{landau}, with $\alpha=0.5$ and $k=0.5$. The evolution of the $L^2$ norm of the electric field is provided in Figure \ref{fig405}, which is comparable to existing results in the literature, see, e.g., \cite{Guo_Qiu}. The time evolution of discrete $L^1$ norm, $L^2$ norm, kinetic energy and entropy is reported in Figure \ref{fig407}. The $L^1$ norm, as expected, is not conserved. Numerical solutions of the proposed scheme at different times are observed to be comparable to those that have been well reported in the literature, e.g. \cite{Qiu_Christlieb, Guo_Qiu} among many others. We therefore omit those figures to save space. \begin{figure} \begin{center} \includegraphics[height=2.2in,width=3.0in]{./strong_damping} \end{center} \caption{Strong Landau damping. Time evolution of the electric field in the $L^2$ norm.} \label{fig405} 
\end{figure} \begin{figure}[htb] \begin{center} \includegraphics[height=2.2in,width=3.0in]{./strong_damping_l1} \includegraphics[height=2.2in,width=3.0in]{./strong_damping_l2}\\ \includegraphics[height=2.2in,width=3.0in]{./strong_damping_energy} \includegraphics[height=2.2in,width=3.0in]{./strong_damping_entropy} \end{center} \caption{Strong Landau damping. The SL WENO scheme with sixth order WENO interpolation in space and various orders of temporal accuracy. Time evolution of the relative deviations of discrete $L^1$ norms (upper left), $L^2$ norms, kinetic energy norms (lower left) and entropy (lower right).} \label{fig407} \end{figure} \end{exa} \begin{exa} Consider the symmetric two stream instability \cite{banks2010new}, with the initial condition \begin{equation} f(x,v,t=0)=\frac{1}{\sqrt{8\pi}v_{th}}\left[\exp\left(-\frac{(v-u)^2}{2v_{th}^2}\right)+\exp\left(-\frac{(v+u)^2}{2v_{th}^2}\right)\right]\big (1+0.0005\cos(kx)\big ) \end{equation} with $u=5\sqrt{3}/4$, $v_{th}=0.5$ and $k=0.2$. The background ion distribution function is fixed, uniform and chosen so that the total net charge density for the system is zero. Figure~\ref{fig: 2stream2_E} plots the evolution of the electric field for the proposed scheme, benchmarked against the growth rate from linear theory, $\gamma = \frac{1}{\sqrt{8}}$; see \cite{banks2010new}. Results consistent with the linear theory are observed. The time evolution of discrete $L^1$ norm, $L^2$ norm, kinetic energy and entropy of schemes with different temporal orders is reported in Figure \ref{fig: 2stream2_norm}. Again, higher order schemes in general perform better in preserving the conserved physical quantities than low order ones. In Figure \ref{fig: 2stream2}, we report numerical solutions from the SL WENO schemes with various orders of temporal accuracy in approximating the distribution solution $f$. It can be observed that, with the same time step size, the higher order schemes (e.g. second and third order ones) perform better than a first order one. 
\begin{figure} \begin{center} \includegraphics[height=2.2in,width=3.0in]{./2stream2} \end{center} \caption{Symmetric two stream instability: time evolution of the electric field in the $L^2$ norm. The SL WENO scheme with sixth order WENO interpolation in space and various orders of temporal accuracy. } \label{fig: 2stream2_E} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[height=2.2in,width=3.0in]{./2stream2_l1} \includegraphics[height=2.2in,width=3.0in]{./2stream2_l2}\\ \includegraphics[height=2.2in,width=3.0in]{./2stream2_energy} \includegraphics[height=2.2in,width=3.0in]{./2stream2_entropy} \end{center} \caption{Symmetric two stream instability. The SL WENO scheme with sixth order WENO interpolation in space and various orders of temporal accuracy. Time evolution of the relative deviations of discrete $L^1$ norms (upper left), $L^2$ norms, kinetic energy norms (lower left) and entropy (lower right).} \label{fig: 2stream2_norm} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[height=3.2in,width=3.0in]{./contour_f_order1} \includegraphics[height=3.2in,width=3.0in]{./contour_f_order1_rerefine} \\ \includegraphics[height=3.2in,width=3.0in]{./contour_f_order2} \includegraphics[height=3.2in,width=3.0in]{./contour_f_order3} \end{center} \caption{Symmetric two stream instability: $T=50$. Results from schemes with first order temporal accuracy with $CFL=5$ (upper left), $CFL=0.1$ (upper right). Results from schemes with second order temporal accuracy (lower left) and third order temporal accuracy (lower right) and $CFL=5$.} \label{fig: 2stream2} \end{figure} \end{exa}
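The damping and growth rates discussed in the examples above are commonly extracted from the $\|E\|_2$ history by fitting a line to the logarithm of its local extrema. A minimal sketch of this procedure on synthetic data (the signal below mimics weak Landau damping at $k=0.5$; it is a stand-in, not output of the scheme):

```python
import numpy as np

def fitted_rate(t, E_norm):
    """Least-squares slope of log(peak values) of an oscillating,
    exponentially damped or growing signal."""
    y = np.log(E_norm)
    # interior local maxima of the signal
    idx = np.where((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:]))[0] + 1
    slope, _ = np.polyfit(t[idx], y[idx], 1)
    return slope

# Synthetic signal: ||E||_2 ~ exp(gamma*t) |cos(omega*t)|, with the
# linear-theory damping rate gamma ~ -0.1533 for k = 0.5.
gamma, omega = -0.1533, 1.4156
t = np.arange(0.0, 40.0, 0.01)
E_norm = np.exp(gamma * t) * np.abs(np.cos(omega * t)) + 1e-300  # avoid log(0)
rate = fitted_rate(t, E_norm)  # recovers gamma up to sampling error
```

Fitting only the peaks avoids the deep dips near the zeros of the oscillation, where the logarithm is dominated by the oscillatory factor rather than the exponential envelope.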
\section{Introduction} Heavy quark effective theory (HQET) has become an established tool in the phenomenology of heavy flavours, with applications expanding from its original domain of exclusive heavy flavour transitions into inclusive decays and heavy quark fragmentation\cite{1}. On the theory side, HQET provides a major example for the framework of effective field theories (EFT). Thus, given a Green function with heavy external momenta $p_i=m_Q v+k_i$ and light momenta $q_j$, the ``matching procedure'' extracts the heavy mass dependence in the form \begin{equation} \label{eq1} G_{QCD}(p_i,q_j;m_Q;\alpha) = \sum_l\frac{1}{m_Q^l} \,C_{(l)}\!\left(\frac{m_Q}{\mu};\alpha\right) G^{(l)}_{ HQET}\left(k_i,q_j;\mu;\alpha\right) , \end{equation} when the $p_i$ are close to their mass-shell. The validity of Eq.~(\ref{eq1}) is established inductively in the number of loops -- i.e. powers of the coupling $\alpha$ -- and to all orders in $l$, where $\alpha$ and the small off-shellness $|k_i|/m_Q$ are independent parameters. In practice, one is not so much interested in the matching of Green functions as in matrix elements between heavy hadrons. In this case the scale of the off-shellness is provided by the theory itself, $|k_i|\sim \Lambda_{QCD}$. Since $\Lambda_{QCD}/m_Q\sim \exp(1/(2\beta_0\alpha(m_Q)))$, the series in power corrections and the number of loops in the analogue of (\ref{eq1}) are organized in terms of a single parameter $\alpha(m_Q)$. Leaving the safe grounds of perturbation theory, one should discuss the presence of power corrections simultaneously with large order, $\alpha^n$, matching corrections to the coefficient functions $C_{(l)}$. In fact, the series of these corrections diverges as $n\rightarrow\infty$ and one source of divergence originates from low momentum regions, which one would like to factor into nonperturbative parameters that appear in power corrections. 
The summation of the divergent series introduces ambiguities of the same order of magnitude as these nonperturbative parameters, which must therefore also be ambiguous. This divergence pattern -- known as renormalons -- and its consequences have a long history\cite{2} in the context of the short-distance expansion and QCD sum rules. In this talk I discuss the renormalon phenomenon in HQET and its (ir)relevance for phenomenology\cite{3,4}. \section{Renormalon Structure of HQET} Investigations of large orders in perturbation theory naturally have to resort to some kind of approximation. Since renormalons are associated with the integration over logarithms provided by vacuum polarizations, some insight can be obtained from the restriction to the class of diagrams generated from insertion of a chain of fermion bubbles into the low order diagrams. Taking Borel transforms and defining $u=-\beta_0 t$, factorial divergence of perturbative series in $\alpha$ translates into singularities of their Borel transforms in $u$. Singularities at positive $u$ imply non-Borel summability and an ambiguity in the definition of the sum of the original divergent series. In the following, $\overline{MS}$ renormalization in QCD and HQET will be assumed, though $m_Q$ need -- and should -- not coincide with the renormalized mass $m$ of the heavy quark. The general structure of the Borel transformed version of Eq.~(\ref{eq1}) can be described as follows: The Green functions $G^{(l)}_{HQET}$ in HQET (with operator insertions) are power-like divergent. Explicit power divergences are absent in dimensional regularization, but they do not disappear without a trace in $\overline{MS}$. Subtractions are such that they leave divergent series expansions with non-summable ultraviolet (UV) renormalons at positive half-integer $u$. 
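For orientation, the definitions behind these statements can be summarized as follows (a textbook sketch in the conventions used here, not a result specific to this talk):

```latex
% Borel transform of a perturbative series and the ambiguity induced by
% a renormalon pole, in the conventions u = -\beta_0 t used above.
\begin{equation*}
  B[R](t) = \sum_{n\geq 0} \frac{r_n}{n!}\,t^n
  \quad\mbox{for}\quad
  R \sim \sum_{n\geq 0} r_n\,\alpha^{n+1},
  \qquad
  R \stackrel{?}{=} \int_0^\infty \mbox{d}t\; e^{-t/\alpha}\,B[R](t).
\end{equation*}
A simple pole of $B[R]$ at $u=u_0>0$ obstructs the Borel integral;
deforming the contour above or below the pole changes the result by
\begin{equation*}
  \delta R \sim e^{\,u_0/(\beta_0\alpha(\mu))}
  \sim \left(\frac{\Lambda_{QCD}^2}{\mu^2}\right)^{u_0},
\end{equation*}
i.e. a power correction of order $(\Lambda_{QCD}/\mu)^{2u_0}$. In
particular, a singularity at $u=1/2$ corresponds to an ambiguity of
order $\Lambda_{QCD}$.
```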
It is natural to associate this divergence with the integration over large internal momenta, $k\gg \mu$, though this is not straightforward, because the integrals are defined by analytic continuation. The coefficient functions, $C_{(l)}$, have singularities at positive half-integers, too, which stem from small internal momenta, $k\ll \mu$. Infrared (IR) renormalons in coefficient functions cancel with the UV renormalons -- up to the singularities present already on the l.h.s. of Eq.~(\ref{eq1}). This cancellation takes place over different orders in the expansion in $1/m_Q$. Thus, if this expansion is truncated at a certain order, summation of the perturbative series produces an ambiguous result, which is removed only by including higher orders in $1/m_Q$. As an illustration, consider the matching of the inverse propagator of a heavy quark within the above approximation. The first two terms of the heavy quark expansion are given by \begin{equation} \label{eq2} S^{-1}(p,m;u) = m_Q\left(\frac{m}{\mu};u\right) - m_{pole}\left(\frac{m}{\mu};u\right)+ C\left(\frac{m_Q}{\mu};u \right) \star \left(vk\delta(u)-\Sigma_{eff}(vk;u)\right) +\ldots \end{equation} The explicit expressions for the ingredients of Eq.~(\ref{eq2}) lead to the conclusions\cite{3}: (a) The pole mass of the heavy quark in the first term on the r.h.s. has an IR renormalon\cite{5} at $u=1/2$ when expressed in terms of $m$. Thus, the pole mass cannot be defined to an accuracy better than $\Lambda_{QCD}$. While this might be expected, the interesting point is that perturbation theory itself indicates its failure through its divergence. As a consequence, when the heavy quark expansion is applied to hadronic parameters, the quantity $\Lambda_{H_Q} \equiv m_{H_Q}-m_{pole}$, defined as the difference between the heavy hadron mass and the pole mass of the quark in the heavy quark limit, contains an ambiguity of order $\Lambda_{QCD}$. 
Note, however, that this ambiguity, though of the same order of magnitude as $\Lambda_{H_Q}$ itself, is not related to bound state effects contained in $\Lambda_{H_Q}$, but can be traced to the long range part of the Coulomb field of the quark. Thus, the effect is universal and obviously cancels in mass differences. (b) Off mass-shell, the l.h.s. of Eq.~(\ref{eq2}) is non-singular at $u=1/2$. As anticipated from the general discussion, for $k\not=0$ the IR renormalon at $u=1/2$ in the pole mass cancels exactly against a UV renormalon at this position in the self-energy of the static quark, $\Sigma_{eff}$, computed from the leading term in the HQET Lagrangian, $\bar{h}_v v\cdot D h_v$. This UV renormalon arises, since, in contrast with full QCD, the self-energy of the static quark is linearly UV divergent. This is nothing but the linear divergence of a static point charge known from classical electrodynamics, which reappears in HQET, where the quark mass is considered larger than the UV cutoff. (c) To reproduce the r.h.s. of Eq.~(\ref{eq2}) from HQET without a residual mass term, the first term must vanish and the expansion parameter $m_Q$ has to coincide with the pole mass. This artificially destroys the cancellation of renormalon poles, parts of which become hidden in the expansion parameter, which is then not defined beyond perturbation theory. From this point of view it is conceptually advantageous to use the freedom to add a small residual mass term $-\delta m \bar{h}_v h_v$ to the effective Lagrangian, such that both $\Sigma_{eff}$ computed from HQET with residual mass and the expansion parameter $m_Q=m_{pole}-\delta m$ are formally free from an ambiguity due to a renormalon at $u=1/2$. 
This can be accomplished either by $\delta m\propto \mu \sum c_n\alpha(\mu)^n$ $(\mu\ll m_Q)$ with $c_n$ adjusted to the UV renormalon divergence, or by a formally ambiguous $\delta m\propto \mu\exp(1/(2\beta_0\alpha(\mu)))$, adjusted to compensate the ambiguities of the Borel sums. \vspace*{0.1cm} \section{Implications} Exclusive heavy flavour decays are governed by matrix elements of the weak current between heavy hadron states. HQET is particularly effective in restricting the number of independent form factors in the infinite mass limit and parameterizing the corrections to this limit. These corrections involve new nonperturbative form factors and typically the ratio $\Lambda_{H_Q}/m_Q$, which controls the size of these corrections. Since physical quantities must be unambiguous, the ambiguity in the definition of $\Lambda_{H_Q}$ implies that the matching corrections in leading order diverge (with an IR renormalon at $u=1/2$), such that the ambiguity of their sum compensates the ambiguity in $\Lambda_{H_Q}$, which has been inferred from the pole mass. It is easy to see that an IR renormalon at $u=1/2$ will indeed occur. The leading order matching corrections are conveniently calculated by comparing the current insertions between on-shell quark states in the full and the effective theory. In the IR, the integrals behave like $\mbox{d}^4 k/k^4$, but the coefficient is the same in the full and the effective theory and the logarithmic IR divergence cancels as it must. The next term in the expansion for small $k$, $\mbox{d}^4 k/k^3$, is different, however. Although this region gives a small and finite contribution to the coefficient function in first order, it is greatly amplified by large powers of logarithms, $\ln^n k^2/\mu^2$, in higher orders, which produces the required divergence of the series. Thus, the structure of the heavy quark expansion is conceptually quite similar to the short distance expansion\cite{2}. 
Power corrections must be added with care, since the summation of perturbative corrections, which is never performed in practice, can produce effects of the same order. In the particular case of $\Lambda_{H_Q}$ the situation might be more favorable phenomenologically. This parameter contains the effect of the light spectators in the heavy hadron, which appears first at this order and which is clearly not related to renormalon ambiguities. Given the large value $\Lambda_P\approx 500\,$MeV, favored for pseudoscalar mesons, one may argue that it is dominated by the spectator contribution, so that renormalon effects may be neglected in comparison. EFT calculations are most conveniently done in $\overline{MS}$. Since loop integrations run unrestricted over all momenta, renormalons inevitably appear in the matching corrections. Alternatively, one might imagine cutting the low momentum region explicitly from the Feynman integrals, absorbing them into nonperturbative parameters in higher orders of the $1/m_Q$-expansion. Disregarding the practical difficulties of this procedure, there is a definite drawback: The nonperturbative parameters are no longer universal and therefore useless (beyond a certain accuracy). However, strictly within the framework of EFT, where nonperturbative effects are not calculated but parameterized, the renormalon phenomenon never constitutes a difficulty. Indeed, if one accepts the assumption that IR renormalons in the coefficient functions are in one-to-one correspondence with (ambiguities of) nonperturbative parameters, one may eliminate these parameters up to a certain order in $1/m_Q$ in favor of physical quantities and thus obtain predictions for other observables entirely in terms of measured quantities (up to a certain order in $1/m_Q$). Then, the relation between measurable quantities will always be free from renormalons up to renormalons corresponding to a still higher order in $1/m_Q$. 
(Depending on the definition of the coupling, it might be necessary to eliminate the coupling as well.) The significance of renormalons appears in two respects: First, when one attempts to calculate the subleading nonperturbative parameters such as $\Lambda_{H_Q}$, e.g. from QCD sum rules or lattice gauge theory. In the latter case, the difficulty is rather profound and appears as explicit power divergences in the lattice spacing that require ``nonperturbative subtractions''\cite{6}. Second, the structure of renormalons serves as a check that IR effects are indeed correctly parameterized by matrix elements of higher dimensional operators. As an example, consider the semileptonic decay width for a $B$ meson. To leading order in $1/m_b$, the width is naturally proportional to $G_F^2 m_{b,pole}^5$. Operator product expansion and HQET predict corrections to the free quark decay starting\cite{7} from $1/m_b^2$, in apparent conflict with an ambiguity of order $\Lambda_{QCD}$ from the IR region in the pole mass. In this case it turns out that the renormalon in the radiative corrections to the free quark decay cancels exactly against the one hidden in the pole mass, when the pole mass is eliminated in favor of a mass parameter that is not sensitive to the Coulomb tail of the self-energy\cite{4,5}, implying consistency with the $1/m_b$-expansion. \vspace*{0.1cm} \bibliographystyle{unsrt}
\section{Introduction} Given the present evidence for a heavy top quark, it is of interest to study in greater detail the phenomenological implications of the infrared fixed point predictions for the top quark mass. The low energy fixed point structure of the Renormalization Group (RG) equation of the top quark Yukawa coupling is associated with large values of this coupling at the high energy scale, which, however, remain in the range of validity of perturbation theory \cite{IR}. Within the Minimal Supersymmetric Standard Model (MSSM) \cite{IR2}, \cite{Dyn}, the infrared fixed point structure determines the value of the top quark mass as a function of $\tan \beta = v_2/v_1$, the ratio of the two Higgs vacuum expectation values. In fact, for a range of high energy values of the top quark Yukawa coupling, such that it can reach its perturbative limit at some scale $M_X = 10^{14}-10^{19}$ GeV, the value of the physical top quark mass is focused into the range $M_t =$ (190--210) GeV $\times \sin \beta$, where the variation in $M_t$ is mainly due to a variation in the value of the strong gauge coupling, $\alpha_3(M_Z) =$ 0.11--0.13. There is also a small dependence of the infrared fixed point prediction on the supersymmetric spectrum, which, however, comes mainly through the dependence on the spectrum of the running of the strong gauge coupling. Moreover, considering the MSSM with unification of gauge couplings at a grand unification scale $M_{GUT}$ \cite{DGR}, the value of the strong gauge coupling is determined as a function of the electroweak gauge couplings, while its dependence on the SUSY spectrum can be characterized by a single effective threshold scale $T_{SUSY}$ \cite{LP}-\cite{CPW}. Thus, the stronger dependence of the infrared fixed point prediction on the SUSY spectrum can be parametrized through $T_{SUSY}$. 
There is also an independent effect coming from supersymmetric threshold corrections to the Yukawa coupling, which, for supersymmetric particle masses smaller than or of the order of 1 TeV, may change the top quark mass predictions by a few GeV, but without changing the physical picture \cite{Wright}. The infrared fixed point structure is independent of the particular supersymmetry breaking scheme under consideration. On the contrary, since the Yukawa couplings -- especially if they are strong -- affect the running of the mass parameters of the theory, once the infrared fixed point structure is present, it plays a decisive role in the resulting (s)particle spectrum of the theory, its predictive power being of course dependent on the number of initial free independent soft SUSY breaking parameters. In addition, to assure a proper breakdown of the electroweak symmetry, one needs to impose conditions on the low energy mass parameters appearing in the scalar potential. Indeed, the condition of a proper radiative SU(2)$_L$ $\times$ U(1)$_Y$ breaking, together with the top quark Yukawa coupling infrared fixed point structure, yields interesting correlations among the free high energy mass parameters of the theory, which then translate into interesting predictions for the Supersymmetric (SUSY) spectrum \cite{COPW} - \cite{Gun}. Such correlations depend, however, on the exact soft supersymmetry breaking scheme. In particular, in the minimal supergravity model, in which common masses for all the scalars and gaugino masses at the high energy scale are considered, it follows that, once the value of the top quark mass is given, the whole spectrum is determined as a function of two parameters \cite{COPW},\cite{CW1}. In models in which the universality condition for the high energy mass parameters is relaxed, the predictions derived from the infrared fixed point structure are, instead, weaker. 
Nevertheless, the infrared fixed point structure always implies an effective reduction by two in the number of free parameters of the theory. The infrared fixed point of the top quark mass is interesting in itself, due to the many properties associated with its behaviour. Moreover, it has been recently observed in the literature that the condition of bottom-tau Yukawa coupling unification in minimal supersymmetric grand unified theories requires large values of the top quark Yukawa coupling at the unification scale \cite{LP}-\cite{CPW}, \cite{Ramond}-\cite{BABE}. Most appealingly, in the low and moderate $\tan \beta$ regime, for values of the gauge couplings compatible with recent predictions from LEP and for the experimentally allowed values of the bottom mass, the conditions of gauge and bottom--tau Yukawa coupling unification predict values of the top quark mass within 10$\%$ of its infrared fixed point values \cite{LP},\cite{BCPW}. In section 2 we concentrate on the infrared fixed point structure of the Yukawa couplings. In section 3 we present the evolution of the mass parameters of the theory in the interesting region of low values of $\tan\beta$, to which we shall restrict ourselves for the present study. Our analysis considers both universal and non--universal boundary conditions for the soft supersymmetry breaking scalar mass parameters at the grand unification scale. In section 4 we investigate the theoretical constraints associated with a proper breakdown of the electroweak symmetry and the requirement of stability of the effective potential by avoiding possible color breaking minima. Complementing the above constraints with the properties of the top quark infrared fixed point structure, we define the allowed low energy mass parameter space as a function of their high energy values. In section 5 we present the results of the above analysis translated into predictions for the Higgs and supersymmetric particle spectra. 
In section 6, a discussion of the precision data variables to be analysed in the present work is presented. The results for the experimental variables as a function of the supersymmetric spectrum are analysed in section 7. In section 8 we analyse the correlations between the different experimental variables and their phenomenological implications. We reserve section 9 for our conclusions. \section{ Infrared Fixed Point Structure} In the Minimal Supersymmetric Standard Model, with unification of gauge couplings at some high energy scale $M_{GUT} \simeq 10^{16}$ GeV, the infrared fixed point structure of the top quark Yukawa coupling may be easily analyzed, in the low and moderate $\tan \beta $ regime ($1 \leq \tan \beta < 10$), for which the effects of the bottom and tau Yukawa couplings are negligible. Indeed, an exact solution for the running top quark Yukawa coupling may be obtained \cite{Ibanez},\cite{Savoy2} in this regime, \begin{equation} Y_t(t) = \frac{ 2 \pi Y_t(0) E(t)}{ 2 \pi + 3 Y_t(0) F(t)} , \end{equation} where $E$ and $F$ are functions of the gauge couplings, \begin{equation} E = (1 + \beta_3 t)^{16/3b_3} (1 + \beta_2 t)^{3/b_2} (1 + \beta_1 t)^{13/9b_1}, \;\;\;\;\;\;\;\;\;\;\;\; F= \int_{0}^t E(t') dt', \label{eq:topYuk} \end{equation} $Y_t = h_t^2/4\pi$, $\beta_i = \alpha_i(0) b_i/4\pi$, $b_i$ is the beta function coefficient of the gauge coupling $\alpha_i$ and $t = 2 \log(M_{GUT}/Q)$. For large values of $\tan \beta$, instead, the bottom Yukawa coupling becomes large and, in general, a numerical study of the coupled equations for the Yukawa couplings becomes necessary even at the one loop level. For large values of the top quark Yukawa coupling at high energies, Eq. (\ref{eq:topYuk}) tends to an infrared fixed point value, which is independent of the exact boundary conditions at $M_{GUT}$, namely, \begin{equation} Y_t^{f (Y_t \gg Y_b)}(t) \simeq \frac{2 \pi E(t)}{3 F(t)}. 
\label{eq:IR} \end{equation} For values of the grand unification scale $M_{GUT} \simeq 10^{16}$ GeV, the fixed point value, Eq. (\ref{eq:IR}), is given by $Y_t^f \simeq (8/9) \alpha_3(M_Z)$. Since in this case $F(Q = M_Z) \simeq 300$, the infrared fixed point solution is rapidly reached for a wide range of values of $Y_t(0) \simeq 0.1 - 1$. The fixed point structure for the top quark Yukawa coupling implies an infrared fixed point for the running top quark mass, $m_t(t) = h_t(t) v_2 = h_t(t) v \sin \beta$, with $v^2 = v_1^2 + v_2^2$, \begin{equation} m^{IR}_t(t) = h_f(t) \; v\; \sin\beta = m_t^{IRmax.}(t) \; \sin \beta, \label{eq:mtIR} \end{equation} where we have neglected the slow running of the Higgs vacuum expectation value at low energies. For $\alpha_3(M_Z) = 0.11$ -- 0.13, $m_t^{IRmax.}$ is approximately given by \begin{equation} m^{IRmax.}_t(M_t) \simeq 196 \;\mbox{GeV}\; [1+ 2( \alpha_3(M_Z) - 0.12) ]. \label{eq:mtIRmax} \end{equation} One should remember that there is a significant quantitative difference between the running top quark mass and the physical top quark mass $M_t$, defined as the location of the pole in its two point function. The main source of this difference comes from the QCD corrections, which at the one loop level are given by \begin{equation} M_t = m_t(M_t) \left( 1 + 4 \alpha_3(M_t) /3 \pi \right). \label{eq:Mt} \end{equation} A numerical two loop RG analysis shows the stability of the infrared fixed point under higher order loop contributions \cite{BABE},\cite{CPW}. A similar exact analytical study can be done for the large $\tan \beta$ regime, when the bottom and top Yukawa couplings are equal at the unification scale, by neglecting in a first approximation the effects of the tau Yukawa coupling and identifying the right-bottom and right-top hypercharges. 
The approximate solution for $Y= Y_t \simeq Y_b$ reads \cite{wefour}, \begin{equation} Y(t) = \frac{ 4 \pi Y(0) E(t)}{ 4 \pi + 7 Y(0) F(t)} \end{equation} Then, if the Yukawa coupling is large at the grand unification scale, at energies of the order of the top quark mass it will develop an infrared fixed point value approximately given by \cite{CPW}, \cite{wefour}, \begin{equation} Y_t^{f\;(Y_t=Y_b)}(t) \simeq \frac{4 \pi E(t)}{7 F(t)} \simeq \frac{6}{7} Y_t^{f(Y_t \gg Y_b)}(t). \end{equation} An approximate expression for the fixed point solution may be found also for values of the bottom Yukawa coupling different from the top quark one \cite{CW}. In general, in the large $\tan\beta$ region the bottom quark Yukawa coupling becomes strong and plays an important role in the RG analysis \cite{somp}. There are also possible large radiative corrections to the bottom mass coming from loops of supersymmetric particles, which are strongly dependent on the particular spectrum and are extremely important in the analysis, if unification of bottom and tau Yukawa couplings is to be considered \cite{Hall}--\cite{wefour}. Moreover, in some of the minimal models of grand unification, large $\tan \beta$ values are in conflict with proton decay constraints \cite{AN}. In the special case of tau-bottom-top Yukawa coupling unification the infrared fixed point solution for the top quark mass is not achievable unless a relaxation in the high energy boundary conditions of the mass parameters of the theory is arranged, and it is necessarily associated with a heavy supersymmetric spectrum \cite{nonunl}. In the following we shall concentrate on the low and moderate $\tan \beta$ region. 
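The fixed point formulas of this section are easy to evaluate numerically. The following sketch is our own illustration; the inputs $\alpha_{GUT} \simeq 1/24$, $M_{GUT} = 10^{16}$ GeV, $v = 174$ GeV and the one-loop MSSM coefficients $b_i = (33/5, 1, -3)$ are assumptions of the sketch, not values fixed in the text. It reproduces $F(M_Z)$ of order 300, the relation $Y_t^f \simeq (8/9)\alpha_3(M_Z)$, and a fixed point mass near the quoted range.

```python
import math

# One-loop MSSM beta-function coefficients (b1 in the SU(5) normalization)
# and assumed unification inputs (illustrative values, not fixed by the text).
b1, b2, b3 = 33.0 / 5.0, 1.0, -3.0
alpha_gut = 1.0 / 24.0
M_gut, M_z = 1.0e16, 91.2  # GeV
t_max = 2.0 * math.log(M_gut / M_z)

def E(t):
    """E(t) of the exact one-loop solution, built from the gauge couplings."""
    c = alpha_gut / (4.0 * math.pi)
    return ((1.0 + c * b3 * t) ** (16.0 / (3.0 * b3))
            * (1.0 + c * b2 * t) ** (3.0 / b2)
            * (1.0 + c * b1 * t) ** (13.0 / (9.0 * b1)))

def F(t, n=2000):
    """F(t) = int_0^t E(t') dt', by the trapezoidal rule."""
    h = t / n
    return h * (0.5 * (E(0.0) + E(t)) + sum(E(k * h) for k in range(1, n)))

Ft = F(t_max)                                   # of order 300 at Q = M_Z
Yf = 2.0 * math.pi * E(t_max) / (3.0 * Ft)      # infrared fixed point
alpha3_mz = 1.0 / (1.0 / alpha_gut + b3 * t_max / (4.0 * math.pi))
hf = math.sqrt(4.0 * math.pi * Yf)
mt_run = hf * 174.0                             # running mass with sin(beta) = 1
Mt = mt_run * (1.0 + 4.0 * alpha3_mz / (3.0 * math.pi))  # one-loop pole mass
```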
\section{Evolution of the Mass Parameters} In this work we shall consider soft supersymmetry breaking mass terms for all the scalars and gauginos of the theory, as well as trilinear and bilinear couplings $A_i$ (with i= leptons, up quarks and down quarks) and $B$ in the full scalar potential, which are proportional to the trilinear and bilinear terms appearing in the superpotential. In the framework of minimal supergravity the soft supersymmetry breaking parameters are universal at the grand unification scale. This implies common soft supersymmetry breaking mass terms $m_0$ and $M_{1/2}$ for the scalar and gaugino sectors of the theory, respectively, and a common value $A_0$ ($B_0$) for all trilinear (bilinear) couplings $A_i$ ($B$). In addition, the supersymmetric Higgs mass parameter $\mu$ appearing in the superpotential takes a value $\mu_0$ at the grand unification scale $M_{GUT}$. In the present work we shall consider a more general case, in which the condition of universality of the soft supersymmetry breaking scalar mass parameters is relaxed. We shall, however, assume that SU(5) is a subgroup of the grand unification symmetry group and, hence, we shall keep the relations between the soft supersymmetry breaking mass parameters that preserve the SU(5) symmetry. This implies a common gaugino soft supersymmetry breaking mass parameter, common values for the soft supersymmetry breaking parameters of the right and left handed scalar top quarks, but free, independent values for the two Higgs mass parameters at $M_{GUT}$. The relevance of non--universal soft supersymmetry breaking parameters for the spectrum of the theory in the low $\tan\beta$ regime has been recently emphasized in several works \cite{nonun}. For definiteness, we shall identify all squark and slepton mass parameters with those of the stop. This requirement has little influence on our analysis, which mainly depends on the Higgs, chargino and stop spectra.
Knowing the values of the mass parameters at the unification scale, their low energy values may be specified by their renormalization group evolution \cite{Ibanez}-\cite{BG}, which contains also a dependence on the gauge and Yukawa couplings. In particular, in the low and moderate $\tan \beta$ regime, in which the effects of the bottom and tau Yukawa couplings are negligible, it is possible to determine the evolution of the soft supersymmetry breaking mass parameters of the model as a function of their high energy boundary conditions and the value of the top quark Yukawa coupling at $M_{GUT}$, $Y_t(0)$. Indeed, using Eq. (\ref{eq:IR}) and renaming $Y_t^{f(Y_t \gg Y_b)} = Y_f(t)$, it follows that \begin{equation} \frac{6 Y_t(0) F(t)}{4 \pi} = \frac{Y_t(t)/ Y_f(t)}{ 1 - Y_t(t)/ Y_f(t)}\;, \end{equation} with $Y_t/Y_f = h_t^2/ h^2_f$ the ratio of Yukawa couplings squared at low energies. The above equation allows one to express the boundary condition of the top quark Yukawa coupling as a function of the gauge couplings (through $F$) and the ratio $Y_t / Y_f$ \cite{Ibanez}-\cite{BG}, giving definite predictions for the low energy mass parameters of the model in the limit $h_t \rightarrow h_f$ \cite{COPW}.
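The relation above is trivially inverted, which is convenient for scanning in the ratio $Y_t/Y_f$ rather than in the boundary condition itself (an illustrative sketch; function names are ours, and $F = 300$ is the value quoted for $Q = M_Z$):

```python
import math

def ratio_from_Yt0(Y0, F=300.0):
    # 6 Y_t(0) F / (4 pi) = r / (1 - r)  =>  r = x / (1 + x)
    x = 6.0 * Y0 * F / (4.0 * math.pi)
    return x / (1.0 + x)

def Yt0_from_ratio(r, F=300.0):
    # inverse relation: boundary value needed to reach r = Y_t(t)/Y_f(t)
    return 4.0 * math.pi * r / (6.0 * F * (1.0 - r))
```

The forward and inverse maps are exact inverses of each other, and $Y_t(0) = 1$ already corresponds to a ratio $Y_t/Y_f$ within a percent of the fixed point.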
Thus, considering the limit of small $\tan\beta$, $\tan\beta < 10$, the following approximate analytical solutions are obtained for the case of non--universal parameters at $M_{GUT}$, \begin{eqnarray} m_L^2 & =& m_L^2(0) + 0.52 M_{1/2}^2 \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; m_E^2 = m_E^2(0) + 0.15 M_{1/2}^2 \nonumber \\ \nonumber \\ m_{Q(1,2)}^2 & =& m_{Q(1,2)}^2(0) + 7.2 M_{1/2}^2 \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; m_{U(1,2)}^2 \simeq m_{U(1,2)}^2(0) + 6.7 M_{1/2}^2 \nonumber \\ \nonumber \\ m_D^2 & \simeq & m_D^2(0) + 6.7 M_{1/2}^2 \nonumber \\ \nonumber \\ m_Q^2 &=& 7.2 M_{1/2}^2 + m_Q^2(0) + \frac{\Delta m^2}{3} \nonumber \\ m_U^2 &=& 6.7 M_{1/2}^2 + m_U^2(0) + 2 \frac{\Delta m^2}{3} \label{eq:todas} \end{eqnarray} where E, D and U are the right handed leptons, down-squarks and up-squarks, respectively, L and Q = (T B)$^T$ are the lepton and top-bottom left handed doublets, and $m_{\eta}^2$, with $\eta=E,D,U,L,Q$, are the corresponding soft supersymmetry breaking mass parameters. The subindices (1,2) distinguish the first and second generations from the third, whose mass parameters receive the top quark Yukawa coupling contribution to their renormalization group evolution, singled out in the $\Delta m^2$ term, \begin{eqnarray} \Delta m^2 & = & - \frac{ \left(m^2_{H_2}(0) + m_U^2(0) + m_Q^2(0)\right) }{2} \frac{Y_t}{Y_f} + 2.3 A_0 M_{1/2} \frac{Y_t}{Y_f} \left( 1 - \frac{Y_t}{Y_f} \right) \nonumber\\ & - & \frac{A_0^2}{2} \frac{Y_t}{Y_f} \left( 1 - \frac{Y_t}{Y_f} \right) + M_{1/2}^2 \left[ - 7 \frac{Y_t}{Y_f} + 3 \left( \frac{Y_t}{Y_f} \right)^2 \right] \; .
\label{eq:dm} \end{eqnarray} For the Higgs sector, the mass parameters involved are \begin{equation} m_{H_1}^2 = m_{H_1}^2(0) + 0.52 M_{1/2}^2 \;\;\;\;\; {\rm and} \;\;\;\;\;\; m_{H_2}^2 = m_{H_2}^2(0) + 0.52 M_{1/2}^2 + \Delta m^2\; , \label{eq:m12} \end{equation} which are the soft supersymmetry breaking parts of the mass parameters $m_1^2$ and $m_2^2$ appearing in the Higgs scalar potential (see section 4). Moreover, the renormalization group evolution for the supersymmetric mass parameter $\mu$ reads, \begin{equation} \mu^2 \simeq 2 \mu_0^2 \left( 1 - \frac{Y_t}{Y_f} \right)^{1/2} \; , \label{eq:mu} \end{equation} while the running of the soft supersymmetry breaking bilinear and trilinear couplings gives, \begin{equation} B = B_0 - \frac{A_0}{2} \frac{Y_t}{Y_f} + M_{1/2} \left(1.2 \frac{Y_t}{Y_f} - 0.6 \right), \label{eq:b0} \end{equation} \begin{equation} A_t = A_0 \left(1 - \frac{Y_t}{Y_f} \right) - M_{1/2} \left(4.2 - 2.1 \frac{Y_t}{Y_f} \right), \label{eq:a0} \end{equation} respectively. Eq. (\ref{eq:mu}) gives the RG evolution of the supersymmetric mass parameter $\mu$ appearing in the superpotential. Observe that $\mu$ formally vanishes at low energies in the limit $Y_t \rightarrow Y_f$. However, since $\mu \simeq \sqrt{2} \mu_0 \; (1 - Y_t/Y_f)^{1/4}$, $\mu$ stays of order $\mu_0$ even for all values of $Y_t$ within the range of validity of perturbation theory at high energies, $Y_t(0) \leq 1$ ($Y_t/Y_f \leq 0.995$) \cite{COPW}. The coefficients characterizing the dependence of the mass parameters on the universal gaugino mass $M_{1/2}$ depend on the exact value of the strong gauge coupling. In the above, we have taken the values of the coefficients that are obtained for $\alpha_3(M_Z) \simeq 0.12$. The above analytical solutions are sufficiently accurate for the purpose of understanding the properties of the mass parameters in the limit $Y_t \rightarrow Y_f$.
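The $A_0$-independence of the low energy parameters in the limit $Y_t \rightarrow Y_f$ can be verified directly from Eqs. (\ref{eq:dm})--(\ref{eq:a0}) (a minimal numerical sketch; all inputs below are arbitrary illustrative values in GeV$^2$ or GeV):

```python
def delta_m2(mH2sq0, mUsq0, mQsq0, A0, M12, r):
    # Eq. (dm), with r = Y_t / Y_f
    return (-(mH2sq0 + mUsq0 + mQsq0) / 2.0 * r
            + 2.3 * A0 * M12 * r * (1.0 - r)
            - A0**2 / 2.0 * r * (1.0 - r)
            + M12**2 * (-7.0 * r + 3.0 * r**2))

def mH2_sq(mH2sq0, mUsq0, mQsq0, A0, M12, r):
    # Eq. (m12): m_{H_2}^2 = m_{H_2}^2(0) + 0.52 M_{1/2}^2 + Delta m^2
    return mH2sq0 + 0.52 * M12**2 + delta_m2(mH2sq0, mUsq0, mQsq0, A0, M12, r)

def A_t(A0, M12, r):
    # Eq. (a0)
    return A0 * (1.0 - r) - M12 * (4.2 - 2.1 * r)
```

At $r = 1$ every $A_0$-dependent term is multiplied by $(1 - r)$ and drops out, leaving $A_t \simeq -2.1\, M_{1/2}$ and a $\Delta m^2$ fixed entirely by the scalar boundary conditions and $M_{1/2}$.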
\section{Constraints on the Fixed Point Solutions} The solutions for the mass parameters may be strongly constrained by experimental and theoretical restrictions. The experimental constraints come from the present lower bounds on the supersymmetric particle masses \cite{Partd}. Concerning the theoretical constraints, many of them impose bounds on the allowed space of soft supersymmetry breaking parameters in ways that are model dependent to various degrees. The conditions of stability of the effective potential and a proper breaking of the SU(2)$_L$ $\times$ U(1)$_Y$ symmetry are, instead, basic necessary requirements, which, complemented with the properties derived from the infrared fixed point structure, yield robust correlations among the free parameters of the theory. \subsection{Radiative electroweak symmetry breaking} The Higgs potential of the Minimal Supersymmetric Standard Model may be written as \cite{Dyn}, \cite{CSW}-\cite{HH} \begin{eqnarray} V_{eff} & = & m_1^2 H_1^{\dagger} H_1 + m_2^2 H_2^{\dagger} H_2 - m_3^2 (H_1^T i \tau_2 H_2 + h.c.) \nonumber\\ & + & \frac{\lambda_1}{2} \left(H_1^{\dagger} H_1 \right)^2 + \frac{\lambda_2}{2} \left(H_2^{\dagger} H_2 \right)^2 + \lambda_3 \left(H_1^{\dagger} H_1 \right) \left(H_2^{\dagger} H_2 \right) + \lambda_4 \left| H_2^{\dagger} i \tau_2 H_1^* \right|^2 , \end{eqnarray} with $m_i^2 = \mu^2 + m_{H_i}^2$, $i = 1,2$, and $m_3^2 = B |\mu|$, and where at scales at which the theory is supersymmetric the running quartic couplings $\lambda_j$, with $j = 1 - 4$, must satisfy the following conditions: \begin{equation} \lambda_1 = \lambda_2 = \frac{ g_1^2 + g_2^2}{4} = \frac{M_Z^2}{2\;v^2} ,\;\;\;\;\; \lambda_3 = \frac{g_2^2 - g_1^2}{4},\;\;\;\;\; \lambda_4 = - \frac{g_2^2}{2} = - \frac{M_W^2}{v^2}. \end{equation} Hence, in order to obtain the low energy values of the quartic couplings, they must be evolved using the appropriate renormalization group equations, as was explained in Refs. \cite{CSW}-\cite{Chankowski}.
The mass parameters $m_i^2$, with $i = 1$-$3$, must also be evolved in a consistent way and their RG equations may be found in the literature \cite{Ibanez}-\cite{Savoy2},\cite{Inoue},\cite{OP}. The minimization conditions $\partial V/ \partial H_i |_{<H_i>=v_i} =0$, which are necessary to impose the proper breakdown of the electroweak symmetry, read \begin{equation} \sin(2\beta) = \frac{ 2 m_3^2 }{m_A^2} \label{eq:s2b} \end{equation} \begin{equation} \tan^2\beta = \frac{m_1^2 + \lambda_2 v^2 + \left(\lambda_1 - \lambda_2 \right) v_1^2}{m_2^2 + \lambda_2 v^2}, \label{eq:tb} \end{equation} where $m_A$ is the CP-odd Higgs mass, \begin{equation} m_A^2 = m_1^2 + m_2^2 + \lambda_1 v_1^2 + \lambda_2 v_2^2 + \left( \lambda_3 + \lambda_4 \right) v^2 . \end{equation} Considering the case of negligible stop mixing, and in the low $\tan\beta$ regime, the radiative corrections to the quartic couplings $\lambda_i$, with $i = 1, 3$, are small, while $\Delta \lambda_2 = (3/ 8 \pi^2) h_t^4 \ln (m_{\tilde{t}}^2/m_t^2)$. In this case, the minimization condition Eq. (\ref{eq:tb}) can be rewritten as \cite{CSW}: \begin{equation} \tan^2\beta = \frac{m_1^2 + M^2_Z/2}{m_2^2 + M^2_Z/2 + \Delta \lambda_2 v_2^2}. \label{eq:tb2} \end{equation} Considering the minimization condition, Eq. (\ref{eq:tb}), and the approximate analytical expressions for the mass parameters $m_i$, Eq. (\ref{eq:m12}), the supersymmetric mass parameter $\mu$ is determined as a function of the other free parameters of the theory, \begin{eqnarray} \mu^2 &=& \frac{1}{\tan^2 \beta -1} \left( m_{H_1}^2 - m_{H_2}^2 \tan^2 \beta - \Delta \lambda_2 v_2^2 \tan^2 \beta \right) \nonumber \\ &=& {\cal{F}}( m_{H_1}(0), m_{H_2}(0), m_Q(0), m_U(0), M_{1/2}, A_0, \tan \beta, Y_t /Y_f) . \label{eq:calF} \end{eqnarray} A somewhat more complicated expression is obtained for the case of mixing in the stop sector \cite{HH}. The other minimization condition, Eq.
(\ref{eq:s2b}) also puts restrictions on the soft supersymmetry breaking parameters. It determines the value of the parameter $\delta = B_0 - A_0/2$ as a function of the other parameters of the theory \cite{COPW}. However, as we shall show below, at the fixed point solution the mass parameter $\delta$ is not directly related to the range of possible mass values of the Higgs and supersymmetric particles. \subsection{Properties of the Fixed Point Solution.} The ratio of the top quark Yukawa coupling to its infrared fixed point value may be expressed as a function of the top quark mass and the angle $\beta$, \begin{equation} \frac{Y_t}{Y_f} = \left( \frac{m_t}{m_t^{IRmax}} \right)^2 \frac{1}{\sin^2 \beta}, \label{eq:Y} \end{equation} where the exact value of $m_t^{IRmax.}$, Eq. (\ref{eq:mtIRmax}), depends on the value of the strong gauge coupling considered, and for the experimentally allowed range it varies approximately between 190 and 200 GeV. Depending on the precise value of the running top quark mass $m_t$ and $\tan \beta$, the above equation gives a measure of the proximity to the infrared fixed point solution. In the limit $Y_t \rightarrow Y_f$, the strong correlation between the top quark mass and the value of $\tan\beta$, Eq. (\ref{eq:mtIR}), allows one to reduce by one the number of free parameters of the theory. Moreover, at the infrared fixed point, the expressions for the low energy parameters, Eqs. (\ref{eq:todas})-(\ref{eq:a0}), show that the term $\Delta m^2$ and hence the mass parameters $m_{H_2}^2$, $m_Q^2$ and $m_U^2$ become very weakly dependent on the supersymmetry breaking parameter $A_0$. In fact, the dependence on $A_0$ vanishes in the formal limit $Y_t \rightarrow Y_f$ \cite{COPW}. The only relevant dependence on $A_0$ enters through the mass parameter $B$. Therefore, at the infrared fixed point, there is an effective reduction in the number of free independent soft supersymmetry breaking parameters.
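Eq. (\ref{eq:Y}) is simple enough to evaluate directly (an illustrative sketch; we take $m_t^{IRmax} = 196$ GeV as a default, corresponding to the central $\alpha_3$ value):

```python
def yukawa_ratio(mt_run, tan_beta, mt_irmax=196.0):
    # Eq. (Y): Y_t / Y_f = (m_t / m_t^{IRmax})^2 / sin^2(beta)
    sin2b = tan_beta**2 / (1.0 + tan_beta**2)
    return (mt_run / mt_irmax)**2 / sin2b
```

By construction, a running top mass exactly at its infrared value, $m_t = m_t^{IRmax} \sin\beta$, returns a ratio of one for any $\tan\beta$.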
In fact, the dependence on $B_0$ and $A_0$ of the low energy solutions is effectively replaced by a dependence on the parameter $\delta = B_0 - A_0/2$. Since $B$ is not involved in the RG evolution of the (s)particle masses and the squark and slepton mixing for sparticles other than the top squark is very small, the above implies that at the infrared fixed point the dependence of the Higgs and supersymmetric spectrum on the parameter $A_0$ is negligible \cite{COPW}. Hence, the infrared fixed point structure translates into a net reduction by two in the number of free parameters which are relevant in determining the spectrum of the theory. There is also a very interesting behaviour of the low energy mass parameter combination \begin{equation} M_{UQ}^2 = m_Q^2 + m_U^2 + m_{H_2}^2 \end{equation} at the infrared fixed point. Indeed, the dependence of $M_{UQ}$ on its high energy boundary condition, $M_{UQ}^2(0) = m_Q^2(0) + m_U^2(0) + m_{H_2}^2(0)$, vanishes in the formal limit $Y_t \rightarrow Y_f$. It follows that the infrared fixed point structure of the top quark Yukawa coupling yields an infrared fixed point for the soft supersymmetry breaking parameter $A_t$ as well as for the combination $M_{UQ}^2$. Summarizing, for a given value of the physical top quark mass, the running top quark mass is fixed and then at the infrared fixed point Eq. (\ref{eq:Y}) fixes $\sin \beta$. Due to the strong correlation of the top quark mass with $\tan\beta$ and the independence of the spectrum on the parameter $A_0$, for a given top quark mass the Higgs and supersymmetric particle spectrum is completely determined as a function of only the high energy boundary conditions for the scalar and gaugino mass parameters.
It is then possible to perform a scan over all the possible values of $m_Q(0)$ ($m_Q(0) \equiv m_U(0)$), $m_{H_1}(0)$, $m_{H_2}(0)$ and $M_{1/2}$, bounding the squark masses to be, for example, below 1 TeV, so that the whole allowed parameter space for the Higgs and superparticle masses may be studied. In the following we shall study different boundary conditions for the soft supersymmetry breaking mass parameters, concentrating on those which may yield interesting features for the low energy spectrum. In particular, we shall also consider the case in which all soft supersymmetry breaking scalar masses acquire a common value at the high energy scale, which gives an extremely predictive framework with only two parameters determining the whole Higgs and supersymmetric spectrum. \subsection{Color breaking minima} There are several conditions which need to be fulfilled to ensure the stability of the electroweak symmetry breaking vacuum. In particular, one should check that no charge or color breaking minima are induced at low energies. A well known condition for the absence of color breaking minima is given by the relation \cite{ILEK} \begin{equation} A_t^2 \leq 3 M_{UQ}^2 + 3 \mu^2. \end{equation} At the fixed point, however, since $A_t \simeq -2.1 M_{1/2}$ and $M_{UQ}^2 \simeq 6 M_{1/2}^2$, this relation is trivially fulfilled \cite{Nir}, \cite{CW1}. For values of $\tan\beta$ close to one, large values of $\mu$ are induced and a more appropriate relation is obtained by looking for possible color breaking minima in the direction $\langle H_2 \rangle \simeq \langle H_1 \rangle$ and $\langle Q \rangle \simeq \langle U \rangle$ \cite{Drees},\cite{Nir}.
The requirement of stability of the physically acceptable vacuum implies the following sufficient condition \cite{CW1}, \begin{equation} \left( A_t - \mu \right)^2 \leq 2 \left( m_Q^2 + m_U^2 \right) + \tilde{m}_{12}^2 \label{eq:cond1} \end{equation} where $\tilde{m}_{12}^2 = \left( m_1^2 + m_2^2 \right) (\tan\beta - 1)^2 / (\tan^2\beta + 1 ) $. If Eq. (\ref{eq:cond1}) is not fulfilled, a second sufficient condition is given by \begin{equation} \left[ \left( A_t - \mu \right)^2 - 2 \left( m_Q^2 + m_U^2 \right) - \tilde{m}_{12}^2 \right]^2 \leq 8 \left( m_Q^2 + m_U^2 \right) \tilde{m}_{12}^2. \label{eq:cond2} \end{equation} The above relations, Eqs. (\ref{eq:cond1}) and (\ref{eq:cond2}), are sufficient conditions since they ensure that a color breaking minimum lower than the trivial one does not develop in the theory. If the above conditions are violated, a necessary condition to avoid the existence of a color breaking minimum lower than the physically acceptable one is given by \begin{equation} V_{col} \geq V_{ph} \label{eq:cond3} \end{equation} with \begin{equation} V_{col} = \frac{(A_t - \mu)^2 \alpha_{min}^2}{ h_t^2 (2 \alpha_{min}^2 + 1 )^3 } \left[ ( m_Q^2 + m_U^2) - 2 \tilde{m}_{12}^2 \alpha_{min}^4 \right], \end{equation} \begin{equation} V_{ph} = - \frac{M_Z^4}{2 ( g_1^2 + g_2^2)} \cos^2 (2\beta), \end{equation} and \begin{equation} \alpha_{min}^2 = \left[(A_t - \mu)^2 - 2 (m_Q^2 + m_U^2) - \tilde{m}_{12}^2 \right]/(4 \tilde{m}_{12}^2). \end{equation} For some of the non-universal conditions one may consider, the right handed stop supersymmetry breaking mass parameter $m_U^2$ can acquire negative values. In this case a color breaking minimum may develop in the direction $\langle U \rangle \neq 0$. The value of the tree level potential at this minimum would be \begin{equation} V_{U} = - \frac{9}{8 g_1^2} m_U^4, \;\;\;\;\;\;\;\;\;\; (m_U^2 < 0) \label{eq:minimu} \end{equation} which should be higher than $V_{ph}$ in order to avoid an unacceptable vacuum state.
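The two sufficient conditions, Eqs. (\ref{eq:cond1}) and (\ref{eq:cond2}), combine into a simple yes/no test (a sketch for illustration; all mass-squared inputs are in GeV$^2$ and the sample values in the checks are arbitrary):

```python
def m12_tilde_sq(m1sq, m2sq, tan_beta):
    # tilde m_12^2 = (m_1^2 + m_2^2)(tan beta - 1)^2 / (tan^2 beta + 1)
    return (m1sq + m2sq) * (tan_beta - 1.0)**2 / (tan_beta**2 + 1.0)

def no_color_breaking(At, mu, mQsq, mUsq, m1sq, m2sq, tan_beta):
    # True if either sufficient condition, Eq. (cond1) or Eq. (cond2), holds
    mt12 = m12_tilde_sq(m1sq, m2sq, tan_beta)
    lhs = (At - mu)**2 - 2.0 * (mQsq + mUsq) - mt12
    if lhs <= 0.0:                                  # Eq. (cond1)
        return True
    return lhs**2 <= 8.0 * (mQsq + mUsq) * mt12     # Eq. (cond2)
```

A negative return value only means the sufficient conditions fail; the necessary condition, Eq. (\ref{eq:cond3}), must then be checked explicitly.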
For low values of $\tan\beta \leq 1.5$, the range of parameters leading to a negative value of $m_U^2$ is automatically excluded, either because it leads to values of the lightest CP-even Higgs mass which are experimentally excluded (particularly for $\mu < 0$), or because it leads to tachyons in the stop sector or is in conflict with the absence of color breaking minima in the other directions analysed before, Eqs. (\ref{eq:cond1})--(\ref{eq:cond3}). For larger values of $\tan\beta$ and negative values of $\mu$, for which the mixing in the stop sector is small, the requirement $V_{U} \geq V_{ph}$, with $V_U$ given in Eq. (\ref{eq:minimu}), becomes, however, relevant. \section{Higgs and Supersymmetric Particle Spectrum} As we mentioned before, for low values of $\tan\beta$ and for a given value of the top quark mass, the whole spectrum is determined as a function of the free independent soft supersymmetry breaking parameters, $m_Q(0)$ (where $m_Q(0) = m_{\tilde{q}}(0) = m_{\tilde{l}}(0)$), $m_{H_1}(0)$, $m_{H_2}(0)$ and $M_{1/2}$. Summarizing the results for the relevant low energy mass parameters at the fixed point solution, we have: \begin{eqnarray} m_{H_2}^2 & \simeq & \frac{m_{H_2}^2(0)}{2} - m_Q^2(0) - 3.5 M_{1/2}^2 \;\;\;\;\;\;\;\;\;\;\;\;\;\; m_{H_1}^2 \simeq m_{H_1}^2(0) + 0.5 M_{1/2}^2 \nonumber\\ m_Q^2 & \simeq & \frac{2 m_Q^2(0)}{3} - \frac{m_{H_2}^2(0)}{6} + 6 M_{1/2}^2 \nonumber \\ m_U^2 & \simeq & \frac{m_Q^2(0)}{3} - \frac{m_{H_2}^2(0)}{3} + 4 M_{1/2}^2 \nonumber\\ A_t & \simeq & - 2.1 M_{1/2} \nonumber\\ \mu^2 & \simeq & \left[ m_{H_1}^2(0) + \left(\frac{2 m_Q^2(0) - m_{H_2}^2(0)}{2} \right) \tan^2 \beta \right. \nonumber\\ & + & \left. M_{1/2}^2 \left( 0.5 + 3.5 \tan^2\beta \right) \right] \frac{1}{ \tan^2\beta - 1 } \label{eq:massp} \end{eqnarray} In the following, we shall analyze the contributions of possible light supersymmetric particles to the hadronic and leptonic variables measured at LEP.
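The fixed point expressions of Eq. (\ref{eq:massp}) can be packaged into a single function for scanning over boundary conditions (an illustrative sketch; the dictionary keys are ours, and $m_U^2(0) = m_Q^2(0)$ is assumed, as in the text):

```python
def fixed_point_params(mQsq0, mH1sq0, mH2sq0, M12, tan_beta):
    # Eq. (massp): low energy mass parameters at the infrared fixed point
    t2 = tan_beta**2
    return {
        "mH2sq": mH2sq0 / 2.0 - mQsq0 - 3.5 * M12**2,
        "mH1sq": mH1sq0 + 0.5 * M12**2,
        "mQsq":  2.0 * mQsq0 / 3.0 - mH2sq0 / 6.0 + 6.0 * M12**2,
        "mUsq":  mQsq0 / 3.0 - mH2sq0 / 3.0 + 4.0 * M12**2,
        "At":   -2.1 * M12,
        "musq": (mH1sq0 + (2.0 * mQsq0 - mH2sq0) / 2.0 * t2
                 + M12**2 * (0.5 + 3.5 * t2)) / (t2 - 1.0),
    }
```

In the universal case ($m_{H_1}^2(0) = m_{H_2}^2(0) = m_Q^2(0) = m_0^2$) with $\tan\beta = 1.2$, this reproduces the approximate relation $\mu^2 \simeq 4 m_0^2 + 12.6\, M_{1/2}^2$, showing how small $\tan\beta - 1$ drives $\mu$ to large values.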
In fact, concerning indirect searches at LEP, the existence of light charginos and stops may yield interesting supersymmetric signals. The Higgs sector is of course very interesting in itself and it also plays an important role in deriving constraints on the soft supersymmetry breaking parameters, which then translate into restrictions for the stop and chargino sectors as well. The dependence of the properties of the spectrum on the high energy boundary conditions for the soft supersymmetry breaking parameters is very important and we shall consider different interesting possibilities in a detailed way. In Table 1 we display the dominant dependence of the low energy scalar mass parameters on their high energy values for the three characteristic soft supersymmetry breaking schemes we shall analyse in this work. The case of universal soft supersymmetry breaking parameters, in which all the soft supersymmetry breaking scalar masses acquire a common value, say $m_0$, is the most predictive one. In that limit, for a given value of the top quark mass, the whole Higgs and supersymmetric particle spectrum is determined as a function of only two parameters, $m_0$ and the common gaugino mass $M_{1/2}$ \cite{COPW}. Another interesting case is the one in which the dependence of $\mu^2$ on the soft supersymmetry breaking parameters of the scalar fields vanishes, implying smaller values for the supersymmetric mass parameter and hence a stronger Higgsino component of the light chargino than in the universal case. This situation follows, in a $\tan \beta$ independent way, if $m_{H_1}(0)$ = 0 and $m_{H_2}^2(0) = 2 m_Q^2(0)$. As can be seen in Table 1, the parameter $m_U^2$ can be rendered small or negative by increasing $m_0$, which increases the right handed component of the lightest stop with respect to the one in the case of universal soft supersymmetry breaking parameters at $M_{GUT}$.
As we shall discuss below, a larger Higgsino (right stop) component of the lightest chargino (stop) implies an increase in the supersymmetric $Z^0$--$b\bar{b}$ vertex corrections. A non--universal condition for the scalar soft supersymmetry breaking mass parameters can also yield larger values for the stop mass parameters. This situation may be achieved if, for example, we invert the relations used above for $m_{H_1}^2(0)$ and $m_{H_2}^2(0)$, that is to say, $m_{H_1}^2(0) = 2 m_Q^2(0)$ and $m_{H_2}^2(0) = 0$. Then, parametrizing the scalar masses as a function of $m_{H_1}(0)$, the values of $\mu^2$ and $m_Q^2 + m_U^2$ have the same functional dependence on $m_{H_1}^2(0)$ as they have on $m_0^2$ in the universal case. However, both $m_Q^2$ and $m_U^2$ increase with $m_{H_1}^2(0)$, breaking the strong correlation present in the universal case between the lightest stop and the gaugino masses \cite{COPW}, \cite{nonun}. Indeed, even for light charginos, large values of the lightest stop mass may be obtained in this case by taking large values for the scalar mass parameters at the grand unification scale. {}~\\ \baselineskip = 25pt \begin{center} \begin{tabular}{|l|c|c|c|c|} \hline & & & & \\ Conditions at $M_{GUT}$ &$\;\; m_Q^2 \;\;$ & $\;\; m_U^2 \;\;$ & $\;\; m_{H_2}^2 \;\;$ & $\;\; m_{H_1}^2 \;\;$ \\ & & & & \\ \hline & & & & \\ Universal $\; m_0^2$ & {\large $\frac{m_0^2}{2}$} & 0 &{\large $- \frac{m_0^2}{2}$} & $m_0^2$ \\ & & & & \\ \hline Case I: & & & & \\ $m_{H_1}^2(0) = 0 \;$, $m_{H_2}^2(0) = 2 m_Q^2(0)$ & {\large $\frac{m_0^2}{6}$} & {\large $- \frac{m_0^2}{6}$} & 0 & 0 \\ $m_{H_2}^2(0) = m_0^2$ & & & & \\ \hline Case II: & & & & \\ $m_{H_2}^2(0) = 0 \;$, $m_{H_1}^2(0) = 2 m_Q^2(0)$ & {\large $\frac{m_0^2}{3}$} & {\large $\frac{m_0^2}{6}$} & {\large $- \frac{m_0^2}{2}$} & $m_0^2$ \\ $m_{H_1}^2(0) = m_0^2$ & & & & \\ \hline \end{tabular} \end{center} {}~\\ \baselineskip = 10pt {\small Table 1.
Dominant dependence of the low energy soft supersymmetry breaking parameters on their values at the grand unification scale, for a top quark mass at its infrared fixed point value.}\\ {}~\\ \baselineskip = 16pt {}~\\ The value of the supersymmetric mass parameter $\mu$ may be obtained from the other mass parameters through the condition of a proper radiative electroweak symmetry breaking, Eqs. (\ref{eq:tb}), (\ref{eq:massp}). In the following, we shall analyze these three possibilities, considering cases I and II as characteristic ones for the study of the possible implications of the deviations from the universal boundary conditions in the soft supersymmetry breaking parameters associated with the scalar sector. \subsection{Stop and Chargino Sectors} Due to the large values of the mass parameter $\mu$ at the infrared fixed point for low values of $\tan\beta$, there is small mixing in the chargino and neutralino sectors. Hence, to a good approximation the lightest chargino mass and the lightest and next to lightest neutralino masses are given by $m_{\tilde{\chi}^{\pm}_l} \simeq m_{\tilde{\chi}^0_2} \simeq 2 m_{\tilde{\chi}^0_1} \simeq 0.8 M_{1/2}$. This approximate dependence becomes more accurate when larger values of $M_{1/2}$ are considered. For low values of $M_{1/2}$, although the lightest chargino is still mainly a wino, it follows that for positive values of $\mu$, slightly larger values of $M_{1/2}$ are necessary to get a light chargino, with mass close to its production threshold at the $Z^0$ peak, than those required for negative values of $\mu$. Concerning the restrictions on the parameter space, the stop sector becomes more interesting than the chargino one, since the large values of $\mu$ may render the physical squared stop mass negative or too small to be consistent with the present experimental bounds, which we shall take to be $m_{\tilde{t}} > 45$ GeV.
The stop mass matrix is given by \begin{eqnarray} M^2_{\tilde{t}} = \left[ \begin{array}{cc} m_Q^2 + m^2_t + D_{t_L} & m_t (A_t - \mu/ \tan \beta) \\ m_t (A_t - \mu/ \tan \beta) & m_U^2 + m^2_t + D_{t_R} \end{array} \right] \label{eq:stopmat} \end{eqnarray} where $D_{t_L} \simeq - 0.35 M_Z^2 |\cos 2 \beta|$ and $D_{t_R} \simeq - 0.15 M_Z^2 |\cos 2 \beta|$ are the D-term contributions to the left and right handed stops, respectively. The above mass matrix, after diagonalization, leads to the two stop mass eigenvalues, $m_{\tilde{t}_1}$ and $m_{\tilde{t}_2}$. At the infrared fixed point, the values of the parameters involved in the mass matrix are given in Eq. (\ref{eq:massp}). For values of $\tan \beta$ close to one, the off-diagonal contribution will be enhanced due to the large values of $\mu$ associated with such low values of $\tan \beta$ and, consequently, the mixing may be sufficiently large to yield a tachyonic solution \cite{COPW}-\cite{NA2}. Thus, depending on the case considered for the soft supersymmetry breaking scalar masses and its hierarchy with respect to $M_{1/2}$, as well as on the sign of $\mu$, important constraints on the parameter space may be obtained. For example, if we consider first the universal case with a common scalar mass $m_0$, for $\tan \beta = 1.2$, which implies $M_t \simeq 160$ GeV and for which the value of the supersymmetric mass parameter is $\mu^2 \simeq 4 m_0^2 + 12 M_{1/2}^2$, it is straightforward to show that, if one considers the regime $M_{1/2}^2 \ll m_0^2$ for both signs of $\mu$ (or $M_{1/2}^2 > m_0^2$ for $\mu > 0$), then a tachyonic state will develop unless $M_{1/2} \geq \; m_t$. For $M_{1/2}^2 > m_0^2$ and $\mu < 0$, since there is a partial cancellation of the off-diagonal term which suppresses the mixing, no tachyonic solution may develop and, hence, no constraint is derived from these considerations.
However, as we shall show below, restrictions coming from the Higgs sector will constrain this region of parameter space as well. Observe that for these low values of $\tan\beta$, the necessary and sufficient conditions to avoid a color breaking minimum, Eqs. (\ref{eq:cond1}), (\ref{eq:cond2}) and (\ref{eq:cond3}), put strong restrictions on the solutions with large left--right stop mixing. For slightly larger values of $\tan \beta \simeq 1.8$, which correspond to much larger values of the top quark mass, $M_t \simeq 180$ GeV, the value of $\mu^2 \simeq 1.2 m_0^2 + 5.3 M_{1/2}^2$ is sufficiently small so that, helped by the factor $1/\tan \beta$ appearing in the off--diagonal terms in Eq. (\ref{eq:stopmat}), there is no possibility for a tachyon to develop in this case and, hence, no constraints on $M_{1/2}$ are obtained. Of course, this result holds for larger values of $\tan \beta$ as well. It is interesting to notice that, although there is no need to be concerned about tachyons for values of $\tan \beta \simeq 1.8$, it is still possible to have light stops, $m_{\tilde{t}_1} < 150$ GeV, if $M_{1/2} \leq 100$ GeV. For larger values of $\tan \beta$ ($M_t > 185$ GeV) a light stop is no longer possible in the universal case. Figure 1 shows the dependence of the stop mass on the chargino mass in the case of universal scalar masses at $M_{GUT}$, for four different values of the top quark mass. For low values of $M_t \leq 165$ GeV, the color breaking constraints forbid large mixing in the stop sector and, due to the behaviour of $m_U^2 \simeq 4 M_{1/2}^2$, a strong correlation between the lightest stop and the lightest chargino is observed. For larger values of $M_t$, larger mixing is allowed and a clear distinction between the two signs of $\mu$ is observed. This distinction is particularly clear for $M_t \simeq 175$ GeV ($\tan\beta \simeq 1.5$) and disappears for larger values of $\tan\beta$.
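The tachyon mechanism discussed above, large $\mu$-induced mixing driving the lightest eigenvalue of Eq. (\ref{eq:stopmat}) negative, can be reproduced with a small diagonalization sketch (illustrative only; the sample inputs in the checks are arbitrary GeV-scale values):

```python
import math

def stop_mass_sq(mQsq, mUsq, mt, At, mu, tan_beta, MZ=91.19):
    # eigenvalues of the 2x2 stop mass matrix, Eq. (stopmat)
    c2b = abs((1.0 - tan_beta**2) / (1.0 + tan_beta**2))   # |cos 2 beta|
    a = mQsq + mt**2 - 0.35 * MZ**2 * c2b                  # LL entry, with D_{t_L}
    b = mUsq + mt**2 - 0.15 * MZ**2 * c2b                  # RR entry, with D_{t_R}
    off = mt * (At - mu / tan_beta)                        # left-right mixing
    d = math.sqrt((a - b)**2 + 4.0 * off**2)
    return (a + b - d) / 2.0, (a + b + d) / 2.0            # (m^2_{t1}, m^2_{t2})
```

With no mixing the eigenvalues reduce to the diagonal entries, while a sufficiently large $|A_t - \mu/\tan\beta|$ pushes $m^2_{\tilde{t}_1}$ below zero, the tachyonic situation constrained in the text.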
Observe that just for the interesting region 165 GeV $\leq M_t \leq$ 185 GeV both light stops and light charginos ($m_{\tilde{\chi}_1} < 70$ GeV) are allowed. For larger (lower) values of $M_t$, it becomes more difficult to get light stops (charginos). Light stops and charginos are very interesting, both for direct experimental searches \cite{LNZ} as well as for indirect searches through deviations from the Standard Model predictions for the leptonic and hadronic variables measured at LEP (see below). If we consider the non--universal case with $m_{H_1}^2(0)$ = 0 and $m_{H_2}^2(0)/2 = m_Q^2(0) = m_0^2/2 $ (case I), then if $M_{1/2}$ dominates the supersymmetry breaking, the constraints coming from the requirement of avoiding a very small stop mass are equivalent to the ones obtained in the case of universal mass parameters. If $M_{1/2}$ is much smaller than the parameter $m_0$, instead, an upper bound on the scalar mass parameter is obtained, $m_0^2 < 6 m_t^2$. More generally, in the regime of large values of $m_0$ and moderate values of $M_{1/2}$, it follows that for positive values of $\mu$, in order to avoid tachyons, \begin{equation} m_t^2 > 0.5 \left[ K M_{1/2}^2 + \sqrt{ \left(K M_{1/2}^2 \right)^2 - 96 M_{1/2}^4 + \left( \frac{m_0^2}{3} \right)^2 + \frac{4}{3} m_0^2 M_{1/2}^2 } \right] \label{eq:fillin} \end{equation} with $K \simeq 10, 4.5, 0.8$ for $M_t \simeq 165, 175, 185$ GeV ($\tan\beta \simeq 1.3, 1.5, 1.9$). For $M_t \leq 160$ GeV ($\tan\beta \leq 1.2$) this condition cannot be fulfilled since for values of $M_{1/2}$ consistent with the present experimental bounds on the chargino and gluino masses, already the $M_{1/2}$ dependent part violates the above bound. For $M_t = 165 $ GeV there is a small region for which $M_{1/2}$ is rather small, $m_0$ is rather large and for which this condition is fulfilled (see Fig. 2). 
For negative values of $\mu$, the off--diagonal terms are small and depend only very weakly on the soft breaking parameters $m_0$ and $M_{1/2}$. Hence, one obtains a bound which is basically equivalent to the positivity of the diagonal term, \begin{equation} m_U^2 > - (m_t^2 + D_{t_R}). \label{eq:posdiag} \end{equation} In the present case, Eq. (\ref{eq:posdiag}) is equivalent to \begin{equation} m_0^2 < 6 \left( m_t^2 + D_{t_R} + 4 M_{1/2}^2 \right). \label{eq:m0bound} \end{equation} Quite generally, the condition of absence of color breaking derived from Eq. (\ref{eq:minimu}) assures the fulfillment of Eq. (\ref{eq:posdiag}), since the actual bound coming from the absence of color breaking is stronger than the one implied by Eq. (\ref{eq:m0bound}). Fig. 2 shows the dependence of the lightest stop mass on the lightest chargino mass for case I of non--universal mass parameters at $M_{GUT}$ and four different values of the top quark mass. For low values of $M_t \leq 160$ GeV, light charginos are not allowed. This is due to the Higgs bounds and to the impossibility of obtaining large radiative corrections, given the bounds on $m_0$ derived from constraints in the stop sector and the absence of a color breaking minimum. For $M_t \simeq 165$ GeV, there is a regime with light charginos and $\mu > 0$, for which Eq. (\ref{eq:fillin}) and the Higgs mass bounds are fulfilled. Apart from this region, light charginos do not appear in the spectrum for these low values of $\tan\beta$. For $M_t \geq 175$ GeV, there is a clear distinction between positive values of $\mu$ (lower $m_{\tilde{t}_1}$) and negative values of $\mu$ (larger $m_{\tilde{t}_1}$). As can be seen from figure 5, for negative values of $\mu$ and $M_t < 185$ GeV, light charginos are not allowed, due to the constraints in the Higgs sector.
Finally, for the last set of boundary conditions under study, for which $m_{H_2}^2(0) = 0$ and $m_{H_1}^2(0) = 2 m_Q^2(0) \equiv m_0^2$ (case II), the requirement of absence of tachyons in the stop sector differs from the other two cases only in the limit of large values of the soft supersymmetry breaking terms for the scalar fields at the grand unification scale. Since now both $m_Q^2$ and $m_U^2$ grow for large values of $m_0^2$, low values of the top squark masses may only be achieved for large values of the left--right mixing, which naturally arise in the low $\tan\beta$ regime. In particular, for $\tan\beta \simeq 1.3$, which approximately corresponds to $M_t \simeq 165$ GeV, and low values of the common gaugino mass $M_{1/2}$, in order to avoid problems in the stop spectrum it is necessary to have $m_0^2 \geq 0.2 m_t^2$. This bound becomes stronger for lower values of $\tan\beta$. On the contrary, for large values of the top quark mass, $M_t \geq 175$ GeV, no bound on $m_0$ is obtained from these considerations. Figure 3 shows the dependence of the lightest stop quark mass on the lightest chargino mass for case II and four different top quark mass values. For low values of $M_t \leq 165$ GeV, the color breaking constraints are sufficiently strong to put restrictions on large values of $m_0$, particularly for low values of the chargino mass ($M_{1/2}$) and positive values of $\mu$. For larger values of $M_t$, there is again a distinction between positive and negative values of $\mu$. Observe that, due to the larger mixing, lower values of the lightest stop mass are always more easily obtained for positive values of $\mu$. \subsection{Higgs Spectrum} Other important features of the spectrum at the infrared fixed point are associated with the Higgs sector. The Higgs spectrum is composed of three neutral scalar states -- two CP-even, $h$ and $H$, and one CP-odd, $A$ -- and two charged scalar states H$^{\pm}$. 
Considering the one loop leading order corrections to the running of the quartic couplings -- those proportional to $m_t^4$ -- and neglecting in a first approximation the squark mixing, the masses of the scalar states are given by, \begin{eqnarray} m^2_{h,H} &=& \frac{1}{2} \left[m_A^2 + M_Z^2 + \omega_t \right. \nonumber \\ & & \left. \pm \sqrt{\left(m_A^2 + M_Z^2 \right)^2 + \omega_t^2 - 4 m_A^2 M_Z^2 \cos^2(2 \beta) + 2 \omega_t \cos(2 \beta) \left(m_A^2 - M_Z^2 \right)} \right] \label{eq:mhH} \\ \nonumber \\ m_A^2 & = & m_1^2 + m_2^2 + \frac{\omega_t}{2} \nonumber \\ & =& \left[ m_{H_1}^2(0) + \left( \frac{ 2 m_Q^2(0) - m_{H_2}^2(0)}{2} \right) + 4 M_{1/2}^2 - \frac{\omega_t}{2} \right] \frac{(1+ \tan^2 \beta)}{(\tan^2 \beta -1)} \nonumber\\ \label{eq:mA} \\ m_{H^{\pm}}^2 & = & m_A^2 + M_W^2 \; . \label{eq:mHpm} \end{eqnarray} In the above, we have omitted the one loop contributions proportional to $\omega_t/ m_t^2$, since for $\tan\beta > 1$ they are negligible with respect to the other contributions. {}From Eq. (\ref{eq:mA}) it follows that, for lower values of $\tan \beta$, the value of the CP-odd eigenstate mass is enhanced. Moreover, larger values of $m_A$ imply that the charged Higgs and the heaviest CP-even Higgs become heavier in this regime. Indeed, for low values of $\tan\beta \leq 2$ ($M_t \leq 190$ GeV) and for the experimentally allowed range of the other mass parameters, the CP-odd Higgs is always heavier than 150 GeV. In this regime, the radiative corrections give a relevant contribution only to the lightest CP--even Higgs mass, Eq. (\ref{eq:mhH}). 
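For orientation, Eq. (\ref{eq:mhH}) is simple to evaluate numerically. The sketch below (all inputs illustrative; $\omega_t$ is supplied by hand rather than computed from the spectrum) implements the mass formula and exhibits the decoupling behaviour $m_h \rightarrow M_Z |\cos 2\beta|$ for large $m_A$ at $\omega_t = 0$:

```python
import math

MZ = 91.19  # GeV

def higgs_masses(mA, tan_beta, omega_t=0.0):
    """CP-even Higgs masses from Eq. (mhH); omega_t is the leading
    m_t^4 one-loop correction (set to 0 for the tree level)."""
    c2b = (1.0 - tan_beta**2) / (1.0 + tan_beta**2)   # cos(2 beta)
    a = mA**2 + MZ**2 + omega_t
    root = math.sqrt((mA**2 + MZ**2)**2 + omega_t**2
                     - 4.0 * mA**2 * MZ**2 * c2b**2
                     + 2.0 * omega_t * c2b * (mA**2 - MZ**2))
    mh = math.sqrt(0.5 * (a - root))
    mH = math.sqrt(0.5 * (a + root))
    return mh, mH

# Decoupling limit at tree level: mh -> MZ*|cos(2 beta)| ~ 35 GeV for tan(beta)=1.5
mh, mH = higgs_masses(mA=1.0e5, tan_beta=1.5)
```

This makes explicit why the tree-level mass is so small in the low $\tan\beta$ regime and why the $\omega_t$ correction matters.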
In fact, for these large values of $m_A$, $m_h$ acquires values close to its upper bound, which is independent of the exact value of the CP--odd mass \cite{Nir}--\cite{LNan}: \begin{equation} (m_h^{max})^2 = M_Z^2 \cos^2(2\beta) + \frac{3}{4 \pi^2} \frac {m_t^4}{v^2} \left[ \ln \left( \frac{m_{\tilde{t}_1} m_{\tilde{t}_2}}{m_t^2} \right) + \Delta_{\theta_{\tilde{t}}} \right] \label{eq:mhmax} \end{equation} In the above, we have now considered the expression in the case of non-negligible squark mixing. $\Delta_{\theta_{\tilde{t}}}$ is a function which depends on the left--right mixing angle of the stop sector and vanishes in the limit in which the two mass eigenstates are equal, $m_{\tilde{t}_1} = m_{\tilde{t}_2}$ \cite{Nir}--\cite{LNan}, \begin{eqnarray} \Delta_{\theta_{\tilde{t}}} & = &\left( m_{\tilde{t}_1}^2 - m_{\tilde{t}_2}^2 \right) \frac{\sin^2 2\theta_{\tilde{t}}} {2 m_t^2} \log \left( \frac{m_{\tilde{t}_1}^2}{m_{\tilde{t}_2}^2} \right) \nonumber\\ & + & \left( m_{\tilde{t}_1}^2 - m_{\tilde{t}_2}^2 \right)^2 \left( \frac{\sin^2 2 \theta_{\tilde{t}}}{4 m_t^2} \right)^2 \left[ 2 - \frac{m_{\tilde{t}_1}^2 + m_{\tilde{t}_2}^2} {m_{\tilde{t}_1}^2 - m_{\tilde{t}_2}^2} \log \left( \frac{m_{\tilde{t}_1}^2}{m_{\tilde{t}_2}^2} \right) \right], \end{eqnarray} where $\theta_{\tilde{t}}$ is the stop mixing angle. Furthermore, the infrared fixed point solution for the top quark mass has important implications for the lightest Higgs mass. For a given value of the physical top quark mass, the infrared fixed point solution is associated with the minimum value of $\tan \beta$ compatible with the perturbative consistency of the theory. For values of $\tan \beta\geq 1$, lower values of $\tan \beta$ correspond to lower values of the tree level lightest CP-even mass, $m_h^{tree} = M_Z |\cos 2 \beta|$. 
Therefore, the infrared fixed point solution minimizes the tree level contribution and, after the inclusion of the radiative corrections, it still gives the lowest possible value of $m_h$ for a fixed value of $M_t$ \cite{COPW}, \cite{BABE}, \cite{Cartalk}, \cite{CEQR}. This property is very appealing, in particular, in relation to future Higgs searches at LEP2, as we shall show explicitly below. In figure 4 we present the values of the lightest Higgs mass as a function of the top quark mass at its infrared fixed point solution, for the case of universal boundary conditions, performing a scan over the mass parameters up to low energy squark masses of the order of 1 TeV. For comparison, we present the upper bounds on the Higgs mass which are obtained for larger values of $\tan\beta$. Observe that, for $M_t \leq 175$ GeV, there is approximately a difference of 30 GeV between the upper bounds at and away from the top quark mass fixed point. As we shall discuss below in more detail, although the characteristics of the Higgs spectrum depend on the boundary conditions at the grand unification scale, these upper bounds have a more general validity. In general, the lightest CP-even Higgs mass spectrum is a reflection of the characteristics of the stop spectrum presented in Figs. 1--3. For the same chargino mass, larger Higgs mass values are obtained for positive values of $\mu$ than for negative values of $\mu$. Quite generally, the upper bound on the Higgs mass does not depend on the different structure of the boundary conditions of the scalar mass parameters at the grand unification scale. It reads $m_h \leq$ 90 (105) (120) GeV, for $M_t \leq 165 \; (175) \; (185)$ GeV. Figs. 5, 6 and 7 present the dependence of the lightest CP-even Higgs mass on the chargino mass for four different values of the top quark mass, for the case of universal scalar mass $m_0$ and for the cases of non--universal mass parameters I and II, respectively. 
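The quoted upper bounds can be cross-checked against Eq. (\ref{eq:mhmax}). The sketch below assumes the normalization $v \simeq 174$ GeV (so that $m_t = h_t v \sin\beta$) and, for simplicity, degenerate stops with vanishing mixing, $\Delta_{\theta_{\tilde t}} = 0$; both choices, and all numerical inputs, are assumptions made for illustration:

```python
import math

MZ, V = 91.19, 174.1  # GeV; v normalization assumed, m_t = h_t v sin(beta)

def mh_max(mt, tan_beta, mst1, mst2, delta_theta=0.0):
    """One-loop upper bound on the lightest CP-even Higgs mass,
    Eq. (mhmax), with stop masses mst1, mst2 in GeV."""
    c2b = (1.0 - tan_beta**2) / (1.0 + tan_beta**2)
    tree = MZ**2 * c2b**2
    loop = (3.0 / (4.0 * math.pi**2)) * mt**4 / V**2 \
           * (math.log(mst1 * mst2 / mt**2) + delta_theta)
    return math.sqrt(tree + loop)

# M_t ~ 175 GeV at the fixed point (tan beta ~ 1.5), 1 TeV degenerate stops
print(mh_max(175.0, 1.5, 1000.0, 1000.0))  # ~97 GeV
```

The zero-mixing value of about 97 GeV lies below the quoted bound $m_h \leq 105$ GeV for $M_t \leq 175$ GeV; the remaining margin is filled by the mixing term $\Delta_{\theta_{\tilde t}}$.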
Both the universal case and case II present similar features and are almost indistinguishable from the point of view of the lightest CP--even Higgs spectrum (Figures 5 and 7). For $M_t \simeq 165$ GeV, the Higgs mass becomes larger for a chargino mass $m_{\tilde{\chi}_1} \simeq 100$ GeV than for moderate values of the chargino mass. The Higgs mass is, however, tightly bounded from above and always lies in the regime to be tested at LEP2. For larger values of the top quark mass, $M_t \geq 175$ GeV, and for negative (positive) values of $\mu$, the Higgs mass lies mostly within (beyond) the experimentally reachable regime. Observe that, even if a light chargino is observed at LEP2, $m_{\tilde{\chi}_1^+} < 90$ GeV, nothing guarantees the observation of the lightest CP--even Higgs, particularly for larger values of the top quark mass, $M_t \geq 175$ GeV. Case I (Fig. 6) is easily distinguishable from the above two cases, due to the more definite values of the Higgs mass, which follow from the weaker allowed dependence on the scalar mass parameter $m_0^2$. Observe that, although the absolute upper bound on $m_h$ for a given $M_t$ does not significantly change, due to this particular structure of the high energy soft supersymmetry breaking mass parameters the Higgs mass is in general pushed to lower values than in the case of universal parameters. Therefore, unlike in the above two cases, the observation of a light chargino at LEP2 would almost guarantee the observation of a light neutral Higgs in this case, if $M_t \leq$ 185 GeV. \section{Precision Data Variables} In this section we shall define the experimental variables which we shall use to analyze the implications of the infrared fixed point solution for the precision data analysis at LEP. 
In particular, we will follow the procedure of Altarelli, Barbieri, Caravaglios and Jadach \cite{ABJ}-\cite{ABC2}, which consists of parametrizing the electroweak radiative corrections in terms of four parameters: $\epsilon_1$, which is directly related to the $Z$-boson lepton width and is closely related to the parameter $\Delta\rho(0)$ usually defined in the literature \cite{Veltman} (see below); the parameter $\epsilon_2$, which is related to the parameter $\Delta r_W$, measuring the radiative corrections to the $W^{\pm}$ boson masses \cite{WS}; the parameter $\epsilon_3$, closely related to the radiative corrections to the weak mixing angle; and the parameter $\epsilon_b$, which is related to the radiative corrections to the $Z$-$b \bar{b}$ vertex \cite{epsbst}-\cite{Gautam}. In this work, we shall concentrate on the parameters $\epsilon_1$ and $\epsilon_b$, which are the only ones that keep a quadratic dependence on the top quark mass. This parametrization is based on the precise knowledge of $G_F$, $\alpha$ and $M_Z^2$, which are used as a basis for the precision data analysis. The variable $\epsilon_1$ may be directly obtained from the measurements of the $Z$--boson width and the forward--backward lepton asymmetries. Indeed, the forward--backward asymmetries may be parametrized in terms of the renormalized vector and axial lepton--$Z$ boson couplings, $g_V$ and $g_A$, in the following way: \begin{equation} A_{FB}^l = \frac{ 3 (g_V/g_A)^2 }{\left[ 1 + (g_V/g_A)^2 \right]^2 }. \end{equation} {}From $g_V/g_A$ it is possible to define an effective weak mixing angle \begin{eqnarray} \frac{g_V}{g_A} & = & 1 - 4 \sin^2 \theta_W^{eff} \nonumber\\ & = & 1 - 4 \left( 1 + \Delta k \right) s_0^2, \label{eq:gvga} \end{eqnarray} where \begin{equation} s_0^2 c_0^2 = \frac{ \pi \alpha(M_Z) }{ \sqrt{2} G_F M_Z^2 }. \end{equation} $\Delta k$ is a measure of the radiative corrections to the weak mixing angle, which are quadratically dependent on the top quark mass. 
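The tree-level angle $s_0^2$ defined above is fixed entirely by the precision inputs. A quick numerical sketch (the value $\alpha(M_Z) \simeq 1/128.9$ is an assumed input, as are the quoted $G_F$ and $M_Z$):

```python
import math

ALPHA_MZ = 1.0 / 128.9   # running QED coupling at M_Z (assumed value)
GF = 1.16637e-5          # Fermi constant, GeV^-2
MZ = 91.187              # GeV

# Solve s0^2 c0^2 = pi alpha(M_Z) / (sqrt(2) G_F M_Z^2) with c0^2 = 1 - s0^2,
# keeping the root with s0^2 < 1/2
rhs = math.pi * ALPHA_MZ / (math.sqrt(2.0) * GF * MZ**2)
s0sq = 0.5 * (1.0 - math.sqrt(1.0 - 4.0 * rhs))
print(s0sq)  # ~0.231
```

The result, $s_0^2 \simeq 0.231$, is the reference value around which $\Delta k$ parametrizes the top-quark-dependent corrections.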
Observe that $s_0^2$ already contains the running between low energies and the energy scale $M_Z$. The total leptonic width may also be parametrized in terms of the axial and vector lepton couplings, \begin{equation} \Gamma_l = \frac{ G M_Z^3}{ 6 \pi \sqrt{2} } g_A^2 \left( 1 + \frac{ g_V^2 }{g_A^2} \right). \end{equation} {}From the knowledge of the asymmetries and the lepton width one can obtain the axial coupling \begin{equation} g_A^2 = \frac{1}{4} \left( 1 + \Delta\rho \right). \label{eq:ga2} \end{equation} Then, the variable $\epsilon_1 \equiv \Delta \rho$ receives four different contributions \cite{BFC}: \begin{equation} \epsilon_1 = e_1 - e_5 - \frac{\delta G}{G} - 4 \delta{g_A}, \end{equation} where $e_1 \equiv \Delta \rho(0)$ is given by, \begin{equation} e_1 = \frac{ \Pi_{33}(0) - \Pi_{WW}(0) }{M_W^2}, \end{equation} with $\Pi_{33}(0)$ and $\Pi_{WW}(0)$ the zero momentum vacuum polarization contributions to the $W_3$ and $W^{\pm}$ gauge bosons. In general, \begin{equation} \Pi_{i j}^{\mu\nu}(q) = -i g^{\mu\nu} \Pi_{ij}(q^2) + q^{\mu} q^{\nu} \; {\rm terms}, \end{equation} with $i,j = W,\gamma,Z$ or $i,j = 0,3$ for the $W_3$ or $B$ bosons, respectively. The term $e_5$ proceeds from the wave function renormalization constant of the $Z$ boson at $q^2 = M_Z^2$ and is given by \begin{equation} e_5 = \left\{ q^2 \left[\frac{d}{dq^2} \frac{\left( \Pi_{ZZ}(q^2) - \Pi_{ZZ}(0) \right)}{q^2} \right] \right\}_{q^2 = M_Z^2}. \label{eq:e5} \end{equation} The contributions of $e_1$ and $e_5$ include all the dominant vacuum polarization effects in the renormalized coupling $g_A$. Finally, the vertex and box corrections are included in the variables $\delta g_A$ and $\delta G/G$, as described, for example, in Ref. \cite{BFC}. The dominant contributions to the $\epsilon_1$ parameter are described in Appendix A. Using Eqs. 
(\ref{eq:gvga})--(\ref{eq:ga2}) and the precise values of $G_F$, $\alpha(M_Z)$ and $M_Z$ in the standard model, the variable $\epsilon_1$ is related to the asymmetries and the $Z$--boson leptonic width through the following expression \cite{Altatalk}, \begin{equation} \epsilon_1 = - 0.9882 + 0.01196 \frac{\Gamma_l}{MeV} - 0.1511 \frac{g_V}{g_A}. \end{equation} Analogously to the variable $\epsilon_1$, the variable $\epsilon_b$ may be defined as a function of the axial and vector couplings of the b--quark to the $Z^0$--boson. In the low $\tan\beta$ regime, the relevant contributions, quadratically dependent on the top quark mass, may be analysed in terms of only the coupling of the left handed bottom quarks to the $Z^0$ gauge boson. Formally, in this regime $\epsilon_b$ is defined from the relation \begin{equation} g_A^b = -\frac{1}{2} \left( 1 + \frac{\Delta \rho}{2} \right) \left( 1 + \epsilon_b \right), \end{equation} with \begin{equation} g_L^b = \left(-\frac{1}{2} + \frac{1}{3} \sin^2\theta^{eff}_W - \frac{\epsilon_b}{2} \right) \left( 1 + \frac{\Delta\rho}{2} \right), \end{equation} and \begin{equation} g_R^b = \frac{\sin^2\theta^{eff}_W}{3} \left( 1 + \frac{\Delta\rho}{2} \right). \end{equation} Experimentally, the variable $\epsilon_b$ can be best obtained from the ratio of the $Z \rightarrow b \bar{b}$ width to the total hadronic width. It can be shown that the branching ratio is given by \cite{Altatalk} \begin{equation} \frac{\Gamma_b}{\Gamma_h} \simeq 0.2182 \left[ 1 + 1.79 \epsilon_b - 0.06 \epsilon_1 + 0.07 \epsilon_3 \right], \end{equation} where the variable $\epsilon_3$ is defined as \begin{equation} \epsilon_3 = c_0^2 \Delta\rho + \left(c_0^2 - s_0^2\right) \Delta k, \end{equation} and depends only logarithmically on the top quark mass. The most relevant contributions to the variable $\epsilon_b$ in the low $\tan\beta$ regime and within the minimal supersymmetric standard model are described in Appendix B. 
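The two linear relations just quoted are straightforward to evaluate. The following sketch plugs in illustrative values of the leptonic width and coupling ratio (sample numbers, not the fit results discussed in the text):

```python
def epsilon_1(gamma_l_mev, gv_over_ga):
    """epsilon_1 from the Z leptonic width (in MeV) and g_V/g_A,
    using the linearized relation quoted in the text."""
    return -0.9882 + 0.01196 * gamma_l_mev - 0.1511 * gv_over_ga

def rb(eps_b, eps_1=0.0, eps_3=0.0):
    """Gamma_b / Gamma_h as a function of the epsilon variables."""
    return 0.2182 * (1.0 + 1.79 * eps_b - 0.06 * eps_1 + 0.07 * eps_3)

# Illustrative inputs: Gamma_l ~ 83.9 MeV, g_V/g_A ~ 0.072
e1 = epsilon_1(83.9, 0.072)   # a few times 10^-3
```

Note the large coefficient of $\epsilon_b$ in $\Gamma_b/\Gamma_h$ relative to those of $\epsilon_1$ and $\epsilon_3$, which is why this ratio is the most sensitive probe of $\epsilon_b$.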
In the above, we have given the dependence of the precision data variables on the observables which are most sensitive to them. From the point of view of the experimental analysis, however, it is possible to extend the fit of the variables $\epsilon_1$, $\epsilon_3$ and $\epsilon_b$ by the introduction of other measured observables. This may be performed by, for example, including all purely leptonic quantities at the $Z^0$--pole, or the data on the $b$--quark from the forward--backward asymmetry, or by simply including all observables measured at the $Z^0$ peak at the LEP experiment. This last step may be performed by assuming that all relevant deviations from the standard model may be associated with either vacuum polarization effects or corrections to the $Z \rightarrow b \bar{b}$ vertex. The global fit to the data reduces the dependence on any single experiment and hence provides a more realistic estimate of the precision data variables. For the comparison of the theoretical results with the experimental data, we shall use the values of the variables which are obtained from these extended fits at the 90 $\%$ confidence level. \section{Indirect Signals of Supersymmetric Particles} In this section we shall investigate the possible experimental signatures of supersymmetric particles in the variables $\epsilon_1$, $\epsilon_b$ and the rate of the rare $b$ decay, $b \rightarrow s \gamma$. We shall study this in the case of universality of the soft supersymmetry breaking parameters at the unification scale and in the two characteristic cases of non-universal soft supersymmetry breaking scalar mass parameters discussed in sections 3 and 4 (cases I and II). A related analysis, within the framework of superstring--inspired $SU(5) \times U(1)$ supergravity models, was recently performed in Ref. \cite{LNPZ}. Before analysing each case in detail, let us summarize the most relevant supersymmetric effects in these three experimental variables. 
The main supersymmetric contributions to the variable $\epsilon_1$ come from the chargino and stop sectors and are summarized in Appendix A. The stop contribution is always positive, and becomes relevant whenever there are light stops, with masses $m_{\tilde{t}} < 300$ GeV and with a non-negligible component in the $\tilde{t}_L$ squark. Due to the renormalization group behaviour of the mass parameters $m_Q^2$ and $m_U^2$, $m_Q^2$ is always larger than $m_U^2$ at low energies (see Table 1 and Eq. (\ref{eq:massp})) and, hence, in the cases under analysis the light stop has a dominant right handed component. A left handed component appears mainly through the mixing, which does not strongly affect the behaviour of $\Delta\rho(0)$ \cite{BM}. Hence, even in the case of light stops, with masses lower than 100 GeV, the potentially large positive contributions to the $\rho$ parameter are in general suppressed. Light charginos, instead, give a negative contribution to $\epsilon_1$, which becomes large if the lightest chargino mass $m_{\tilde{\chi}^+_1} < 70 $ GeV. Since, in most cases, light stops may only appear when charginos with masses close to the present experimental bounds are present in the spectrum, the light stop effect is in general screened by the chargino contribution. The main contributions to the variable $\epsilon_b$ in the minimal supersymmetric standard model in the low $\tan\beta$ regime come from the standard $W^{\pm}$--top loop, the charged Higgs--top and the chargino--stop loops, and are summarized in Appendix B. The charged Higgs contribution pushes $\epsilon_b$ in the same direction as the standard model one, while the chargino contributions tend to suppress the standard model corrections to the $Z^0 - b \bar{b}$ vertex. In the models under consideration, for $M_t < 185$ GeV ($\tan\beta < 2$), the charged Higgs is sufficiently heavy to give only a moderate contribution to the $\epsilon_b$ variable. 
The chargino contribution, instead, may become sizeable, particularly when light charginos and light stops are present in the spectrum. The largest chargino contributions, quadratically dependent on the top quark mass, appear in the case in which the lightest stop has a relevant right handed component and the lightest chargino has a relevant charged Higgsino component. Although the first condition is mostly satisfied for the cases of soft supersymmetry breaking terms under consideration, due to the large values of $\mu$ the lightest chargino has a dominant wino component, which reduces the supersymmetric effects on $\epsilon_b$. Still, as we shall show, relatively large effects are possible. The decay rate $b \rightarrow s \gamma$ also receives contributions from the standard $W^{\pm}$--top loop, the charged Higgs--top loop and the chargino--stop loops. The predictions for this decay rate within the Standard Model have been recently analysed by several authors \cite{bsganal}. A general expression for the supersymmetric contributions has been presented in Refs. \cite{BBMR} and \cite{BG2}, and we shall not rewrite it here. The relevant properties are the following: As in the case of $\epsilon_b$, the charged Higgs contribution tends to enhance the standard model signal. In the supersymmetric limit, $\mu = 0$ and $\tan\beta = 1$, the stop--chargino contribution exactly cancels the charged Higgs and standard model ones and the total rate is zero. Although, for the experimentally preferred values of $M_t \simeq 174 \pm 17$ GeV \cite{CDF}, the infrared fixed point solution yields values of $\tan\beta$ close to one in the cases under consideration, the values of the sparticle masses are far away from their supersymmetric-limit expressions. Indeed, large values of $\mu$ are obtained and the soft supersymmetry breaking terms are in general not negligible. 
Furthermore, in the cases analysed in this work, the supersymmetric contribution singles out the sign of the mass parameter $\mu$. For moderate positive values of $\mu$ there is a large suppression of the standard model decay rate, while for moderate negative values of $\mu$ the branching ratio tends to be enhanced (the dependence on the sign of $\mu$ is stronger in the large $\tan\beta$ regime, $\tan\beta \geq 30$ \cite{bsganal},\cite{wefour}, which will not be analysed in the present work). In addition, as we discussed in section 5, for positive values of $\mu$, due to the larger values of the stop mixing, it is easier to obtain smaller stop masses without being in conflict with the experimental bounds on the lightest CP-even Higgs mass ($m_h \geq 60$ GeV, for $m_A \geq 150$ GeV). For a given value of $M_t$ ($\tan\beta$), larger values of $\mu$ are associated with heavy sparticles and hence the Standard Model decay rate tends to be recovered (see figure 12). \subsection{Dependence of the precision data variables on the light chargino mass} Figure 8 shows the dependence of the parameter $\epsilon_1$ on the lightest chargino mass for the case of universal scalar masses at the grand unification scale and for three different values of the top quark mass. We observe that the qualitative features do not depend on the top quark mass: a departure from the Standard Model prediction occurs only for light chargino masses $m_{\tilde{\chi}_1^+} < 100$ GeV. As we discussed above, due to the small left handed component of the lightest stop, the main contribution is negative, and this remains a general feature, independent of the exact value of the top quark mass. 
Comparing the theoretical predictions with the recent fit to the LEP and SLD data \cite{Alfit}, \begin{equation} \epsilon_1 = ( 3.5 \pm 2.9 ) \times 10^{-3} \label{eq:eps1} \end{equation} at the 90 $\%$ confidence level (1.64 standard deviations), we see that, while light charginos with masses $m_{\tilde{\chi}_1^+} < 70$ GeV are not in conflict with the present data, they are preferred to heavier ones only for large values of the top quark mass, $M_t \geq 185$ GeV. On the other hand, very light charginos, with masses $m_{\tilde{\chi}_1} \leq 50$ GeV, are disfavoured by the present data. It is important to remind the reader, however, that the present analysis loses its validity for $m_{\tilde{\chi}_1} \leq 48$ GeV and hence the predictions for chargino masses lower than 50 GeV cannot be fully trusted. For the case of non--universal soft supersymmetry breaking scalar mass parameters at the grand unification scale, the main features of the universal case are preserved. In figures 9 and 10 we show the dependence of $\epsilon_1$ on the chargino mass for cases I and II, respectively, and a top quark mass $M_t = 175$ GeV. We see that, in spite of the quite different characteristics of the stop spectrum with respect to the universal case (see figures 1--3), no significant difference is observed with respect to the behaviour depicted in figure 8. In figure 11 we present the behaviour of $\epsilon_b$ as a function of the lightest chargino mass for the case of universality of the soft supersymmetry breaking parameters and for three different values of the top quark mass. As in the case of $\epsilon_1$, a significant departure from the Standard Model predictions may only be observed if the lightest chargino mass acquires rather small values, $m_{\tilde{\chi}_1} < 100$ GeV. However, in the presence of light charginos, the supersymmetric predictions show a larger spread of values for the precision data variable $\epsilon_b$ than for $\epsilon_1$. 
This is related to the dependence of $\epsilon_b$ on the lightest stop mass. Indeed, due to the large right handed top squark component of the lightest stop, $\epsilon_b$ gets significantly changed for lower values of $m_{\tilde{t}_1}$. Since for $M_t \geq 175$ GeV light stops are only possible for $\mu > 0$, the largest values of $\epsilon_b$ are associated with positive values of $\mu$. Taking into account the recent fit to the variable $\epsilon_b$ \cite{Alfit}, \begin{equation} \epsilon_b = ( 0.9 \pm 6.8 ) \times 10^{-3} \label{eq:epsb} \end{equation} at the $90 \%$ confidence level, we see that the present data tends to prefer a light chargino, with mass $m_{\tilde{\chi}_1} \leq 80$ GeV. Observe that, for $\epsilon_b$, we take the fit to all LEP and SLD data, instead of taking the particular value obtained from the partial width $\Gamma_b/\Gamma_h$. If we just fit $\epsilon_b$ with this last variable, according to the latest reported data, $\Gamma_b / \Gamma_h = 0.2202 \pm 0.0020$ \cite{Schaile}, we would get a larger central value, but also a larger error at the $90 \%$ confidence level, $\epsilon_b = (5.1 \pm 8.4) \times 10^{-3}$. Tighter bounds on the spectrum would be obtained if we took this latter value to perform our analysis. Figure 10 shows the dependence of $\epsilon_b$ as a function of the lightest chargino mass for case II and a top quark mass $M_t = 175$ GeV. The characteristic features of this case are similar to the case of universal soft supersymmetry breaking parameters. Only a smaller concentration of points with larger values of $\epsilon_b$ is observed, related to the larger values of $m_U^2$ for the same value of the chargino mass parameter (see Table 1), which imply a smaller right handed component of the lightest stop. Finally, in case I, larger values of the variable $\epsilon_b$ than in the other two cases may be obtained. 
In Figure 9 we display the corresponding dependence of $\epsilon_b$ as a function of the chargino mass for this case, with a top quark mass $M_t = 175$ GeV. Observe that values of $\epsilon_b$ close to zero are possible in this case. This is due to the fact that the right handed stop mass parameter $m_U^2$ can take very small values, so that the right handed component of the lightest stop increases. In principle, a light stop may be obtained in this case for sufficiently low values of $m_U^2$, even when the mixing is negligible. However, large negative values of $m_U^2$ induce an unacceptable color breaking minimum, Eq. (\ref{eq:minimu}), and hence light stops and larger values of $\epsilon_b$ are only possible for positive values of $\mu$, as in the universal case. Again, agreement of the theoretical results with the present experimental data at the 90 $\%$ confidence level may only be obtained for sufficiently light charginos, $m_{\tilde{\chi}_1^+} < 70$ GeV. \section{On the $b \rightarrow s \gamma$ decay rate} In figure 12 we present the behaviour of the ratio of the predicted decay rate $b \rightarrow s \gamma$ to the standard model one, as a function of the supersymmetric mass parameter $\mu$, for the universal case and for three different values of the top quark mass $M_t$. As we discussed before, a clear dependence of $b \rightarrow s \gamma$ on the sign of $\mu$ is observed. For negative values of $\mu$ and a fixed value of the top quark mass $M_t \leq 175$ GeV, most of the theoretical predictions are close to the Standard Model ones, with a decay rate varying between 0.7 and 1.4 times the Standard Model prediction. The maximum departure is always noticed for the smallest values of $|\mu|$, associated with a light spectrum. For larger values of the top quark mass, $M_t \geq 185$ GeV, a larger departure is possible, with a relative decay rate which may be close to two. 
Recently, the experimental value of the $b \rightarrow s \gamma$ decay branching ratio has been reported \cite{bsgexp}, \begin{equation} BR(b \rightarrow s \gamma) = (2.32 \pm 0.97) \times (1 \pm 0.15) \times \left[1 - (M_b - 4.87)\right] \times 10^{-4} \label{eq:bsg} \end{equation} where the second error is systematic, the bottom mass is given in GeV, and all errors have been treated at the $90 \%$ confidence level ($1.64 \sigma$ deviations). The above range allows one, in principle, to put constraints on the supersymmetric spectrum. There are, however, large theoretical uncertainties associated with the standard model predictions, which for a top quark mass in the range $M_t \simeq 165$--185 GeV, and at the $90 \%$ confidence level, read \cite{Buras}, \begin{equation} BR(b \rightarrow s \gamma) (SM) \simeq ( 3.1 \pm 1.5 ) \times 10^{-4}, \end{equation} with a small dependence of the central value on the top quark mass ($\Delta BR(b \rightarrow s \gamma) \simeq \pm 0.1 \times 10^{-4}$), which is negligible in comparison with the theoretical error associated with QCD uncertainties. Hence, the presently allowed values for the relative decay rate at the 90 $\%$ confidence level translate into \begin{equation} 0.25 \leq \frac{BR(b \rightarrow s \gamma)}{BR(b \rightarrow s \gamma) (SM)} \leq 2.5. \end{equation} Observe that, to obtain the allowed range, we have minimized the theoretical uncertainty related to the bottom mass ($M_b = 4.9 \pm 0.3$ GeV) \cite{Partd}. Had we included this uncertainty, the range would be slightly larger than the one considered above. We believe, however, that the above gives a conservative estimate of the experimental values allowed at present, and it agrees quantitatively with the one reported in Ref. \cite{bsgexp}. Hence, the relatively large values of the decay rate obtained for $M_t \simeq 185$ GeV are still acceptable when all uncertainties are taken into account. 
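How a window of this kind on the relative rate arises can be sketched numerically. The combination below adds the statistical and the 15$\%$ systematic error of Eq. (\ref{eq:bsg}) in quadrature (one possible choice, not necessarily the prescription used in the original analysis, and with the bottom-mass term dropped) and divides the extreme experimental values by the extreme Standard Model ones:

```python
import math

# Experimental branching ratio (units of 1e-4), errors at 90% C.L.
exp_central, exp_stat = 2.32, 0.97
exp_syst = 0.15 * exp_central
exp_err = math.hypot(exp_stat, exp_syst)   # quadrature combination (assumed)

# Standard Model prediction (units of 1e-4), 90% C.L.
sm_central, sm_err = 3.1, 1.5

ratio_lo = (exp_central - exp_err) / (sm_central + sm_err)
ratio_hi = (exp_central + exp_err) / (sm_central - sm_err)
print(ratio_lo, ratio_hi)  # roughly 0.28 and 2.1
```

This crude combination roughly reproduces the quoted window 0.25--2.5; the residual difference reflects the treatment of the bottom-mass term and of the error combination in the text.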
For positive values of $\mu$, instead, the supersymmetric model tends to predict values of the decay rate smaller than in the Standard Model. The lower values of the stop mass associated with positive values of $\mu$ (and hence with a larger mixing) contribute to this behaviour, since they enhance the negative chargino--stop loop contributions. In fact, for $M_t \leq 175$ GeV, both stops and charginos may be sufficiently light and the $b \rightarrow s \gamma$ decay rate may acquire very low values. As in the case of negative values of $\mu$, however, apart from a few solutions for $M_t \simeq 165$ GeV, the present uncertainties do not allow one to put strong bounds on these models for any of the values of $M_t$ considered in Fig. 12. For the case of non--universal parameters at the grand unification scale, cases I and II, the qualitative behaviour is the same as in the case of universal soft supersymmetry breaking parameters. In figures 9 and 10 we present the results for the relative decay rate as a function of $\mu$ for cases I and II, respectively, and a top quark mass $M_t = 175$ GeV. In case I, lower values of the relative decay rate than in the universal case are possible for positive values of $\mu$, and some of the predictions lie outside the experimentally allowed range. Due to the weak dependence of $\mu$ on $m_0$, $\mu$ is strongly correlated with the lightest chargino mass in this case, and hence the solutions which are experimentally excluded by these considerations correspond to very light chargino mass values. As we shall see in section 9, these are just the solutions which tend to give larger values of $\epsilon_b$. In case II, instead, the theoretically predicted range is similar to the one predicted in the case of universal mass parameters, and $b \rightarrow s \gamma$ remains in the experimentally allowed range for all acceptable values of $\mu$. 
\section{Correlated fit to the Data} In section 8 we presented the theoretical predictions for the different experimental variables as functions of the relevant supersymmetric mass parameters. However, we did not discuss the correlations between the different variables, which become essential when determining the experimentally allowed models. For instance, models with a value of $\epsilon_b$ closer to the present experimental central value may be in conflict with the bounds on $b \rightarrow s \gamma$ or, since they are always obtained in the presence of light charginos, they may be in conflict with the present bounds on the $\epsilon_1$ variable. It is the purpose of this section to analyze these correlations. In figure 13 we give the correlation between $\epsilon_b$ and $\epsilon_1$ for the case of universal mass parameters and for three different values of the top quark mass $M_t$. We see that larger values of $\epsilon_b$ are necessarily associated with relatively smaller values of $\epsilon_1$, although for $M_t \leq 175$ GeV there are a few solutions for which $\epsilon_1$ remains at moderate values ($\epsilon_1 \simeq 1$--$2 \times 10^{-3}$) and $\epsilon_b$ is relatively large ($\epsilon_b \simeq -3 \times 10^{-3}$). These solutions are associated with light stops ($m_{\tilde{t}_1} < 150$ GeV) and light charginos ($m_{\tilde{\chi}_1} < 70$ GeV), which are not too close to the $Z^0$ boson mass threshold. For $M_t \geq 185$ GeV, stops are heavy and all solutions lie beyond the present $90 \%$ confidence level for $\epsilon_b$. In fact, not only does the standard model prediction further decrease with respect to lower top quark masses, but the deviations with respect to the standard model prediction are also smaller in this case. The variable $\epsilon_1$, instead, can vary within a large range of values, depending on the lightest chargino mass. 
In figure 14 we show the correlation between $\epsilon_b$ and $\epsilon_1$ for cases I and II and for a top quark mass $M_t = 175$ GeV. Most of the properties of the case with universal mass parameters are preserved in these two cases. However, for acceptable values of $\epsilon_1$, larger values of $\epsilon_b$ may be obtained in case I, while in case II smaller values of $\epsilon_b$ are predicted. These properties may be easily understood from the characteristics of the stop and chargino spectra shown in figures 1--3. Observe that values of $\epsilon_b \simeq -2 \times 10^{-3}$ may be obtained in case I for acceptable values of $\epsilon_1 \simeq 1$--$2 \times 10^{-3}$. Note that, due to the behaviour of $\epsilon_1$ for chargino masses $m_{\tilde{\chi}_1^+}$ very close to their production threshold at the $Z^0$ peak (see figure 8), our scanning shows few solutions in figure 14 for values of $\epsilon_1 \leq 2 \times 10^{-3}$. To fill that area with solutions would demand a very fine scanning for values of $m_{\tilde{\chi}^+} < 60$ GeV. Also interesting is the correlation between $\epsilon_b$ and $b \rightarrow s \gamma$, which we depict in figure 15 for the case of universal mass parameters at $M_{GUT}$ and three values of the top quark mass. For negative values of $\mu$ (see also figure 12), larger values of $\epsilon_b$ are only possible for $M_t \leq 165$ GeV, for which perfectly acceptable values of $b \rightarrow s \gamma$ are obtained. Observe, however, that for $M_t \simeq 165$ GeV, the combination of the bounds on $\epsilon_b$, $\epsilon_1$ and $b \rightarrow s \gamma$ restricts $\epsilon_b < -3.2 \times 10^{-3}$ in this case. For $M_t \simeq 175$ GeV, $b \rightarrow s \gamma$ does not impose additional constraints, but the bounds on $\epsilon_1$ are strong enough to constrain $\epsilon_b < -3.6 \times 10^{-3}$ in this case. Much smaller values of $\epsilon_b$ are predicted for $M_t \geq 185$ GeV.
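The logic of combining these bounds can be sketched as a simultaneous cut on all observables (a schematic illustration; the windows and model points below are hypothetical placeholders, not the fitted ranges of this analysis):

```python
# Sketch of the correlated selection used in this section: a model point is
# retained only if ALL observables fall inside their allowed ranges at the
# same time, so a point with an attractive eps_b can still be rejected by
# eps_1 or by b -> s gamma.  The windows and points below are hypothetical
# placeholders chosen only to illustrate the logic.

WINDOWS = {
    "eps_b": (-4.0e-3, 0.0),      # 90% C.L. window (placeholder)
    "eps_1": (0.5e-3, 6.0e-3),    # (placeholder)
    "bsg_ratio": (0.5, 1.5),      # BR(SUSY)/BR(SM) window (placeholder)
}

def allowed(point):
    """True iff every observable lies inside its window."""
    return all(lo <= point[name] <= hi for name, (lo, hi) in WINDOWS.items())

# A point with large eps_b but a too-light chargino driving eps_1 low:
p1 = {"eps_b": -2.0e-3, "eps_1": 0.1e-3, "bsg_ratio": 0.9}
# A point passing all three constraints simultaneously:
p2 = {"eps_b": -3.0e-3, "eps_1": 2.0e-3, "bsg_ratio": 1.1}
print(allowed(p1), allowed(p2))
```

This is why the correlated bounds quoted in the text are stronger than the bounds obtained from any single variable in isolation.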
Figure 16 shows the correlation of $b \rightarrow s \gamma$ with $\epsilon_b$ for the cases of non--universal mass parameters I and II and a top quark mass $M_t = 175$ GeV. We see that, unlike the case of universal mass parameters, in case I the experimental range for $b \rightarrow s \gamma$ puts additional constraints on the spectrum. The variable $\epsilon_b$ can still take values larger than in the standard model, but still away from zero. For $M_t \simeq 175$ GeV, the correlated fit leads to a value of $\epsilon_b < -2.5 \times 10^{-3}$. As in the case of universal mass parameters at $M_{GUT}$, no significant variation of this bound is obtained for lower values of the top quark mass, while for larger values of the top quark mass $\epsilon_b$ tends to lower values. Finally, from the point of view of the range of allowed values for the experimental variables, case II is equivalent to the case of universal mass parameters, once the full experimental constraints considered in this work are taken into account. \section{Conclusions} In the present work, we have analysed the theoretical predictions for the Higgs and supersymmetric spectrum and their indirect experimental signals at the top quark mass infrared fixed point solution for different boundary conditions of the scalar mass parameters at the grand unification scale. We have shown that even though the stop mass range changes significantly for different boundary conditions, the predicted lightest CP-even Higgs mass range remains unchanged, leading to rather general upper bounds for this mass, $m_h \leq 90 \; (105) \; (120)$ GeV for $M_t \leq 165 \; (175) \; (185)$ GeV. The correlation between the lightest Higgs mass and the chargino spectrum, however, depends on the chosen high energy boundary conditions for the mass parameters.
Interestingly enough, for $M_t \geq 175$ GeV, the observation of a light chargino at LEP2 does not guarantee the observation of the lightest CP-even Higgs boson, particularly for positive signs of $\mu$, for which the mixing is maximized. However, for $M_t < 185$ GeV, light stops may appear in the spectrum in this case. The allowed stop spectrum in the presence of a light chargino strongly depends on the high energy boundary conditions. For two of the cases considered, the case of universal scalar mass parameters at $M_{GUT}$ and case I, for which the dominant dependence of the supersymmetric mass parameter $\mu$ on the scalar mass parameters vanishes, and a top quark mass $M_t \leq 175$ GeV, a light chargino, with mass $m_{\tilde{\chi}_1} \leq 70$ GeV, is always associated with a light stop, with mass $m_{\tilde{t}_1} \leq 150$ GeV. In case II, for which the right handed stop mass parameter increases with the supersymmetry breaking scalar mass parameter $m_0$, heavier stops may appear together with light charginos. The experimental variables analysed in this work are a reflection of the characteristics of the Higgs and supersymmetric spectrum. The variable $\epsilon_1$ receives a significant negative correction only for low values of the chargino mass, $m_{\tilde{\chi}_1^+} \leq 70$ GeV. The potentially large positive correction associated with the stop spectrum is mostly suppressed due to the relatively small left handed component of the lightest stop. These properties do not strongly depend on the different boundary conditions analysed in the present work. The variable $\epsilon_b$ also receives a significant correction with respect to the Standard Model prediction, but only for sufficiently light charginos, $m_{\tilde{\chi}_1^+} < 100$ GeV. The correction is mainly positive, rendering $\epsilon_b$ closer to the experimentally allowed range than in the Standard Model case.
Due to the large right handed top squark component of the lightest stop, the variable $\epsilon_b$ also depends on the lightest stop mass. Hence, it is mostly larger for positive values of $\mu$, for which lighter stops are possible, particularly for $M_t \geq 175$ GeV. Finally, the corrections to the decay rate $b \rightarrow s \gamma$ are also maximized in the case of light charginos and light stops. This experimental variable has a strong dependence on the sign of $\mu$. For negative values of $\mu$, the prediction for the decay rate is generally larger than the standard model one, while for positive values of $\mu$ it is generally smaller. In the case of universal soft supersymmetry breaking scalar mass parameters, values of $\epsilon_b \simeq -2 \times 10^{-3}$ may be obtained for sufficiently low values of the chargino masses. However, these values are achieved for very low values of the chargino masses and are in conflict with the experimental value of the variable $\epsilon_1$ at the 90 $\%$ confidence level. Due to the present theoretical uncertainties in the computation of $BR(b\rightarrow s \gamma)$, the recent experimental measurement of this branching ratio yields no relevant additional constraints on the allowed mass parameters in the case of universal mass parameters at $M_{GUT}$. In general, for $M_t \geq 165$ GeV, the allowed values of the variable satisfy $\epsilon_b < -3.2 \times 10^{-3}$ in this case. Values compatible with the present experimental bounds on $\epsilon_b$ at the 90 $\%$ confidence level are always associated with light charginos, $m_{\tilde{\chi}_1} < 100$ GeV, and values of the variable $\epsilon_1$ which are lower than the standard model prediction, but are mostly consistent with the present experimental data. In fact, the theoretical prediction for $\epsilon_1$, within the experimentally allowed range for all variables, reads $\epsilon_1 \simeq 0.6$--$5 \times 10^{-3}$.
The decay rate $b \rightarrow s \gamma$ stays in the experimentally acceptable range, with values which tend to be mostly lower than in the Standard Model case. In the case $m_{H_1}^2(0) = 0$, $m_{H_2}^2(0) = 2 m_Q^2(0)$ (case I), many of the above discussed features are preserved, although larger values of $\epsilon_b$ are possible. Values of $\epsilon_b \simeq 0$, which are not in conflict with the bounds on the spectrum, lead, however, to too low values of either $\epsilon_1$ or the branching ratio $BR(b \rightarrow s \gamma)$. In general, for $M_t \geq 165$ GeV, $\epsilon_b < -2.5 \times 10^{-3}$ in this case. As in the universal case, consistency with the present experimental bounds leads to light charginos, values of the variable $\epsilon_1 \simeq 0.6$--$5 \times 10^{-3}$ and a $b \rightarrow s \gamma$ decay rate which is mostly below the standard model prediction. Finally, in the case $m_{H_2}^2(0) = 0$, $m_{H_1}^2(0) = 2 m_Q^2(0)$ (case II), the bounds on the $\epsilon$ parameters are equivalent to the ones found in the case of universal conditions at the grand unification scale. The discrepancy between the experimentally allowed value of $\epsilon_b$ and the standard model prediction is mostly due to the lack of agreement between the standard model prediction for the branching ratio $\Gamma_b/\Gamma_h$ and the corresponding experimental value. Indeed, the standard model prediction for $\epsilon_b$ lies beyond the experimental value at the 90 $\%$ confidence level. The determination of this partial width is, however, a delicate experimental problem and there are some unresolved issues related to it. Hence, it is still premature to claim evidence of new physics based only on the $\epsilon_b$ variable. If the present tendency is maintained after these issues are resolved, low energy supersymmetric grand unified models have the power to close the gap between theory and experiment. This will demand light charginos and light stops.
If this is the case, we should see supersymmetric particles either at LEP2 or at the next Tevatron run. Hence, within the phenomenologically attractive scenario of minimal supersymmetric grand unified theories, if the present experimental bounds on $\epsilon_b$ were maintained, the above property, together with the tight upper bounds on the Higgs mass, promises a potentially rich phenomenology at present and near future colliders. {}~\\ {}~\\ {}~\\ {\bf{Acknowledgements}} The authors would like to thank G. Altarelli, R. Barbieri, E. Lisi and J. Sola for very interesting discussions. This work is partially supported by the Worldlab. \\ {}~\\ {}~\\ {\bf{Note added in proof}} After this work was completed, two independent works have appeared \cite{KP}, in which the behaviour of the variable $\epsilon_b$ within the minimal supersymmetric model is analysed. \newpage {}~\\ {\large{\bf {Appendix A.}}} \\ {}~\\ In this appendix we describe the largest contributions to the parameter $\epsilon_1$ in the minimal supersymmetric standard model. If charginos are sufficiently heavy, $m_{\tilde{\chi}^+_l} \geq 80$ GeV, the only large supersymmetric contribution to the parameter $\epsilon_1$ comes from the stop--sbottom sector. This contribution is analogous to the dominant one coming from the top--bottom left handed multiplet, which reads \begin{equation} \epsilon_1^{t-b} = \frac{3 \; \alpha}{16 \pi \; \sin^2\theta_W \; M_W^2} \left[ M_t^2 + M_b^2 - \frac{2 M_t^2 M_b^2}{M_t^2 - M_b^2} \ln \left( \frac{M_t^2}{M_b^2} \right) \right]. \label{eq:eps1mt} \end{equation} Due to the large hierarchy between the top and the bottom masses, the above expression, Eq. (\ref{eq:eps1mt}), is completely dominated by the first term inside the bracket. Concerning the stop--sbottom sector, in principle only the supersymmetric partners of the left handed top and bottom quarks contribute to $\epsilon_1$.
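The dominant top--bottom contribution above is straightforward to evaluate numerically. The following sketch (the input values for $\alpha$, $\sin^2\theta_W$ and the masses are standard reference numbers, assumed here rather than taken from the text) reproduces the expected size of the effect, of order $10^{-2}$:

```python
import math

# Numerical evaluation of the dominant top-bottom contribution to epsilon_1:
# eps1 = 3 alpha / (16 pi sin^2(theta_W) M_W^2)
#        * [Mt^2 + Mb^2 - 2 Mt^2 Mb^2 / (Mt^2 - Mb^2) * ln(Mt^2 / Mb^2)].
# Couplings and masses below are standard reference values (assumed inputs,
# not quoted from the text); masses are in GeV.

def eps1_top_bottom(mt, mb, alpha=1 / 128.9, s2w=0.2315, mw=80.2):
    pref = 3 * alpha / (16 * math.pi * s2w * mw**2)
    t, b = mt**2, mb**2
    bracket = t + b - 2 * t * b / (t - b) * math.log(t / b)
    return pref * bracket

# Dominated by the Mt^2 term: roughly quadratic growth with the top mass.
print(eps1_top_bottom(175.0, 4.9))
print(eps1_top_bottom(185.0, 4.9))
```

As expected from the $M_t^2$ dominance of the bracket, the result grows essentially quadratically with the top quark mass.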
However, due to the squark mixing governed by the $A_t$ and $\mu$ parameters, these are not the mass eigenstates of the model. In terms of the mass eigenstates $m_{\tilde{t}_{1,2}}$ and $m_{\tilde{b}_{1,2}}$, the dominant stop--sbottom contribution to $\epsilon_1$ is given by \cite{WS} \begin{eqnarray} \epsilon_1^{\tilde{t}-\tilde{b}} & = & \frac{3 \; \alpha}{16 \pi \sin^2\theta_W \; M_W^2} \left( T_{1 1}^2 \; g(m_{\tilde{t}_1}, m_{\tilde{b}_1}) \label{eq:tbeps1} \right. \nonumber\\ & + & \left. T_{1 2}^2 \; g(m_{\tilde{t}_2},m_{\tilde{b}_1}) - T_{1 1}^2 \; T_{1 2}^2 \; g(m_{\tilde{t}_1}, m_{\tilde{t}_2}) \right), \label{eq:eps1mst} \end{eqnarray} where $T_{ij}$ is the mixing matrix which diagonalizes the stop mass matrix: \begin{equation} T {\cal{M}}_{st} T^{\dagger} = {\cal{M}}^D_{st}. \end{equation} In the above, we have neglected the sbottom mixing angle, identifying $\tilde{b}_L \equiv \tilde{b}_1$; this is an excellent approximation for the low values of $\tan\beta$ we are considering. The function $g(m_1,m_2)$ is directly related to the dependence of the variable $\epsilon_1$ on the top and bottom masses, Eq.(\ref{eq:tbeps1}), \begin{equation} g(m_1,m_2) = m_1^2 + m_2^2 - \frac{2 m_1^2 m_2^2}{ m_1^2 - m_2^2} \ln \left( \frac{m_1^2}{m_2^2} \right). \end{equation} In the supersymmetric limit, $A_t = \mu = 0$, $\tan\beta = 1$, the squark mixing vanishes and the weak eigenstates become mass eigenstates, with masses equal to those of their standard model partners. It is easy to verify that the contribution to the parameter $\epsilon_1$ of the stop--sbottom sector becomes equal to that of the top--bottom sector in this limit. On the other hand, for small mixing and a soft supersymmetry breaking parameter $m_Q^2 \gg m_t^2$, \begin{equation} \epsilon_1^{\tilde{t} - \tilde{b}} \simeq \epsilon_1^{t - b} \frac{m_t^2}{3 m_Q^2}. \end{equation} Hence, for sufficiently large values of the squark masses, the squark contribution to the $\epsilon_1$ parameter vanishes.
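The decoupling relation above is easy to check numerically. The sketch below (parameter values are illustrative) compares the no-mixing squark loop function, with $m_{\tilde{t}}^2 = m_Q^2 + m_t^2$ and $m_{\tilde{b}} = m_Q$, against the quark loop function scaled by $m_t^2/(3 m_Q^2)$:

```python
import math

# Numerical check of the decoupling relation: for negligible mixing
# (T_11 = 1, T_12 = 0) and m_Q >> m_t, the squark loop function
# g(m_stop, m_sbottom), with m_stop^2 = m_Q^2 + m_t^2 and m_sbottom = m_Q,
# reduces to g(M_t, M_b) * m_t^2 / (3 m_Q^2).  Masses (in GeV) are
# illustrative inputs, not values quoted in the text.

def g(m1, m2):
    a, b = m1**2, m2**2
    return a + b - 2 * a * b / (a - b) * math.log(a / b)

mt, mb, mQ = 175.0, 4.9, 1000.0
squark_over_quark = g(math.sqrt(mQ**2 + mt**2), mQ) / g(mt, mb)
approx = mt**2 / (3 * mQ**2)
print(squark_over_quark, approx)
```

The two numbers agree at the per-cent level already for $m_Q = 1$ TeV, and the agreement improves as $m_Q$ grows, which is precisely the decoupling of the squark contribution stated above.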
The sleptons give a similar contribution to the parameter $\epsilon_1$, although it is reduced by a factor of 3 with respect to Eq. (\ref{eq:eps1mst}), due to the color factor. The only additional contribution that can become large is the chargino one, if their masses are close to the production threshold at the $Z^0$ peak, $m_{\tilde{\chi}_l} \simeq M_Z/2$. The derivative of the chargino vacuum polarization contribution becomes large if the chargino masses approach the production threshold. Indeed, it behaves like \begin{equation} \Pi^{{'}} (M_Z^2) \simeq \left( M_Z^2 - 4 m_{\tilde{\chi}^+}^2 \right)^{-1/2}. \label{eq:threshold} \end{equation} The above expression formally diverges if the chargino masses tend to $M_Z/2$. However, Eq. (\ref{eq:e5}) loses its validity when $m_{\tilde{\chi}^+} - M_Z/2 < \Gamma_Z$, which means that it can only be trusted if the chargino masses are above 50 GeV \cite{BFC}. In the following, we give the dominant contribution to $\Pi^{'}(M_Z^2)$ for sufficiently light charginos. The diagonalization of the chargino mass matrix is performed by a bi--unitary transformation \begin{equation} U^* \; {\cal{M}}_{ch} V^\dagger = {\cal{M}}_D. \end{equation} We can define the new matrices \cite{GS} \begin{eqnarray} U_{i j}^L & = & \frac{1}{2} U^*_{i 2} U_{j 2} - \cos^2\theta_W \delta_{i j} \nonumber\\ U_{i j}^R & = & \frac{1}{2} V_{i 2} V^*_{j 2} - \cos^2\theta_W \delta_{i j} \nonumber\\ X_{i j} & = & U^L_{i j} U^{L *}_{i j} + U^R_{i j} U^{R *}_{i j} \nonumber\\ Y_{i j} & = & U^L_{i j} U^{R *}_{i j} + U^R_{i j} U^{L *}_{i j} \label{eq:matrices} \end{eqnarray} Then, the dominant (formally divergent in the limit $m_{\tilde{\chi}^+} \rightarrow M_Z/2$) chargino contributions to the $\epsilon_1$ parameter are included in the definition of the variable $e_5$, Eq. (\ref{eq:e5}), and are given by \begin{eqnarray} e_5 & = & \frac{2 g_2^2}{\cos^2\theta_W} \sum_{i,j} \left\{ X_{i j} \left[ 2 M_Z^2 \left( B^{'}_{21} (M_Z^2, M_i, M_j) \right. \right.
\right. \nonumber\\ & - & \left. \left. \left. B^{'}_1(M_Z^2, M_i, M_j) \right) + ( M_j^2 - M_i^2) B_1^{'}(M_Z^2, M_i, M_j) \right. \right. \nonumber\\ & + & \left. \left. M_i ( M_i X_{i j} - M_j Y_{i j} ) B_0^{'}(M_Z^2,M_i,M_j) \right] \right\} , \end{eqnarray} where the $B_i^{'}$ symbolize the derivatives of the corresponding Passarino--Veltman functions \cite{PV}, which are given by \begin{eqnarray} B_0^{'}(M_Z^2,m_1^2,m_2^2) & = & \frac{1}{16 \pi^2} \int_0^1 dx \; \frac{ x ( 1 - x )}{ \chi(m_1^2,m_2^2,x)} \nonumber\\ B_{1}^{'}(M_Z^2,m_1^2,m_2^2) & = & \frac{1}{ 16 \pi^2} \int_0^1 dx \; \frac{ x^2 ( 1 - x)}{ \chi(m_1^2,m_2^2,x) } \nonumber\\ B_{21}^{'}(M_Z^2,m_1^2,m_2^2) & = & \frac{1}{16 \pi^2} \int_0^1 dx \; \frac{ x^3 ( 1 - x )}{\chi(m_1^2,m_2^2,x)} , \end{eqnarray} where \begin{equation} \chi(m_1^2,m_2^2,x) = m_1^2 + (m_2^2 - m_1^2 - M_Z^2) x + M_Z^2 \; x^2 . \label{eq:chi} \end{equation} Observe that, for $m_1^2 = m_2^2$, the argument becomes \begin{equation} \chi(m_1^2,m_1^2,x) = M_Z^2 \left[ (x - 1/2)^2 + (m_1^2/M_Z^2 - 1/4) \right], \label{eq:chieq} \end{equation} and the derivatives of the Passarino--Veltman functions listed above hence become singular for $m_1^2 \rightarrow M_Z^2/4$. \newpage {}~\\ {\large{\bf {Appendix B.}}} \\ In this appendix, we include the relevant formulae for the computation of the parameter $\epsilon_b$ in the minimal supersymmetric standard model in the low $\tan\beta$ regime. The main Standard Model contribution comes from the top quark--$W^+$ one loop diagram. This may be expressed, within an excellent approximation for $M_t \geq 160$ GeV, as a series in the parameter $r = M_t^2/M_W^2$, namely \cite{epsbst} \begin{eqnarray} \epsilon_b^{SM} & = & - \frac{ \alpha } {8 \pi \sin^2\theta_W} \left[ r + 2.88 \log(r) - 6.716 + \frac{\left( 8.368 \log(r) - 3.408 \right)}{r} \right. \nonumber\\ & + & \left.
\frac{ \left( 9.126 \log(r) + 2.26 \right) }{r^2} + \frac{\left( 4.043 \log(r) + 7.41 \right)}{r^3} \right] . \end{eqnarray} In the low $\tan\beta$ regime, the main contributions to the $Z - b \bar{b}$ vertex associated with the Higgs and supersymmetric particles come from the charged Higgs contribution, which tends to enhance the Standard Model signal, and from the chargino--stop one loop contribution, which tends to reduce the Standard Model signal. The charged Higgs contribution is given by \cite{epsbs} \begin{equation} \epsilon_b^{H^+} = - \frac{ \alpha }{2 \pi \sin^2\theta_W} F_b^{H^+} \end{equation} with \begin{eqnarray} F_b^{H^+} & = & \frac{M_t^2}{2 M_W^2 \tan^2\beta} \left[ b_1(m_{H^+},M_t,M_b^2) v^{(t)}_L + \left( \frac{M_Z^2}{\mu_R^2} c_6(m_{H^+},M_t,M_t) \right. \right. \nonumber\\ & - & \left. c_0(m_{H^+},M_t,M_t) - \frac{1}{2} \right) v^{(t)}_R + \frac{M_t^2}{\mu_R^2}\; c_2(m_{H^+},M_t,M_t) v^{(t)}_L \nonumber\\ & + & \left. c_0(M_t,m_{H^+},m_{H^+}) \left( \frac{1}{2} - \sin^2\theta_W \right) \right] , \end{eqnarray} where $\mu_R$ is a renormalization scale, $v_L^{(t)} = 0.5 - 2 \sin^2\theta_W/3$ and $v_R^{(t)} = - 2 \sin^2\theta_W/3$, and $b_1(a,b,c)$, $c_k(a,b,c)$ with $k = 0, 2, 6$ are the corresponding reduced Passarino--Veltman functions.
Since $m_b^2 \ll M_Z^2, M_t^2$, they are well approximated by \begin{eqnarray} b_1(m_1,m_2,0) & = & \int_0^1 dx \; x \log\left( \frac{ m_1^2 x + m_2^2 (1 - x) }{ \mu_R^2 } \right) \nonumber\\ c_0(m_1,m_2,m_3) & = & \int_0^1 dx \; \left( \frac{ \tilde{\chi}(x) \log\left[\tilde{\chi}(x)\right] - \tilde{\chi}(x) - b(x) \log\left[b(x)\right] + b(x) }{a(x)} \right) \nonumber\\ c_2(m_1,m_2,m_3) & = & \int_0^1 dx \; \frac{ \log\left( \tilde{\chi}(x) \right) - \log\left( b(x) \right) }{ a(x) } \nonumber\\ c_6(m_1,m_2,m_3) & = & \int_0^1 dx \; x \frac{ \log\left( \tilde{\chi}(x) \right) - \log\left( b(x) \right) }{ a(x) }, \end{eqnarray} and the arguments $a(x)$ and $b(x)$ are given by \begin{eqnarray} a(x) & = & \frac{m_3^2 - m_1^2 - x M_Z^2}{\mu_R^2} \nonumber\\ b(x) & = & \frac{m_1^2 + x \left( m_2^2 - m_1^2 \right)} {\mu_R^2} , \end{eqnarray} while $\tilde{\chi}(x) = \chi(m_3^2,m_2^2,x)/\mu_R^2$ and $\chi(m_3^2,m_2^2,x)$ has been defined in Eq.(\ref{eq:chi}). The chargino contribution takes a somewhat more complicated expression. It is given by \cite{epsbs} \begin{equation} \epsilon_b^{\tilde{\chi}^+} = - \frac{ \alpha }{2 \pi \sin^2\theta_W} \left( F_b^{\tilde{\chi}^+}(M_t) - F_b^{\tilde{\chi}^+}(0) \right), \end{equation} where \begin{equation} F_b^{\tilde{\chi}^+}(M_t) = F_b^{\tilde{\chi}^+ (a)}(M_t) + F_b^{\tilde{\chi}^+ (b)}(M_t) + F_b^{\tilde{\chi}^+ (c)}(M_t), \end{equation} and \begin{eqnarray} F_b^{\tilde{\chi}^+ (a)}(M_t) & = & \sum_{i, j} b_1( m_{\tilde{t},j},M_i,m_b^2) \left| \Lambda_{j,i}^L \right|^2, \nonumber\\ F_b^{\tilde{\chi}^+ (b)}(M_t) & = & \sum_{i,j,k} c_0(M_k, m_{\tilde{t},i}, m_{\tilde{t},j}) \left( \frac{2}{3} \sin^2\theta_W \delta_{ij} - \frac{1}{2} T^*_{i1} T_{j1} \right) \Lambda_{i k}^L \Lambda^{* L}_{j k}, \nonumber\\ F_b^{\tilde{\chi}^+ (c)}(M_t) & = & \sum_{i,j,k} \left\{ \left[ \frac{M_Z^2}{\mu_R^2} c_6( m_{\tilde{t},k},M_i,M_j) - \frac{1}{2} - c_0( m_{\tilde{t},k},M_i,M_j) \right] U^R_{i j} \right. \nonumber\\ & + & \left. 
\frac{M_i M_j}{\mu_R^2} c_2( m_{\tilde{t},k},M_i,M_j ) U^L_{i j} \right\} \Lambda^L_{k i} \Lambda^{* L}_{k j}, \end{eqnarray} with \begin{equation} \Lambda^L_{i j} = T_{i 1} V^*_{j 1} - \frac{M_t}{\sqrt{2} M_W \sin\beta} T_{i2} V^*_{j2} \end{equation} and $T_{ij}$ ($V_{ij}$, $U_{ij}$) is the stop (chargino) mixing matrix (matrices) defined in Appendix A. Observe that both the parameter $\Lambda^L_{i j}$ and the squark mass parameters depend on the top quark mass. Indeed, if the top quark mass were negligible, the squark mass parameters would acquire an approximately common value $m_{\tilde{t}}^2 \simeq m_0^2 + 7 M_{1/2}^2$. The function $F_b^{\tilde{\chi}^+}(0)$ hence becomes independent of the stop mixing matrix (which is formally equal to the identity in the limit $M_t = 0$). \newpage {}~\\ {\bf{FIGURE CAPTIONS}}\\ {}~\\ Fig. 1. Lightest stop mass as a function of the lightest chargino mass, for the case of universal soft supersymmetry breaking parameters at the grand unification scale and four different values of the physical top quark mass $M_t = 160,\;165,\;175$ and 185 GeV.\\ {}~\\ Fig. 2. The same as figure 1, but for the case I of non--universality for the scalar mass parameters at $M_{GUT}$ : $m_{H_1}^2(0) = 0$, $m_{H_2}^2(0) = 2 m_Q^2(0)$. \\ {}~\\ Fig. 3. The same as figure 1, but for the case II of non--universality for the scalar mass parameters at $M_{GUT}$ : $m_{H_2}^2(0) = 0$, $m_{H_1}^2(0) = 2 m_Q^2(0)$. \\ {}~\\ Fig. 4. Lightest CP--even Higgs mass as a function of the physical top quark mass, for the values of $\tan\beta$ which, for each value of $M_t$, correspond to the top quark mass infrared fixed point solution (crosses). Also shown in the figure is the upper bound on the Higgs mass as a function of the top quark mass for values of $\tan\beta \simeq 5$--10.\\ {}~\\ Fig. 5.
Lightest CP--even Higgs mass as a function of the lightest chargino mass for the case of universal scalar mass parameters at $M_{GUT}$ and for three values of the physical top quark mass $M_t = 165, 175$ and 185 GeV.\\ {}~\\ Fig. 6. The same as figure 5 but for the case I of non--universality of the soft supersymmetry breaking parameters at $M_{GUT}$.\\ {}~\\ Fig. 7. The same as figure 5 but for the case II of non--universality of the soft supersymmetry breaking parameters at $M_{GUT}$. \\ {}~\\ Fig. 8. Dependence of the precision data variable $\epsilon_1$ on the lightest chargino mass for the case of universal supersymmetry breaking scalar mass parameters at $M_{GUT}$ and for three different values of the top quark mass: $M_t = 165, 175$, 185 GeV. \\ {}~\\ Fig. 9. Dependence of the variables $\epsilon_1$, $\epsilon_b$ as a function of the lightest chargino mass, and of the ratio of the supersymmetric prediction for the branching ratio $BR(b \rightarrow s \gamma)$ to the standard model one as a function of the supersymmetric mass parameter $\mu$, for the case I of non--universality of the soft supersymmetry breaking parameters at $M_{GUT}$ and a top quark mass $M_t = 175$ GeV. \\ {}~\\ Fig. 10. The same as Fig. 9 but for the case II of non--universality of the soft supersymmetry breaking parameters at $M_{GUT}$.\\ {}~\\ Fig. 11. Dependence of the variable $\epsilon_b$ on the lightest chargino mass for the case of universal scalar mass parameters at $M_{GUT}$ and for three different values of the top quark mass: $M_t = 165, 175$ and 185 GeV. \\ {}~\\ Fig. 12. Dependence of the ratio of the supersymmetric prediction for the branching ratio $BR(b \rightarrow s \gamma)$ to the Standard Model one on the supersymmetric mass parameter $\mu$ for the case of universal scalar mass parameters at $M_{GUT}$ and three different values of the top quark mass: $M_t = 165, 175$ and 185 GeV.\\ {}~\\ Fig. 13.
Correlation between the variables $\epsilon_1$ and $\epsilon_b$ for the case of universality of the soft supersymmetry breaking parameters at $M_{GUT}$ and three different values of the top quark mass: $M_t = 165, 175$ and 185 GeV. \\ {}~\\ Fig. 14. The same as Fig. 13, but for cases I and II of non--universality of the scalar mass parameters at $M_{GUT}$ and a top quark mass $M_t = 175$ GeV.\\ {}~\\ Fig. 15. Correlation between the variable $\epsilon_b$ and the ratio of the supersymmetric prediction for the branching ratio $BR(b \rightarrow s \gamma)$ to the standard model one, for the case of universality of the soft supersymmetry breaking parameters at $M_{GUT}$ and three different values of the top quark mass: $M_t = 165, 175$ and 185 GeV.\\ {}~\\ Fig. 16. The same as Fig. 15, but for the cases I and II of non--universality of the scalar mass parameters at $M_{GUT}$ and a top quark mass $M_t = 175$ GeV.\\ \newpage
alg-geom/9408006
\section*{Introduction} In his preprint~\cite{Lan}, J.M.~Landsberg introduces an elementary characterization of complete intersections (Proposition~1.2 in \cite{Lan}). The proof of this proposition uses the method of moving frames. The aim of this note is to present an elementary proof of Landsberg's criterion that is valid over any ground field. \section{Notation and statement of results} Let $k$ be an algebraically closed field and ${\bf P}^N= \mathop{\rm Proj}\nolimits k[T_0,\ldots,T_N]$ the $N$-dimensional projective space over $k$. If $F$ is a homogeneous polynomial in $T_0,\ldots ,T_N$, we will denote by $Z(F)\subset {\bf P}^N$ the hypersurface defined by $F$. If $F$ is a homogeneous polynomial and $x=(x_0:\ldots:x_N)\in \PP^N$, put $d_x F=\left(\partial F/\partial T_0(x),\ldots, \partial F/\partial T_N(x)\right)\in k^{N+1}$ (actually $d_x F$ depends on the choice of homogeneous coordinates for $x$; this abuse of notation should not lead to confusion). If $x\in X$, where $X\subset\PP^N$ is a projective variety, then $T_xX\subset \PP^N$ denotes the embedded Zariski tangent space to $X$ at $x$. If $X\subset {\bf P}^N$ is a projective variety, then its ideal sheaf will be denoted by $\idsheaf X\subset \O_{{\bf P}^N}$ and its homogeneous ideal by $I_X\subset k[T_0, \ldots, T_N]$. We will say that a hypersurface $Y=Z(F)$ {\em trivially contains $X$\/} iff $F=\sum G_iF_i$, where the $G_i$'s and $F_i$'s are homogeneous polynomials, the $F_i$ vanish on $X$, and $\deg F_i<\deg F$ for all $i$. If $Y$ trivially contains $X$, then $Y\supset X$. We will say that a hypersurface $W$ {\em non-trivially contains $X$\/} iff $W$ contains $X$, but not trivially. The following proposition is a slight reformulation of Landsberg's criterion (cf.\ \cite[Proposition 1.2]{Lan}): \begin{prop} For a projective variety $X\subset {\bf P}^N$, the following conditions are equivalent: \begin{itemize} \item[(i)] $X$ is a complete intersection.
\item[(ii)] There exists a smooth point $x\in X$ having the following property: any hypersurface $W\subset {\bf P}^N$ that non-trivially contains $X$ must be smooth at $x$. \item[(iii)] For any smooth point $x\in X$ and any hypersurface $W$ that non-trivially contains $X$, $W$ is smooth at $x$. \item[(iv)] For any smooth point $x\in X$ and any hypersurface $W$ that non-trivially contains $X$, $T_xW$ cannot contain an intersection $\bigcap_i T_xW_i$, where each $W_i$ is a hypersurface s.t.\ $W_i \supset X$ and $\deg W_i<\deg W$ (it is understood that the intersection of an empty family of tangent spaces is the entire $\PP^N$). \end{itemize} \end{prop} \section{Proofs} For the sequel we need two lemmas. \begin{lemma}\label{subst} Let $F_1,\ldots,F_r$ be homogeneous polynomials over $k$ in $T_0,\ldots, T_N$. Assume that $x=(x_0:\ldots:x_N)\in \PP^N$ is their common zero and that the vectors $d_xF_1,\ldots, d_xF_r$ are linearly dependent. Then one of the following alternatives holds: \begin{enumerate} \item There is $j\in [1;r]$ s.t.\ $F_j$ belongs to the ideal in $k[T_0,\ldots,T_N]$ generated by the $F_i$'s with $i\ne j$. \item There are homogeneous polynomials $\tilde F_1,\ldots, \tilde F_r$ s.t.\ the ideals $(F_1,\ldots,F_r)$ and $(\tilde F_1,\ldots, \tilde F_r)$ coincide, $\deg \tilde F_i=\deg F_i$ for all $i$, and $d_x \tilde F_j=0$ for some $j$. \end{enumerate} \end{lemma} {\bf Proof.} Let the shortest linear relation among the $d_xF_j$'s have the form $$ \lambda_1d_xF_1+\cdots+\lambda_sd_xF_s=0, $$ where $\lambda_j\ne 0$ for all $j$. Reordering the $F_j$'s if necessary, we may assume that $\deg F_1\le \deg F_2\le \cdots\le \deg F_s$. Let $t$ be the number such that $\deg F_t=\deg F_s$ and $\deg F_{t-1} <\deg F_s$ (if $\deg F_1=\deg F_s$, set $t=1$). If the polynomials $F_t,\ldots,F_s$ are linearly dependent, then it is clear that one of them lies in the ideal generated by the others and there is nothing more to prove.
Assume from now on that $F_t, F_{t+1},\ldots, F_s$ are linearly independent. Then there exists an index $j\in[t;s]$ and numbers $\mu_i$, where $i \in [t;s]$ s.t.\ \begin{equation}\label{G:def} F_j=\sum_{i\in [t;s]\setminus \{j\}}\mu_i F_i+ \mu_j(\lambda_t F_t+\cdots+\lambda_s F_s). \end{equation} For each $i\in[1;t-1]$, choose a homogeneous polynomial $G_i$ s.t.\ $\deg G_i= \deg F_s-\deg F_i$ and $G_i(x_0,\ldots, x_N)=\lambda_i$, and set \begin{equation}\label{tilde:def} \tilde F_j=\sum_{i<t}G_iF_i+\sum_{i\ge t}\lambda_i F_i. \end{equation} If $\tilde F_j=0$, then $F_s\in (F_1,\ldots, F_{s-1})$ and the first alternative holds. Otherwise, $\deg \tilde F_j=\deg F_j$, $d_x\tilde F_j=0$ by virtue of (\ref{tilde:def}), and it follows from (\ref{G:def}) and (\ref{tilde:def}) that $$ F_j=\sum_{i\in [t;s]\setminus \{j\}}\mu_i F_i +\mu_j \tilde F_j -\mu_j\sum_{i<t}G_i F_i, $$ whence $(F_1,\ldots,F_{j-1}, \tilde F_j, F_{j+1},\ldots, F_s)=(F_1,\ldots,F_s)$. Hence in this case the second alternative holds, and we are done. The second lemma belongs to folklore. To state this lemma, let us introduce some notation. Denote by $\S$ the set of sequences of non-negative integers $\delta=(\delta_1,\delta_2,\ldots)$ s.t.\ $\delta_M=0$ for all $M\gg 0$. If $\delta,\eta\in \S$, we will write~$\delta \succ \eta$ iff there is an integer $i$ s.t.\ $\delta_i >\eta_i$ and $\delta_j=\eta_j$ for all $j>i$. \begin{lemma}\label{folk} Any sequence $\delta_1 \succ \delta_2 \succ\cdots$ must terminate. \end{lemma} \noindent {\bf Proof.} For any $\delta\in \S$, set $n(\delta)=\max\{j:\delta_j\ne 0\}$, $\ell(\delta)=\delta_{n(\delta)}> 0$. If $\delta\succ \eta$ and $n(\delta)=n(\eta)$, then $\ell(\delta)\ge \ell(\eta)$. Let us prove the lemma by induction on $n(\delta_1)$. If $n(\delta_1)\le 1$, the result is evident. Assuming that the lemma is true whenever $n(\delta_1)< m$, suppose that there is an infinite sequence $\delta_1 \succ \delta_2 \succ\cdots$ with $n(\delta_1)=m$. 
If $n(\delta_j)<n(\delta_1)$ for some $j$, we arrive at a contradiction by the induction hypothesis. Hence, $n(\delta_j) =n(\delta_1) =m$ for all $j$ and $\ell(\delta_1)\ge \ell(\delta_2)\ge\cdots >0$. Thus there exists an integer $N$ s.t.\ $\ell(\delta_j)$ is constant for $j\ge N$. For any $j\ge N$, denote by $\delta'_j\in \S$ a sequence that is obtained from $\delta_j$ by replacing its last positive term by zero. It is clear that $\delta'_N \succ \delta'_{N+1} \succ\cdots$, and this sequence is infinite by our assumption. This is again impossible by the induction hypothesis since $n(\delta'_j)< n(\delta_j)=m$, whence the lemma. \smallskip \noindent {\bf Proof} of $(ii)\Rightarrow (i)$. Put $a=N-\dim X$. Let $(F_1,\ldots,F_r)$ be a system of (homogeneous) generators of $I_X$. To any such system assign a sequence $\delta(F_1,\ldots, F_r) \in \S$, where $\delta(F_1, \ldots,F_r)_i = \#\{j\in [1;r]:\deg F_j=i\}$. I claim that \begin{quote} if $r>a$, then $I_X=(\Phi_1,\ldots,\Phi_s)$, where the $\Phi_i$'s are homogeneous polynomials s.t.\ $\delta(F_1,\ldots,F_r)\succ \delta(\Phi_1,\ldots, \Phi_s)$. \end{quote} To prove this claim, observe that $d_xF_1,\ldots,d_xF_r$ are linearly dependent since $X$ is smooth at $x$ and $r>\mathop{\rm codim}\nolimits X$. Now Lemma~\ref{subst} implies that either one of the $F_j$'s (say, $F_1$) can be removed without affecting $I_X$, or $I_X=(\tilde F_1,\ldots,\tilde F_r)$, where $\deg \tilde F_j=\deg F_j$ for all $j$ and $d_x \tilde F_j=0$ for some $j$. In the first case, the required $\Phi_1,\ldots,\Phi_s$ can be obtained by merely removing $F_1$; in the second case, hypothesis~$(ii)$ shows that $\tilde F_j=\sum_{i=1}^t G_i\Psi_i$, where $\Psi_i\in I_X$ and $\deg\Psi_i < \deg \tilde F_j$ for all $i$.
Replacing $\tilde F_j$ by $\Psi_1,\ldots, \Psi_t$ in the sequence $\tilde F_1,\ldots,\tilde F_r$ and putting $s=r+t-1$, we obtain a sequence $\Phi_1,\ldots,\Phi_s$ s.t.\ $I_X =(\Phi_1,\ldots, \Phi_s)$ and $\delta(\tilde F_1,\ldots,\tilde F_r)\succ \delta(\Phi_1,\ldots, \Phi_s)$. Since the degrees of $\tilde F_j$'s and $F_j$'s are the same, this means that $\delta(F_1, \ldots,F_r)\succ \delta(\Phi_1,\ldots, \Phi_s)$ as well, and the claim is proved. Now we can finish the proof as follows. If $r=a$, then $X$ is the complete intersection of $Z(F_1),\ldots,Z(F_r)$ and there is nothing to prove. If $r>a$, then by virtue of our claim we can replace the system of generators $F_1,\ldots,F_r$ by $\Phi_1,\ldots, \Phi_s$. Let us iterate this process. By virtue of Lemma~\ref{folk} this process must terminate and by virtue of our claim this is possible only when we have found a system of exactly $a$ generators of the ideal $I_X$. This means that $X$ is a complete intersection, thus completing our proof. \smallskip \noindent {\bf Proof} of $(iv)\Rightarrow (iii)\Rightarrow (ii)$. Trivial. \smallskip \noindent {\bf Proof} of $(i)\Rightarrow (iv)$. Let $X$ be a complete intersection of the hypersurfaces $Z(F_1), \ldots, Z(F_a)$. Assume that a hypersurface $W=Z(F)$, with $F$ irreducible, non-trivially contains $X$ and that $x=(x_0:\ldots :x_N)\in \PP^N$ is a smooth point of $X$; set $m=\deg F$. Since $Z(F)\supset X$ and $X$ is a complete intersection of the $Z(F_i)$'s, we see that \begin{equation}\label{expr} F=\sum G_iF_i; \end{equation} since $W$ contains $X$ non-trivially, at least one of the $G_j$'s must be a non-zero constant. Reordering $F_j$'s if necessary, we may assume that $G_j$ is a constant (hence, $\deg F_j=m$) iff $1\le j\le s$.
Taking $d_x$ of both parts of (\ref{expr}), we see that \begin{equation}\label{diffls} d_xF=\sum_{i=1}^a c_i d_x F_i,\qquad \mbox{where $c_i\ne 0$ for some $i\in [1;s]$.} \end{equation} On the other hand, assume that $W_i=Z(B_i)$ with irreducible $B_i$'s. Then the hypothesis implies that $d_x F$ is a linear combination of $d_xB_j$'s, and the fact that $X$ is a complete intersection of $Z(F_t)$'s and $Z(B_j)\supset X$ implies that, for each $j$, there is a relation \begin{equation}\label{expr'} B_j=\sum_{t>s} G_{jt}F_t \end{equation} (it suffices to sum only over $t>s$ since for $t\le s$ we have $\deg F_t= \deg W > \deg B_j$). If we take $d_x$ of both parts of (\ref{expr'}), we see that, for each $j$, $d_xB_j$ is a linear combination of $d_x F_t$'s with $t>s$. Hence $d_xF$ is also a linear combination of $d_x F_t$'s with $t>s$. Taking into account (\ref{diffls}) we see that the $d_xF_i$'s are linearly dependent. This is, however, impossible since $x$ is a smooth point of the complete intersection of $Z(F_j)$'s. This contradiction completes the proof.
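The well-founded ordering $\succ$ that drives the termination argument in Lemma~\ref{folk} is easy to experiment with computationally. The following Python sketch (the function name \texttt{succ} is ours, purely illustrative) implements the comparison and checks a strictly decreasing chain of the kind produced by the generator-reduction step:

```python
def succ(delta, eta):
    """Return True iff delta > eta in the ordering of Lemma `folk`:
    there is an index i with delta[i] > eta[i] and delta[j] == eta[j]
    for all j > i (sequences are padded with zeros on the right)."""
    n = max(len(delta), len(eta))
    d = list(delta) + [0] * (n - len(delta))
    e = list(eta) + [0] * (n - len(eta))
    for i in reversed(range(n)):
        if d[i] != e[i]:
            return d[i] > e[i]
    return False  # equal sequences: neither dominates the other

# A strictly decreasing chain, of the kind produced by the reduction
# step in the proof of (ii) => (i): trading one generator of degree m
# for finitely many generators of smaller degree lowers delta.
chain = [(0, 0, 2), (5, 1, 1), (9, 0, 1), (0, 7, 0), (3, 6, 0)]
assert all(succ(a, b) for a, b in zip(chain, chain[1:]))
```

Note that lower-indexed entries may grow arbitrarily along such a chain; termination rests only on the highest differing index, exactly as in the induction of the lemma.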
\section{Introduction} Compressible flows around blunt objects play an important role in diverse fields of science and engineering, ranging from fluid mechanics \citep[\emph{e.g.,} ][]{LandauLifshitz59_FluidMechanics, ParkEtAl2006,MackSchmid2011,TuttyEtAl2013,GrandemangeEtAl2013,GrandemangeEtAl2014}, space physics \citep[\emph{e.g.,} ][]{SpreiterAlksne70, BaranovLebedev88, SpreiterStahara95, CairnsGrabbe94, PetrinecRussell97, Petrinec02}, and astrophysics \citep{LeaYoung76, ShavivSalpeter82, CantoRaga98, SchulreichBreitschwerdt11}, to computational physics and applied mathematics \citep{Hejranfaretal09, Wilson13, GollanJacobs13, Marroneetal13}, aeronautical and civil engineering \citep{NAKANISHIKAMEMOTO93, Baker10, Aulchenkoetal12}, and aerodynamics \citep{AsanalievEtAl88,LiouTakayama05,PilyuginKhlebnikov06,Volkov09}. Yet, even for the simple case of an inviscid flow around a sphere, the problem has resisted a general or accurate analytic treatment due to its nonlinear nature. In particular, in space physics and astrophysics, the interaction of an ambient medium with much denser bodies, which are approximately solid in comparison, such as comets \citep[\emph{e.g.,} ][]{BaranovLebedev88}, planets \citep{SpreiterAlksne70,CairnsGrabbe94,PetrinecRussell97}, binary companions \citep{CantoRaga98}, galaxies \citep{ShavivSalpeter82,SchulreichBreitschwerdt11}, or large scale clumps and bubbles \citep{LeaYoung76, Vikhlininetal01, Lyutikov06, MarkevitchVikhlinin07}, is important for modeling these systems and understanding their observational signature. This is particularly true for the shocks formed in supersonic flows, due to their rich nonthermal effects \citep[\emph{e.g.,} ][]{SpreiterStahara95, Vikhlininetal01, Petrinec02, MarkevitchVikhlinin07}. 
Although these fairly complicated systems can be approximately solved numerically, they are often modeled as an idealized, inviscid flow around a simple blunt object, often approximated as axisymmetric or even spherical, with some simplified analytic description employed in order to gain a deeper, more general understanding of the system. Consequently, this fundamental problem of fluid mechanics has received considerable attention. The small Mach number $M$ regime was studied as an asymptotic series about $M=0$ \citep{LordRayleigh1916, tamada39, kaplan40,StangebyAllen71,Allen13}, and solved in the incompressible potential flow limit. Some hodograph plane results and series approximations were found in the transonic and supersonic cases \citep{Hida1955asymptotic, LiepmannRoshko57, Guderley_TransonicFlow}. In particular, approximations for the standoff distance of the bow shock \citep[\emph{e.g.,} ][]{Moeckel49, Hida53, Lighthill57, HayesProbstein66, SpreiterEtAl66, Guy74, CoronaRomero13} partly agree with experiments \citep[\emph{e.g.,} ][]{Heberle_etal50, SchwartzEckerman56}, spacecraft data \citep{FarrisRussell94, SpreiterStahara95, Veriginetal99}, and numerical computations \citep{ChapmanCairns03, IgraFalcovitz10}. However, these analytic results are typically based on ad hoc, unjustified assumptions, such as negligible compressibility effects, a predetermined shock geometry \citep{Lighthill57, Guy74}, or an incompressible \citep{Hida53} or irrotational \citep{kawamura1950mem, Hida1955asymptotic} flow downstream of the shock. Other approaches use slowly converging, or impractically complicated, expansion series \citep{LordRayleigh1916, Hida1955asymptotic, vanDyke58a, vanDyke75Book}. In all cases, the results are inaccurate or limited to a narrow parameter regime. A generic yet accurate analytic approach is needed. 
We adopt the conventional assumptions of \emph{(i)}\, an ideal, polytropic gas with an adiabatic index $\myGamma$; \emph{(ii)}\, negligible viscosity and heat conduction (ideal fluid); \emph{(iii)}\, a steady, laminar, non-relativistic flow; and \emph{(iv)}\, negligible electromagnetic fields. Typically, these assumptions hold in front of the object, but break down behind it and in its close vicinity. We thus analyze the flow ahead of the object. While spatial series expansions and hodograph plane analyses, when employed separately, are of limited use \citep[for reviews, see][]{vanDyke58b,vanDyke75Book}, we find that their combination gives good results over the full parameter range. In particular, we expand the axial flow in terms of the parallel velocity, rather than of distance. This yields an accurate, \fixapj{fully} analytic description of the \fixapj{gas-dynamic} flow, in both subsonic and supersonic regimes, already in a second or third order expansion, as shown in Fig. \ref{Fig:AllFlows}. After introducing the flow equations in \S\ref{sec:FlowEquations}, in particular along the axis of symmetry, we derive the expansion series for the subsonic regime in \S\ref{sec:SubsonicFlow}, and for the supersonic regime in \S\ref{sec:SupersonicFlow}. Some astrophysical implications are demonstrated in \S\ref{sec:Astro}, in particular for planetary bow shocks and for clumps and bubbles in the intergalactic medium (IGM). We begin the analysis with a sphere, and \fixapj{outline the generalization} for arbitrary blunt axisymmetric objects in \S\ref{sec:Discussion}, where the results are summarized and discussed. For convenience, the full results are given explicitly in Appendix \S\ref{sec:Explicit}. 
\section{Flow equations} \label{sec:FlowEquations} Under the above assumptions, the flow is governed by the stationary continuity, Euler, and energy equations, \begin{equation} \label{eq:FlowEquations} \bm{\nabla} \cdot (\rho\vect{v})=0 \, ; \,\,\,\, (\vect{v}\cdot \bm{\nabla})\vect{v}=-\frac{\bm{\nabla} P}{\rho} \, ; \,\,\, \, \vect{v}\cdot \bm{\nabla} \left( \frac{P}{\rho^\myGamma} \right)=0\, , \end{equation} where $\vect{v}$, $P$ and $\rho$ are the velocity, pressure and mass density. At a shock, downstream (subscript $d$) and upstream ($u$) quantities are related by the shock adiabat \citep[\emph{e.g.,} ][]{LandauLifshitz59_FluidMechanics}, \begin{equation} \label{eq:JumpConditions} \!\frac{\rho_d}{\rho_u} = \frac{v_u}{v_d} = \frac{(\myGamma+1) M_u^2}{(\myGamma-1)M_u^2+2 } \, ; \,\,\, \, \frac{P_d}{P_u} = \frac{2\myGamma M_u^2+1-\myGamma}{\myGamma+1}, \end{equation} with $M\equiv v/\mycs$, and $\mycs=(\gamma P/\rho)^{1/2}$ being the sound speed. Along streamlines, Bernoulli's equation implies that \begin{equation} \label{eq:Bernoulli} w + v^2/2 = \mybar{w} = \mbox{const.} \mbox{ ,} \end{equation} where $w=\myGamma P/[(\myGamma-1)\rho]$ is the enthalpy, and a bar denotes (henceforth) a putative stagnation ($v=0$) point. The far incident flow is assumed to be uniform and unidirectional, so $\mybar{w}$ is the same constant for all streamlines. Equation~(\ref{eq:Bernoulli}) remains valid across shocks, as $w+v^2/2$ is the ratio between the normal fluxes of energy and of mass, each conserved separately across a shock. Bernoulli's equation (\ref{eq:Bernoulli}) relates the local Mach number, \begin{equation} \label{eq:M0andPi} M = v/c = \left(\mystag{M}^{-2}-\myS^{-2}\right)^{-\frac{1}{2}} = ( \Pi^{-\frac{\myGamma-1}{\myGamma}}-1 )^{\frac{1}{2}} \myS \mbox{ ,} \end{equation} to the Mach number with respect to stagnation sound, $\mystag{M} \equiv v/\mycsStag$, and to the normalized pressure, $\Pi\equiv P/\mybar{P}$. 
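As a numerical sanity check (not part of the derivation), the jump conditions of Eq.~(\ref{eq:JumpConditions}), the conservation of $w+v^2/2$ (Eq.~\ref{eq:Bernoulli}) across the shock, and the Mach--pressure relation of Eq.~(\ref{eq:M0andPi}) can all be verified for an arbitrary upstream state; the values below are illustrative:

```python
def shock_jump(gamma, M_u, rho_u, P_u, v_u):
    """Shock adiabat, Eq. (JumpConditions), for an ideal polytropic gas."""
    r = (gamma + 1) * M_u**2 / ((gamma - 1) * M_u**2 + 2)   # rho_d / rho_u
    P_d = P_u * (2 * gamma * M_u**2 + 1 - gamma) / (gamma + 1)
    return rho_u * r, P_d, v_u / r

def enthalpy(gamma, P, rho):
    """w = gamma P / [(gamma - 1) rho]."""
    return gamma * P / ((gamma - 1) * rho)

gamma, rho_u, P_u = 7/5, 1.0, 1.0            # illustrative upstream state
M_u = 3.0
v_u = M_u * (gamma * P_u / rho_u) ** 0.5     # v = M * c
rho_d, P_d, v_d = shock_jump(gamma, M_u, rho_u, P_u, v_u)

# w + v^2/2 (Eq. Bernoulli) is conserved across the shock:
w_bar = enthalpy(gamma, P_u, rho_u) + v_u**2 / 2
assert abs(enthalpy(gamma, P_d, rho_d) + v_d**2 / 2 - w_bar) < 1e-12

# Eq. (M0andPi): recover the local Mach number from Pi = P / P_bar,
# with S^2 = 2/(gamma - 1) and Pi = (w / w_bar)^(gamma/(gamma-1))
S2 = 2 / (gamma - 1)
Pi = (enthalpy(gamma, P_u, rho_u) / w_bar) ** (gamma / (gamma - 1))
assert abs(((Pi ** (-(gamma - 1) / gamma) - 1) * S2) ** 0.5 - M_u) < 1e-12
```

Both assertions hold to machine precision, since they are exact identities of the relations above rather than approximations.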
We define $\myS^2\equiv 2/(\myGamma-1)$ and $\myW^2\equiv 2/(\myGamma+1)$ as the strong and weak shock limits of $\mystag{M}^2$, so the subsonic (supersonic) regime becomes $0<\mystag{M}<\myW$ ($\myW<\mystag{M}<\myS$). Figure~\ref{Fig:AllFlows} illustrates these definitions, and shows the shock adiabat Eq.~(\ref{eq:JumpConditions}) (as horizontal jumps at fixed $r$) for $\gamma=7/5$. \begin{figure*} \centerline{\hspace{3.5cm}\epsfxsize=23cm \epsfbox{\myfig{FullAxis2.eps}}} \caption{ Radial profiles of Mach number $M$ (top axis) and of normalized velocity $M_0$ and pressure $\Pi$ (bottom axis; \fixapj{see definitions in Eq.~\ref{eq:M0andPi}}) in front of a unit ($r=1$) sphere, for $\myGamma=7/5$, according to numerical simulations (symbols) and our approximation (curves), in both subsonic (bluish circles and dot-dashed curves) and supersonic (reddish squares and dashed curves) regimes. Numerical data shown (alternating shading to guide the eye) for $\myttilde{M}=0.6$, $0.7$, $0.8$, $0.95$ \citep{Karanjkar08}, $1.1$, $1.3$, $1.62$ \citep{Krause75, Heberle_etal50}, $3$ \citep{BonoAwruch08}, and $5$ \citep{Krause75, SedneyKahl61}. The shock standoff distance (solid green), with its $\myttilde{M}\to\infty$ limit (triangle), is also shown. The right side of the figure extends it (on a different scale, to show the full $M$ range) to the supersonic, $M>1$ part of the flow, upstream of shocks; horizontal jumps represent the shock adiabat Eq.~(\ref{eq:JumpConditions}). \emph{Inset}: standoff distance measured experimentally (symbols) and using the parameter-free (dotted curves; Eq.~(\ref{eq:xiSeries})) and single-parameter fit (Eq.~(\ref{eq:xiFit})) approximations, for $\myGamma=7/5$ (triangles; \citet{Heberle_etal50,vanDyke58b, SedneyKahl61, Krause75}; solid curve for $\beta=0.48$) and $\myGamma=5/3$ (diamonds; \citet{SchwartzEckerman56}; dashed curve for $\beta=0.52$). 
\label{Fig:AllFlows} } \end{figure*} Consider the flow ahead of a sphere along the symmetry axis, $\theta=0$ in spherical coordinates $\{r,\theta,\phi\}$. Here, the flow monotonically slows with decreasing $r$, down to $v=0$ at the stagnation point, which we normalize as $\mybar{\vect{r}}=\{1,0,0\}$. Symmetry implies that along the axis $\vect{v}=-u(r)\unit{r}$, where $u>0$. Here, Eqs.~(\ref{eq:FlowEquations}) become \begin{equation} \label{eq:AxisEquations} \frac{\partial\ln (\rho u)}{\partial\ln r^2}=\frac{q-u}{u} \, ; \quad \partial_r P=-\rho u\partial_r u \, ; \quad \partial_\theta P=0 \mbox{ ,} \end{equation} along with Bernoulli's Eq.~(\ref{eq:Bernoulli}), where we defined $q\equiv (\partial_\theta v_\theta)_{\theta=0}$ as a measure of the perpendicular velocity. Hence, \begin{align} \label{eq:uODE} \partial_r u & = \frac{2}{r}(q-u) \frac{1-\mystag{M}^2/\myS^2}{1-\mystag{M}^2/\myW^2} \mbox{ .} \end{align} Our analysis relies on $u(r)$ being a monotonic function. This allows us to write $q=q(u)$ as a function of $u$ and not of $r$. Integrating Eq.~(\ref{eq:uODE}) thus yields \begin{equation} \label{eq:uSolution} 2\ln r = \int_0^{u(r)} \frac{1-\mystag{M}(u')^2/\myW^2}{1-\mystag{M}(u')^2/\myS^2} \,\, \frac{du'}{q(u')-u'} \, , \end{equation} so given $q(u)$, the near-axis flow is directly determined. Unlike $u(r)$, or other expansion parameters used previously, the $q(u)$ profile for typical bodies varies little, and nowhere vanishes. It is well approximated by a few terms in a power expansion of the form \begin{equation} \label{eq:qExpansion} q(u) = q_0 + q_1 (u-U) + q_2(u-U)^2 + \ldots \mbox{ ,} \end{equation} where $U$ is a reference velocity, so the integral in Eq.~(\ref{eq:uSolution}) can be analytically carried out to any order (see \S\ref{sec:Explicit}). Moreover, we next show that the boundary conditions tightly fix $q(u)$, giving a good approximation for the near axial flow. First expand $q\simeq \mybar{q}$ near stagnation, with $U=\mybar{u}=0$. 
An initially homogeneous subsonic or even mildly supersonic \citep{LandauLifshitz59_FluidMechanics} flow remains irrotational, $\bm{\nabla}\times \vect{v}=0$, in which case the lowest-order constraint is \begin{equation} \label{eq:ConstPotentialFlow} \mybar{q}_1 = -1/2 \mbox{ ,} \end{equation} whereas for a supersonic, rotational flow, it becomes \begin{equation} \label{eq:ConstGeneralFlow} 3\mycsStag^2\mybar{q}_3+7\mycsStag\mybar{q}_2 = 2\mybar{q}_1 + 6\frac{\mybar{q}_0}{\mycsStag} + \mybar{q}_1\left(\frac{\mybar{q}_0}{\mycsStag}\right)^2 + \left(\frac{\mybar{q}_0}{\mycsStag}\right)^3 \mbox{ ,} \end{equation} as seen by expanding Eqs.~(\ref{eq:FlowEquations}) to order $\theta^2(r-1)^3$. The generalization for non-spherical objects is discussed in \S\ref{sec:Discussion}. Next, we estimate $q$ far from the body, and use it to approximate the flow in both the subsonic (\S\ref{sec:SubsonicFlow}) and supersonic (\S\ref{sec:SupersonicFlow}) regimes. \section{Subsonic flow} \label{sec:SubsonicFlow} In the subsonic, $\tilde{M}<1$ case, we derive the incoming axial flow out to $r\to\infty$. Using the incident flow (labeled by a tilde, henceforth) boundary condition $\mytilde{\vect{v}}=\mytilde{u}\{-\cos\theta,\sin\theta,0\}$, we may expand $\myttilde{q}$ with $U=\mytilde{u}$, such that \begin{equation} \label{eq:ConstSubsonicQ0} \myttilde{q}_0= (\partial_\theta \mytilde{v}_\theta)_{\theta=0} =\mytilde{u} \mbox{ .} \end{equation} Additional terms can be derived using $\mytilde{M}\ll 1$ or $r\gg 1$ expansions appropriate for the relevant object. Here, it will suffice to consider the leading, $(u-\mytilde{u})\propto r^{-\alpha}$ behavior at large radii, such that Eq.~(\ref{eq:uODE}) yields \begin{equation} \label{eq:ConstSubsonicQ1} \myttilde{q}_1 = 1 - \frac{\alpha}{2}\, \frac{1-\myttilde{M}_0^2/\myW^2}{1-\myttilde{M}_0^2/\myS^2} \mbox{ .} \end{equation} In the incompressible limit, $\alpha=3$ for any object \citep[\emph{e.g.,} ][]{LandauLifshitz59_FluidMechanics}. 
This also holds for general forward-backward symmetric objects in any potential flow. To see the latter, expand the potential $\Phi$, defined by $\vect{v}=\tilde{u}\bm{\nabla}\Phi$, as a power series in $r$. Imposing the $r\to\infty$ boundary conditions and regularity across $\theta=0$ yields \begin{align} \Phi = -r\cos\theta+\frac{\varphi_1}{r \Theta}+\frac{\varphi_2\cos\theta}{r^2 \Theta^3} + \ldots \mbox{ ,} \end{align} where $\Theta\equiv [1-M^2(\myS^{-2}+\sin^2\theta)]^{1/2}$. The constants $\varphi_k$ are determined by the boundary conditions on the specific body. Symmetry under forward-backward inversion, $\Phi\to-\Phi$ if $\theta\to \pi-\theta$, requires that $\varphi_1=0$. In general $\varphi_2\neq 0$, implying that indeed $\alpha=3$. Such behavior is demonstrated for an arbitrary compressible flow around a sphere by the Janzen-Rayleigh series \citep[\emph{e.g.,} ][]{tamada39,kaplan40}. Finally, the $\myttilde{q}$ expansion at $r\to\infty$ is matched to the $\mybar{q}$ expansion at stagnation for a potential flow. In the limit of an incompressible flow around a sphere, Eqs.~(\ref{eq:ConstPotentialFlow}), (\ref{eq:ConstSubsonicQ0}), and (\ref{eq:ConstSubsonicQ1}) yield $q(u)=\tilde{u}-(u-\tilde{u})/2+O(u-\tilde{u})^2=3\tilde{u}/2-u/2$, which is indeed the exact solution. This procedure reasonably approximates arbitrary compressible, subsonic flows. Better results are obtained by noting that the constraint (\ref{eq:ConstPotentialFlow}) holds also before stagnation, as long as $\partial_{\theta\theta}v_r$ is negligible, implying that $\mybar{q}_2\simeq 0$. Combining this with constraints~(\ref{eq:ConstPotentialFlow}), (\ref{eq:ConstSubsonicQ0}), and (\ref{eq:ConstSubsonicQ1}) yields an accurate, third order approximation, shown in Fig.~\ref{Fig:AllFlows} as dot-dashed curves. See \S\ref{sec:ExplicitSubsonic} for an explicit solution. 
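In the incompressible limit, the construction above can be verified end to end: inserting $q(u)=3\tilde{u}/2-u/2$ into Eq.~(\ref{eq:uSolution}), with the Mach-number factor set to unity, must reproduce the classical potential-flow profile $u(r)=\tilde{u}(1-r^{-3})$ ahead of a unit sphere. A short numerical sketch (the helper name and quadrature choice are ours):

```python
import math

def radius_of(u, u_tilde, n=20000):
    """Invert Eq. (uSolution) in the incompressible limit, where the
    Mach-number factor tends to unity, for the linear profile
    q(u') = 3*u_tilde/2 - u'/2.  Midpoint rule for the integral."""
    du = u / n
    # q(u') - u' = (3/2)(u_tilde - u')
    integral = sum(du / (1.5 * (u_tilde - du * (k + 0.5)))
                   for k in range(n))
    return math.exp(integral / 2)   # 2 ln r = integral

# Analytically, 2 ln r = (2/3) ln[u_tilde/(u_tilde - u)], i.e.
# u(r) = u_tilde (1 - r^-3): the classical axial potential flow.
u_tilde = 1.0
for r in (1.5, 2.0, 5.0):
    u = u_tilde * (1 - r ** -3)
    assert abs(radius_of(u, u_tilde) - r) < 1e-3
```

The same quadrature applies verbatim to the compressible case, once the factor $(1-\mystag{M}^2/\myW^2)/(1-\mystag{M}^2/\myS^2)$ is restored in the integrand.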
\section{Supersonic flow} \label{sec:SupersonicFlow} In the supersonic, $\tilde{M}>1$ case, a detached bow shock forms in front of the object, at the so-called standoff distance $\mySO$ from its nose. The transition between subsonic and supersonic regimes is continuous, so $\mySO\to\infty$ as $\myttilde{M}\to 1$, or equivalently as $\myttilde{M}_0\to \myW$. The unperturbed upstream flow and the shock transition are shown on the right side of Fig.~\ref{Fig:AllFlows}. Consider the flow between the shock and stagnation along the axis of symmetry. The $q(u)$ profile is strongly constrained if the normalized shock curvature $\xi\equiv (r_s/R)_{\theta=0}$ is known. Here, $r_s$ is the shock radius, such that $r_s(\theta=0)=1+\mySO$, and $R=r_s/[1-r_s''(\theta)/r_s]$ is its local radius of curvature. Expanding the flow Eqs.~(\ref{eq:FlowEquations}) using Eqs.~(\ref{eq:JumpConditions}) as boundary conditions yields the $q^{(d)}$ expansion coefficients around $U=u_d$, just downstream of the shock, \begin{equation} \label{eq:q0d} q_{0}^{(d)} = \left(1+\myg \xi-\xi\right) \myg^{-1}\tilde{u} \, ; \end{equation} \begin{equation} \label{eq:q1d} q_{1}^{(d)} = \frac{3+(\myg-3)\xi}{2}-\frac{1+(3\myg-1)\xi}{1+\myg+(\myg-1)\myGamma} \, ; \end{equation} and \begin{flalign} \label{eq:q2d} q_{2}^{(d)} = & \frac{\myg\xi \myW^2}{8 \left(\myg+\myW^2-1\right)^2\myttilde{u}} \Big[\frac{ \myg^2-4 \myg+3}{\xi\myW^2} -\frac{ 2 (3\myg+1)}{\xi} \\ &+ 2 \left(\myg^2+4 \myg+1\right) -\frac{(\myg-1)^2 (\myg+3)}{\myW^2}+\frac{8 \myg^2 \myW^2}{\myg-1} \Big] \mbox{ ,} \nonumber \end{flalign} where $\myg\equiv (\myttilde{M}_0/\myW)^2\geq 1$ is the axial compression ratio. These coefficients depend on the shock profile only through $\xi$; higher order terms are sensitive to deviations of the profile from a sphere of radius $R$. In the weak shock limit, $\myg\to 1$, so $\xi$ must vanish to avoid the divergence of $q_{2}^{(d)}$. 
This implies that $R$ diverges faster than $\Delta$, and $q_{1}^{(d)}\to (1-2\xi)$ asymptotes to unity, consistent with a smooth transition to the subsonic regime. Moreover, if we require that $q_2^{(d)}\to \tilde{q}_2 \to 3/(2\bar{c}W)$ in \fixapj{this} limit (see \S\ref{sec:ExplicitSubsonic}), then \begin{equation} \label{eq:TransonicXi} \xi(\myttilde{M}_0\simeq\myW) \to (4+\myGamma)(-1+\myttilde{M}_0/\myW) \mbox{ ,} \end{equation} so $R/r_s$ diverges as $(\myttilde{M}_0-\myW)^{-1}$, consistent with \citet[][as expected in the irrotational limit]{Hida53, Hida1955asymptotic}. Equations~(\ref{eq:q0d})--(\ref{eq:q2d}) yield a good, second order approximation to the flow, as shown in Fig.~\ref{Fig:AllFlows} (dashed curves), once $\xi$ or any of the $q^{(d)}$ coefficients are determined. This can be done using the stagnation boundary conditions, such as Eq.~(\ref{eq:ConstGeneralFlow}), but is laborious and body-specific due to the high order involved. A simpler approach is to estimate $\xi(M)$ using the weak and strong shock limits. In the strong shock, $\myttilde{M}_0\to\myS$ limit, the curvature of the shock approaches that of the object \citep[\emph{e.g.,} ][]{Guy74}; $\xi\to 1$ in the case of the sphere. The $\myttilde{M}_0(\xi)$ relation may be derived as a power series, using this and the constraint Eq.~(\ref{eq:TransonicXi}). A second order expansion in $\xi$ gives a good approximation, valid throughout the supersonic regime, \begin{equation} \label{eq:xiSeries} \frac{\myttilde{M}_0}{\myW}-1 \simeq \frac{\xi}{4+\myGamma} + \left(\frac{S}{W}-\frac{5+\myGamma}{4+\myGamma}\right) \xi^2 \mbox{ ,} \end{equation} with no free parameters. The good fit suggests that higher order terms in $\xi$ are negligible or absent. 
Alternatively, the result $\xi(\myttilde{M}_0\to S)=1$ and direct measurements of $\xi$ \citep{Heberle_etal50}, motivate a power-law approximation of the form \begin{equation} \label{eq:xiFit} \xi \simeq \left[(\myttilde{M}_0-\myW)/(\myS-\myW)\right]^\beta \mbox{ .} \end{equation} We find that Eq.~(\ref{eq:xiFit}) nicely fits the measured flow for Mach numbers not too small, with $\beta\simeq 1/2$. The standoff distance may now be found by solving Eq.~(\ref{eq:uSolution}) for $r_s=1+\Delta$, taking $u=u_d$ or equivalently $M_0=\myttilde{M}_0/g=\myW^2/\myttilde{M}_0$, using the expansion (\ref{eq:qExpansion}) with coefficients (\ref{eq:q0d}--\ref{eq:q2d}) fixed by the $\xi(\myttilde{M}_0)$ relation. The figure inset shows that Eq.~(\ref{eq:xiSeries}) provides a good fit to the standoff distance throughout the supersonic range, for two equations of state. It also shows that a single $\beta\simeq 1/2$ power-law in Eq.~(\ref{eq:xiFit}) reproduces $\Delta$ away from the transonic regime. Indeed, $\Delta$ is sensitive to the precise value of $\beta$ only in the $M\simeq 1$ limit; best results are obtained with $\beta=0.48$ ($\beta=0.52$) for $\gamma=7/5$ ($\gamma=5/3$). \section{Astrophysical implications} \label{sec:Astro} The above prescription for the flow in front of a blunt object is useful in a wide range of astrophysical circumstances, as the low-density medium can often be approximated as ideal and inviscid, the body as impenetrable, and the motion as steady and non-relativistic. Consider for example the standoff distance $\Delta$ in front of a supersonic astronomical object. It is useful to plot $\Delta$ as a function of the compression ratio $g$, rather than of the Mach number, because it is typically easier to measure $g$. As Fig. \ref{Fig:Planets} shows, $\Delta(g)$ at a given $\gamma$ approximately follows a power-law, for example $\Delta(\gamma=7/5)\simeq 1.6g^{-1.5}$ and $\Delta(\gamma=5/3)\simeq 1.5g^{-1.6}$. 
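Both limits built into the curvature relation Eq.~(\ref{eq:xiSeries}) can be checked directly: the series is constructed so that $\xi\to 0$ recovers $\myttilde{M}_0=\myW$ (the weak shock slope of Eq.~\ref{eq:TransonicXi}), while $\xi=1$ recovers $\myttilde{M}_0=\myS$ exactly (strong shock). A minimal numerical sketch:

```python
import math

def mach0_of_xi(xi, gamma):
    """Eq. (xiSeries): incident normalized velocity as a function of the
    normalized shock curvature xi = r_s/R at the nose of the shock."""
    S = math.sqrt(2 / (gamma - 1))
    W = math.sqrt(2 / (gamma + 1))
    return W * (1 + xi / (4 + gamma)
                + (S / W - (5 + gamma) / (4 + gamma)) * xi**2)

for gamma in (7/5, 5/3):
    S = math.sqrt(2 / (gamma - 1))
    W = math.sqrt(2 / (gamma + 1))
    # weak-shock limit (Eq. TransonicXi): M0 -> W as xi -> 0
    assert abs(mach0_of_xi(0.0, gamma) - W) < 1e-12
    # strong-shock limit: xi = 1 (shock curvature matching the sphere)
    # recovers M0 = S exactly, by construction of the series
    assert abs(mach0_of_xi(1.0, gamma) - S) < 1e-12
```

The second assertion reflects an algebraic identity of the series: at $\xi=1$ the three terms in the parentheses collapse to $S/W$ identically, for any $\myGamma$.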
For high $\tilde{M}$, the standoff distance approaches the strong shock limit, approximately given by \fixapj{(see \S\ref{sec:subsecSO})} \begin{equation} \Delta(\tilde{M}\to\infty) \simeq \frac{2}{3g} \mbox{ .} \end{equation} For an arbitrary axisymmetric blunt body, the above results for a sphere are trivially generalized, if $\Delta$ is defined as the distance from the nose of the body, normalized by its radius of curvature (for additional corrections, see \S\ref{sec:Discussion}). One may thus superimpose $\Delta(g)$ estimates of astronomical bow shocks on Fig. \ref{Fig:Planets}, even for non-spherical bodies. Consider for example the bow shock of a planet, moving supersonically through the solar wind. Although the magnetic Mach number and the ratio $\Delta/\lambda_i$ ($\lambda_i$ being the ion gyroradius) are not very high in such systems, a gas-dynamic approach remains useful as a first approximation, provided that $\tilde{M}$ is replaced by the fast magnetosonic Mach number upstream \citep{Stahara1984, SpreiterStahara95, FairfieldEtAl01}. Here, we define $\Delta$ as the distance between the bow shock and the nose of the obstacle, namely the planetary magnetosphere or ionosphere, normalized by the radius of curvature of this obstacle's nose. For a discussion of planetary bow shocks, and a compilation of $\Delta$ estimates based on analytic arguments and numerical simulations, see \citet[][and references therein]{VeriginEtAl03}. Note that our analysis directly provides not only $\Delta(g)$, but also the flow profile and the shock radius of curvature. Estimates of $\Delta(g)$ for the solar system planets are shown in Fig. \ref{Fig:Planets}, with references provided in the caption. Interestingly, some planetary data seem to suggest a soft equation of state with $\gamma<5/3$. 
However, such an interpretation is hindered by the substantial simplifying assumptions, in particular the neglected MHD effects, kinetic effects, variable solar wind conditions, and non-axisymmetric corrections to the obstacles. The positions and shapes of the obstacles are in some cases highly uncertain; indeed, the results suggest a significant flattening of the magnetospheres of Saturn and Uranus. \begin{figure} \centerline{\epsfxsize=8.5cm \epsfbox{\myfig{Planets1.eps}}} \caption{ Bow shock standoff distance $\Delta$, measured from the nose of the obstacle and normalized by the nose curvature of the obstacle, plotted against the compression ratio $\myg$. Analytic curves for $\gamma=7/5,\, 5/3$ and $2$ (solid, thin to thick) are shown alongside planet data (labeled symbols), and approximated as power laws (dot-dashed curves, see text). Also plotted is the strong shock limit for various $\gamma$ (dashed), well fit by $\Delta\sim2/(3\myg)$ (dotted). Planetary $g$ values are based on magnetic (triangle or no symbol) or density (square) compression, shown with $1\sigma$ error bars (when available) for Mercury \citep[slightly perpendicular shock; measured at $\theta_p\simeq 70^\circ$;][]{AndersonEtAl08, TreumannJaroschek08, SlavinEtAl12}, Venus \citep[day side;][]{TreumannJaroschek08, FrankEtAl91}, Earth \citep[quasi-perpendicular; $\theta_p\lesssim45^\circ$;][]{CzaykowskaEtAl00}, Mars \citep[quasi-perpendicular; $\theta_p\lesssim90^\circ$;][]{TreumannJaroschek08}, Jupiter \citep[$\theta_p\simeq 20^\circ$;][]{Gloeckler04}, Saturn \citep[quasi-perpendicular; interval for different crossings at high angles $60^\circ\lesssim\theta_p\lesssim100^\circ$;][figure 9]{AchilleosEtAl06}, Uranus \citep[$\theta_p\simeq 25^\circ$;][]{BagenalEtAl87}, and Neptune \citep[quasi-perpendicular; $\theta_p\simeq 14^\circ$;][]{NessEtAl89, TreumannJaroschek08}. 
Standoff distances and obstacle curvatures are based on the data-constrained models of \citet[][for Venus, Earth and Mars]{Stahara1984} and \citet[][for the other planets]{SpreiterStahara95}. Systematic errors on $\Delta$ are large, especially for the outer and rarely visited planets; in particular, the $\Delta$ estimate for Uranus (dashed) is inconclusive \citep{SpreiterStahara95}. For details\fixapj{, assumptions and limitations}, see text. \label{Fig:Planets}} \end{figure} As another astronomical system, consider the large scale extreme, namely the IGM of a galaxy group or cluster. Here, hot bubbles inflated by the active galactic nucleus (AGN) rise buoyantly through the IGM \citep[\emph{e.g.,} ][]{FabianEtAl00,NulsenEtAl05}, and the subsonic motion of the plasma in front of them is important, for example, for computing the evolution of the bubbles \citep[\emph{e.g.,} ][]{ChurazovEtAl01_M87Bubble}, and the draping of magnetic fields around them \citep{Lyutikov06, DursiPfrommer08, NaorKeshet15}. Large scale mergers lead to dense clumps moving subsonically or supersonically through the IGM, giving rise to dramatic effects such as shocks, cold fronts, and even a spatial separation between baryonic and dark matter components \citep{Vikhlininetal01, MarkevitchVikhlinin07}. Details such as the bow shock location and the downstream flow pattern are important for correctly interpreting the underlying dynamics. Consider first a supersonic clump moving through the IGM. A well-known example is the so-called bullet cluster, 1E0657-56, at redshift $z=0.296$, showing a merger nearly in the plane of the sky \citep{MarkevitchEtAl02_Bullet, BarrenaEtAl02_Bullet}. The moving clump is seen as a bullet-shaped discontinuity, preceded by a bow shock with $g\simeq 3.0$ \citep{Markevitch06} and $\Delta\simeq 2.4\pm0.2$. Our analysis indicates that the large $\Delta$ corresponds to a weak shock, with $\tilde{M}\simeq 1.1$ (for $\gamma\simeq 5/3$, used henceforth). 
This is consistent with the $\sim 65^\circ$ asymptotic shock angle far from the nose, which also suggests a $\tilde{M}\simeq 1.1$ shock. However, the high compression ratio corresponds to a much stronger shock with $\tilde{M}\simeq 3$, indicating that the system is not in a steady state. Indeed, plotting the corresponding $\Delta(g)$ on Fig. \ref{Fig:Planets} would suggest an unrealistically soft equation of state. Simulations indicate that the shock velocity can be higher by a factor of $1.7$ \citep{SpringelFarrar07} or even $6$ \citep{MilosavljevicEtAl07_Bulllet} than expected from the clump velocity, because the shock (i) moves faster than the clump; and (ii) plows through gas that is infalling towards the clump \citep{SpringelFarrar07}. Evidently, Fig. \ref{Fig:Planets} provides a simple way to gauge the relaxation level of a system. Next consider the subsonic IGM flow in front of an AGN bubble or a slow clump. While previous studies \citep[\emph{e.g.,} ][]{Lyutikov06, DursiPfrommer08} have approximated the motion as incompressible, the inferred velocities are often nearly sonic \citep{ChurazovEtAl01_M87Bubble, MarkevitchVikhlinin07}, implying considerable compressibility effects. To illustrate this, we compute the magnetization caused by the draping of a weak upstream magnetic field around the moving object. \fixapj{The results are applicable only to weak magnetic fields, where Eqs.~(\ref{eq:FlowEquations}--\ref{eq:JumpConditions}) remain a good approximation.} The magnetic field generally evolves as $\bm{B}\propto \rho\bm{l}$, where $\bm{l}$ is a length element attached to the flow. 
Hence, the magnetic components initially perpendicular or parallel to the flow evolve along the axis of symmetry according to \begin{equation} \frac{B_{\perp}}{\tilde{B}_{\perp}} = \left(\frac{\rho/v}{\tilde{\rho}/\tilde{v}}\right)^{1/2} = \left(\frac{M_0}{\tilde{M}_0} \right)^{-1/2} \left(\frac{S^2-M_0^2}{S^2-\tilde{M}_0^2}\right)^{S^2/4} \end{equation} or \begin{equation} \frac{B_{||}}{\tilde{B}_{||}} = \frac{\rho v}{\tilde{\rho} \tilde{v}} = \frac{M_0}{\tilde{M}_0} \left( \frac{S^2-M_0^2}{S^2-\tilde{M}_0^2} \right)^{S^2/2} \, ; \end{equation} for a detailed discussion, see \citet{NaorKeshet15}. The resulting magnetic energy amplification is shown in Fig. \ref{Fig:MagPhaseSpace}, for $\gamma=5/3$, as a function of $\tilde{M}$ and of the normalized distance from the body, $\delta=(r-1)$. Near the object, the magnetization is predominantly perpendicular, and approximately given by \begin{equation}\label{eq:ApproxB} \frac{B_{\perp}}{\tilde{B}_{\perp}} \simeq \frac{1+1.3\tilde{M}^{2.6}}{3\delta} \mbox{ ,} \end{equation} as illustrated in the figure. \begin{figure} \centerline{\epsfxsize=8.5cm \epsfbox{\myfig{MagPhaseSpace300c.eps}}} \caption{ Energy amplification of a weak magnetic field initially perpendicular (solid contours, and $\log_{10}(B_{\perp}/\tilde{B}_{\perp})^2$ cube-helix \citep{Green11_Cubehelix} colormap) or parallel (dashed contours) to the subsonic $\gamma=5/3$ flow at a normalized distance $\delta=(r-1)$ in front of a sphere of Mach number $\tilde{M}$. Close to the sphere, the field is predominantly perpendicular, and approximately given (dot-dashed contours) by Eq.~(\ref{eq:ApproxB}). \label{Fig:MagPhaseSpace} } \end{figure} As the figure shows, the magnetized layer is typically a few times thicker for $\tilde{M}\simeq 1$ than it would appear in the incompressible limit. Such thick layers may have observational implications, through their non-thermal pressure and as synchrotron emission in front of nearly sonic objects. 
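The two amplification formulas above follow from $\bm{B}\propto \rho\bm{l}$ combined with the isentropic density profile $\rho/\mybar{\rho}=(1-\mystag{M}^2/\myS^2)^{1/(\myGamma-1)}$ implied by Bernoulli's equation; their consistency with the raw $\rho$ and $v$ ratios can be checked numerically (the incident $\myttilde{M}_0$ below is illustrative):

```python
import math

def density_ratio(M0, S2):
    """rho / rho_bar along a streamline, from Bernoulli's equation:
    (1 - M0^2/S^2)^(1/(gamma-1)), with 1/(gamma-1) = S^2/2."""
    return (1 - M0**2 / S2) ** (S2 / 2)

def B_perp_ratio(M0, M0_t, S2):
    """Perpendicular field amplification, as in the text."""
    return (M0 / M0_t) ** -0.5 * ((S2 - M0**2) / (S2 - M0_t**2)) ** (S2 / 4)

def B_par_ratio(M0, M0_t, S2):
    """Parallel field amplification, as in the text."""
    return (M0 / M0_t) * ((S2 - M0**2) / (S2 - M0_t**2)) ** (S2 / 2)

gamma = 5/3
S2 = 2 / (gamma - 1)            # S^2 = 3 for gamma = 5/3
M0_t = 0.6                      # illustrative incident normalized velocity
for M0 in (0.05, 0.2, 0.4):     # flow decelerating towards stagnation
    r, r_t = density_ratio(M0, S2), density_ratio(M0_t, S2)
    # B_perp ~ (rho/v)^(1/2) and B_par ~ rho*v, with v = M0 * c_bar:
    assert abs(B_perp_ratio(M0, M0_t, S2)
               - math.sqrt((r / M0) / (r_t / M0_t))) < 1e-12
    assert abs(B_par_ratio(M0, M0_t, S2)
               - (r * M0) / (r_t * M0_t)) < 1e-12
```

As the flow decelerates, the perpendicular component is amplified while the parallel component is suppressed, so the draped layer near the object is indeed predominantly perpendicular.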
Such a synchrotron signal may contribute to the radio bright edges seen above AGN bubbles, for example in the Virgo cluster \citep{OwenEtAl00}. \section{Discussion} \label{sec:Discussion} The compressible, inviscid flow in front of a blunt object is approximated analytically, using a hodograph-like, $\vect{v}\simeq (-u, q(u)\theta,0)$ transformation. The velocity (Eq.~\ref{eq:uSolution}) and pressure (Eq.~\ref{eq:M0andPi}) profiles are derived by expanding $q$ as a (rapidly converging) power series in $u$ (Eq.~\ref{eq:qExpansion}), using the constraints imposed by the object (Eqs.~\ref{eq:ConstPotentialFlow} or \ref{eq:ConstGeneralFlow} for a sphere) and by the far upstream subsonic (Eqs.~\ref{eq:ConstSubsonicQ0}--\ref{eq:ConstSubsonicQ1}) or shocked supersonic (Eqs.~\ref{eq:q0d}--\ref{eq:q2d}) flow. In the latter case, the weak (Eq.~\ref{eq:TransonicXi}) and strong shock limits approximately fix the shock curvature (Eq.~\ref{eq:xiSeries}) and consequently the flow, independent of the object shape. Figure \ref{Fig:AllFlows} shows that a low order $q(u)$ expansion suffices to recover the measured flow in front of a sphere. The supersonic results also reproduce the measured standoff distance (solid curve and figure inset) of the shock, and constrain its curvature (Eq.~\ref{eq:xiSeries} or the fit Eq.~\ref{eq:xiFit}). Higher-order constraints can be used to improve the approximation further; here we used only the lowest-order constraint at stagnation, and only in the subsonic case. The axial approximation directly constrains the flow beyond the axis and along the body, as it determines the perpendicular derivatives. For example, one can use it to estimate $\partial_{\theta\theta}P = -\rho_0 [q^2-u\partial_r(rq)](1-\mystag{M}^2/S^2)^{1/(\gamma-1)}$, found by expanding Eqs.~(\ref{eq:FlowEquations}) to $\theta^2$ order. Extrapolation beyond the axis is simpler in the potential flow regime, where, in particular, $\partial_{\theta\theta}v_r=\partial_r(rq)$. 
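The near-object magnetization approximation of Eq.~(\ref{eq:ApproxB}) is simple to evaluate numerically. The following sketch (ours, purely illustrative; the function names are not from the paper) compares the thickness of the strongly magnetized layer at a nearly sonic Mach number with the incompressible ($\tilde{M}\to 0$) limit:

```python
# Illustrative sketch (not from the paper): evaluate the near-object
# perpendicular magnetization approximation, Eq. (ApproxB),
#   B_perp / B_perp,upstream ~ (1 + 1.3 * M**2.6) / (3 * delta),
# where delta = r - 1 is the normalized distance in front of the sphere.

def b_perp_amplification(mach, delta):
    """Approximate amplification of a weak perpendicular field at
    normalized distance delta in front of the sphere."""
    return (1.0 + 1.3 * mach**2.6) / (3.0 * delta)

def layer_thickness(mach, amplification):
    """Distance delta at which the amplification drops to the given
    value; follows from inverting the approximation above."""
    return (1.0 + 1.3 * mach**2.6) / (3.0 * amplification)

# At Mach ~ 1 the magnetized layer is ~2.3 times thicker than in the
# incompressible limit, consistent with the "few times thicker" layer
# discussed in the text.
ratio = layer_thickness(1.0, 10.0) / layer_thickness(0.0, 10.0)
```

This reproduces the few-fold thickening of the magnetized layer near $\tilde{M}\simeq 1$ quoted above; the approximation is only valid close to the object (small $\delta$) and for weak fields.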
The axial analysis is generalized for any blunt, axisymmetric object, by modifying the $q$ boundary conditions. For a body with radius of curvature $R_b>0$ at a stagnation radius $r_b$, take $\{z\equiv r\cos\theta=R_b-r_b, \varrho\equiv r\sin\theta=0\}$ as the origin, and rescale lengths by $R_b$. This maps the stagnation region of the body onto that of the unit sphere, so Eqs.~(\ref{eq:Bernoulli}--\ref{eq:ConstPotentialFlow}, \ref{eq:ConstSubsonicQ0}--\ref{eq:q2d}) remain valid. The subsonic analysis is unchanged; for an asymmetric body, $\alpha$ may need to be altered, \emph{e.g.,} using the Janzen-Rayleigh series. The supersonic analysis is also unchanged, if Eq.~(\ref{eq:ConstGeneralFlow}) is used and adapted for the specific body. The alternative use of Eq.~(\ref{eq:xiSeries}) or Eq.~(\ref{eq:xiFit}) is still expected to hold, although higher order terms or a tuned $\beta$ may be needed if an aspherical body modifies the weak or strong shock limits. It may be possible to generalize our hodograph-like analysis even for a non-axisymmetric object, using the stagnation streamline instead of the symmetry axis, as long as the corresponding $u$ profile remains monotonic. Our analysis is applicable to a wide range of subsonic and supersonic astronomical bodies. Illustrative examples are discussed in \S\ref{sec:Astro}, on both small, planetary scales, and large, galaxy cluster scales. In particular, plotting the standoff distance as a function of the compression ratio (Fig. \ref{Fig:Planets}) can be used to gauge the equation of state and the relaxation level of the system. The results are especially useful for nearly sonic flows, where compressibility effects play an important role; this is seen for example in the thicker magnetically draped layers that form in front of a moving body (Fig. \ref{Fig:MagPhaseSpace}), such as a large scale clump or an AGN bubble. \acknowledgements We thank Ephim Golbraikh and Yuri Lyubarsky for helpful advice.
This research has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n\textordmasculine ~293975, from an IAEC-UPBC joint research foundation grant, from an ISF-UGC grant, and from an individual ISF grant.
\section{Introduction} Given oracle access to a metric space $(\{1,2,\ldots,n\},d)$, the {\sc metric $1$-median} problem asks for a point with the minimum average distance to all points. Indyk~\cite{Ind99, Ind00} shows that {\sc metric $1$-median} has a Monte-Carlo $O(n/\epsilon^2)$-time $(1+\epsilon)$-approximation algorithm with an $\Omega(1)$ probability of success. The more general {\sc metric $k$-median} problem asks for $x_1$, $x_2$, $\ldots$, $x_k\in\{1,2,\ldots,n\}$ minimizing $\sum_{x\in\{1,2,\ldots,n\}}\,\min_{i=1}^k\,d(x_i,x)$. Randomized as well as deterministic algorithms are well-studied for {\sc metric $k$-median} and the related $k$-means problem~\cite{GMMMO03, MP04, AGKMMP04, Che09, KSS10, JKS12}, where $k\ge 1$ is part of the input rather than a constant. This paper focuses on {\em deterministic sublinear-query} algorithms for {\sc metric $1$-median}. Guha et al.~\cite[Sec.~3.1--3.2]{GMMMO03} prove that {\sc metric $k$-median} has a deterministic $O(n^{1+\epsilon})$-time $O(n^\epsilon)$-space $2^{O(1/\epsilon)}$-approximation algorithm that reads distances in a single pass, where $\epsilon>0$. Chang~\cite{Cha13} presents a deterministic nonadaptive $O(n^{1.5})$-time $4$-approximation algorithm for {\sc metric $1$-median}. Wu~\cite{Wu14} generalizes Chang's result by showing an $O(n^{1+1/h})$-time $2h$-approximation algorithm for any integer $h\ge 2$. On the negative side, Chang~\cite{Cha12} shows that {\sc metric $1$-median} has no deterministic $o(n^2)$-query $(3-\epsilon)$-approximation algorithms for any constant $\epsilon>0$. This paper improves upon his result by showing that {\sc metric $1$-median} has no deterministic $o(n^2)$-query $(4-\epsilon)$-approximation algorithms for any constant $\epsilon>0$. In social network analysis, the importance of an actor in a network may be quantified by several centrality measures, among which the closeness centrality of an actor is defined to be its average distance to other actors~\cite{WF94}.
So {\sc metric $1$-median} can be interpreted as the problem of finding the most important point in a metric space. Goldreich and Ron~\cite{GR08} and Eppstein and Wang~\cite{EW04} present randomized algorithms for approximating the closeness centralities of vertices in undirected graphs. \section{Definitions}\label{definitionssection} For $n\in\mathbb{N}$, denote $[n]\equiv \{1,2,\ldots,n\}$. Trivially, $[0]=\emptyset$. An $n$-point metric space $([n],d)$ is the set $[n]$, called the groundset, endowed with a function $d\colon [n]\times[n]\to\mathbb{R}$ satisfying \begin{enumerate}[(1)] \item\label{nonnegative} $d(x,y)\ge 0$ (non-negativeness), \item $d(x,y)=0$ if and only if $x=y$ (identity of indiscernibles), \item\label{symmetry} $d(x,y)=d(y,x)$ (symmetry), and \item $d(x,y)+d(x,z)\ge d(y,z)$ (triangle inequality) \end{enumerate} for all $x$, $y$, $z\in [n]$. An equivalent definition requires the triangle inequality only for distinct $x$, $y$, $z\in [n]$, axioms~(\ref{nonnegative})--(\ref{symmetry}) remaining. An algorithm with oracle access to a metric space $([n],d)$ is given $n$ and may query $d$ on any $(x,y)\in[n]\times[n]$ to obtain $d(x,y)$. Without loss of generality, we forbid queries for $d(x,x)$, which trivially return $0$, as well as repeated queries, where a query for $d(x,y)$ is considered to repeat that for $d(y,x)$. For convenience, denote an algorithm ALG with oracle access to $([n],d)$ by $\text{ALG}^d$. Given oracle access to a finite metric space $([n],d)$, the {\sc metric $1$-median} problem asks for a point in $[n]$ with the minimum average distance to all points. An algorithm for this problem is $\alpha$-approximate if it outputs a point $x\in[n]$ satisfying $$\sum_{y\in[n]}\,d\left(x,y\right) \le\alpha\,\min_{x^\prime\in[n]}\,\sum_{y\in[n]}\,d\left(x^\prime,y\right),$$ where $\alpha\ge 1$. The following theorem is due to Chang~\cite{Cha13} and generalized by Wu~\cite{Wu14}. 
\begin{theorem}[{\cite{Cha13, Wu14}}]\label{nonadaptiveupperbound} {\sc Metric $1$-median} has a deterministic nonadaptive $O(n^{1.5})$-time $4$-approximation algorithm. \end{theorem} \section{Lower bound} Fix arbitrarily a deterministic $o(n^2)$-query algorithm $A$ for {\sc metric $1$-median} and a constant $\delta\in(0,0.1)$. By padding queries, we may assume the existence of a function $q\colon\mathbb{Z}^+\to\mathbb{Z}^+$ such that $A$ makes exactly $q(n)=o(n^2)$ queries given oracle access to any metric space with groundset $[n]$. We introduce some notations concerning a function $d\colon[n]\times[n]\to\mathbb{R}$ to be determined later. For $i\in[q(n)]$, denote the $i$th query of $A^d$ by $(x_i,y_i)\in[n]\times[n]$; in other words, the $i$th query of $A^d$ asks for $d(x_i,y_i)$. Note that $(x_i,y_i)$ depends only on $d(x_1,y_1)$, $d(x_2,y_2)$, $\ldots$, $d(x_{i-1},y_{i-1})$ because $A$ is deterministic and has been fixed. For $x\in[n]$ and $i\in\{0,1,\ldots,q(n)\}$, \begin{eqnarray} N_i(x) &\stackrel{\text{def.}}{=}& \left\{ y\in[n]\mid \left\{ \left(x,y\right), \left(y,x\right) \right\} \cap \left\{\left(x_j,y_j\right)\mid j\in\left[i\right]\right\} \neq\emptyset \right\}, \label{neighborhoodinsubgraph}\\ \alpha_i(x) &\stackrel{\text{def.}}{=}& \left|\, N_i(x) \,\right|, \label{numberoffrozenincidentdistances} \end{eqnarray} following Chang~\cite{Cha12} with a slight change in notation. Equivalently, $\alpha_i(x)$ is the degree of $x$ in the undirected graph with vertex set $[n]$ and edge set $\{(x_j,y_j)\mid j\in[i]\}$. As $[0]=\emptyset$, $\alpha_0(x)=0$ for $x\in[n]$. Note that $\alpha_i(\cdot)$ depends only on $(x_1,y_1)$, $(x_2,y_2)$, $\ldots$, $(x_i,y_i)$. Denote the output of $A^d$ by $p$. 
By adding at most $n-1=o(n^2)$ dummy queries, we may assume without loss of generality that \begin{eqnarray} \left(p,y\right)\in\left\{\left(x_i,y_i\right)\mid i\in\left[q(n)\right]\right\} \label{algorithmoutputheavilyqueried} \end{eqnarray} for all $y\in[n]\setminus\{p\}$. Consequently, \begin{eqnarray} \alpha_{q(n)}(p)=n-1.\label{outputallasked} \end{eqnarray} Fix any set $S\subseteq [n]$ of size $\lceil\delta n\rceil$, e.g., $S=[\lceil\delta n\rceil]$. We proceed to construct $d$ by gradually freezing distances. For brevity, freezing the value of $d(x,y)$ implicitly freezes $d(y,x)$ to the same value, where $x$, $y\in[n]$. Inductively, having answered the first $i-1$ queries of $A^d$ by freezing $d(x_1,y_1)$, $d(x_2,y_2)$, $\ldots$, $d(x_{i-1},y_{i-1})$, where $i\in[q(n)]$, answer the $i$th query by \begin{eqnarray} d\left(x_i,y_i\right) &=&\left\{ \begin{array}{ll} 3, &\text{if $x_i$, $y_i\in S$;}\\ 3, &\text{if $x_i\in S$, $y_i\notin S$ and $\alpha_{i-1}(x_i)\le\delta n$;}\\ 3, &\text{if $y_i\in S$, $x_i\notin S$ and $\alpha_{i-1}(y_i)\le\delta n$;}\\ 4, &\text{if $x_i\in S$, $y_i\notin S$ and $\alpha_{i-1}(x_i)>\delta n$;}\\ 4, &\text{if $y_i\in S$, $x_i\notin S$ and $\alpha_{i-1}(y_i)>\delta n$;}\\ 2, &\text{if $x_i$, $y_i\notin S$ and $\max\{\alpha_{i-1}(x_i),\alpha_{i-1}(y_i)\}\le\delta n$;}\\ 4, &\text{if $x_i$, $y_i\notin S$ and $\max\{\alpha_{i-1}(x_i),\alpha_{i-1}(y_i)\}>\delta n$.} \end{array} \right. \label{distanceassignment} \end{eqnarray} It is not hard to verify that the seven cases in equation~(\ref{distanceassignment}) are exhaustive and mutually exclusive. We have now frozen $d(x_i,y_i)$ for all $i\in[q(n)]$ and none of the other distances. As repeated queries are forbidden, equation~(\ref{distanceassignment}) does not freeze one distance twice, preventing inconsistency. 
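The seven-case freezing rule of equation~(\ref{distanceassignment}) is mechanical to implement. The following Python sketch (ours, with $0$-indexed points; not part of the paper) shows how an adversary would answer queries while maintaining the degrees $\alpha_i(\cdot)$:

```python
def make_adversary(n, delta, S):
    """Adversarial oracle implementing the seven-case freezing rule;
    alpha[x] tracks the number of frozen distances incident on x."""
    alpha = [0] * n
    frozen = {}  # frozenset({x, y}) -> frozen value of d(x, y)

    def answer(x, y):
        assert x != y and frozenset((x, y)) not in frozen  # no repeats
        if x in S and y in S:
            v = 3
        elif (x in S) != (y in S):  # exactly one endpoint lies in S
            a = alpha[x] if x in S else alpha[y]
            v = 3 if a <= delta * n else 4
        else:                       # neither endpoint lies in S
            v = 2 if max(alpha[x], alpha[y]) <= delta * n else 4
        frozen[frozenset((x, y))] = v
        alpha[x] += 1
        alpha[y] += 1
        return v

    return answer, alpha, frozen
```

For example, with $n=10$, $\delta=0.1$ and $S=\{0\}$, the queries $(0,1)$, $(1,2)$, $(1,3)$, asked in this order, are answered $3$, $2$ and $4$ respectively, the last one because $\alpha(1)$ has exceeded $\delta n$.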
Set \begin{eqnarray} B &\stackrel{\text{def.}}{=}& \left\{ x\in[n]\mid \alpha_{q(n)}(x)>\delta n \right\},\label{badpoints}\\ \hat{p} &\stackrel{\text{def.}}{=}& \mathop{\rm argmin}_{x\in S}\, \alpha_{q(n)}(x), \label{trueoptimal} \end{eqnarray} breaking ties arbitrarily. For all distinct $x$, $y\in[n]$ with $(x,y)$, $(y,x)\notin\{(x_i,y_i)\mid i\in[q(n)]\}$, let \begin{eqnarray} d\left(x,y\right) = \left\{ \begin{array}{ll} 1, &\text{if $x=\hat{p}$, $y\notin S\cup B$;}\\ 1, &\text{if $y=\hat{p}$, $x\notin S\cup B$;}\\ 3, &\text{if $x$, $y\in S\cup B$;}\\ 4, &\text{if $x\in (S\cup B)\setminus \{\hat{p}\}$ and $y\notin (S\cup B\cup\{\hat{p}\})$;}\\ 4, &\text{if $y\in (S\cup B)\setminus \{\hat{p}\}$ and $x\notin (S\cup B\cup\{\hat{p}\})$;}\\ 2, &\text{otherwise.} \end{array} \right. \label{completingthemetric} \end{eqnarray} Clearly, the six cases in equation~(\ref{completingthemetric}) are exhaustive and mutually exclusive. Furthermore, equation~(\ref{completingthemetric}) assigns the same value to $d(x,y)$ and $d(y,x)$. Finally, for all $x\in[n]$, \begin{eqnarray} d\left(x,x\right)=0.\label{trivialdistance} \end{eqnarray} Equations~(\ref{distanceassignment}),~(\ref{completingthemetric})~and~(\ref{trivialdistance}) complete the construction of $d$ by freezing all distances. The following lemma is straightforward. \begin{lemma}\label{distancesarezeroto4} For all distinct $x$, $y\in[n]$, $d(x,y)\in\{1,2,3,4\}$. \end{lemma} Below is an immediate consequence of equation~(\ref{trueoptimal}). \begin{lemma}\label{optimalisinpreservedregion} $\hat{p}\in S$. \end{lemma} The following lemma is a consequence of equations~(\ref{neighborhoodinsubgraph})--(\ref{numberoffrozenincidentdistances}) and our forbidding repeated queries. \begin{lemma}\label{monotonicity} For all $x\in[n]$ and $i\in[q(n)]$, \begin{eqnarray*} \alpha_i(x)-\alpha_{i-1}(x) =\left\{ \begin{array}{ll} 0, &\text{if $x\notin \{x_i,y_i\}$;}\\ 1, &\text{otherwise.} \end{array} \right. 
\end{eqnarray*} \end{lemma} \begin{proof} The case of $x\notin \{x_i,y_i\}$ is immediate from equations~(\ref{neighborhoodinsubgraph})--(\ref{numberoffrozenincidentdistances}). Suppose that $x\in \{x_i,y_i\}$. By symmetry, we may assume $x=x_i$. So by equation~(\ref{neighborhoodinsubgraph}), \begin{eqnarray} N_i(x)=N_{i-1}(x)\cup\left\{y_i\right\}.\label{newneighbor} \end{eqnarray} As $(x,y_i)=(x_i,y_i)$ is the $i$th query and we forbid repeated queries, \begin{eqnarray} y_i\notin N_{i-1}(x)\label{reallynewneighbor} \end{eqnarray} by equation~(\ref{neighborhoodinsubgraph}).\footnote{In detail, if $y_i\in N_{i-1}(x)$, then $(x_j,y_j)\in\{(x,y_i),(y_i,x)\}$ for some $j\in[i-1]$ by equation~(\ref{neighborhoodinsubgraph}); hence the $i$th query $(x_i,y_i) =(x,y_i)$ repeats the $j$th query, a contradiction.} Equations~(\ref{numberoffrozenincidentdistances})~and~(\ref{newneighbor})--(\ref{reallynewneighbor}) complete the proof. \end{proof} In short, Lemma~\ref{monotonicity} says that adding the edge $(x_i,y_i)$ to an undirected graph without that edge increases the degree of $x$ by $1$ if and only if $x\in\{x_i,y_i\}$. \begin{lemma}\label{monotonicitysame} For all $x\in[n]$ and $i\in[q(n)+1]$, if $\alpha_{i-1}(x)>\delta n$, then $x\in B$. \end{lemma} \begin{proof} By Lemma~\ref{monotonicity}, $\alpha_{q(n)}(x)\ge \alpha_{i-1}(x)$. Invoking equation~(\ref{badpoints}) then completes the proof. \end{proof} \begin{lemma}\label{sumofdegrees} $$\sum_{x\in[n]}\, \alpha_{q(n)}(x)= 2\, q(n).$$ \end{lemma} \begin{proof} Recall that the left-hand side is the sum of degrees in the undirected graph with vertex set $[n]$ and edge set $\{(x_i,y_i)\mid i\in[q(n)]\}$. As we forbid repeated queries, $\left|\,\{(x_i,y_i)\mid i\in[q(n)]\}\,\right|=q(n)$. Finally, it is a basic fact in graph theory that the sum of degrees in an undirected graph equals twice the number of edges. \end{proof} \begin{lemma}[{Implicit in~\cite[Lemma~13]{Cha12}}]\label{fewbadpoints} $|B|=o(n)$.
\end{lemma} \begin{proof} We have $$ |B|\,\delta n \stackrel{\text{equation~(\ref{badpoints})}}{\le} \sum_{x\in B} \alpha_{q(n)}(x) \le \sum_{x\in [n]} \alpha_{q(n)}(x) \stackrel{\text{Lemma~\ref{sumofdegrees}}}{=} 2\,q(n). $$ This gives $|B|=o(n)$ as $\delta\in(0,0.1)$ is a constant and $q(n)=o(n^2)$. \end{proof} \begin{lemma}\label{sparselyaskedpoint} For all sufficiently large $n$ and all $i\in[q(n)+1]$, \begin{eqnarray} \alpha_{i-1}\left(\hat{p}\right)&\le& \delta n. \label{sparselyaskedpointequation} \end{eqnarray} \end{lemma} \begin{proof} By Lemma~\ref{fewbadpoints} and because $|S|=\lceil\delta n\rceil$ with $\delta\in(0,0.1)$ a constant, $S\setminus B\neq\emptyset$ for all sufficiently large $n$. By equation~(\ref{badpoints}), $S\setminus B\neq\emptyset$ implies $\alpha_{q(n)}(x)\le \delta n$ for some $x\in S$, which together with equation~(\ref{trueoptimal}) gives $\alpha_{q(n)}(\hat{p})\le\delta n$. Finally, Lemma~\ref{monotonicity} and $\alpha_{q(n)}(\hat{p})\le\delta n$ imply inequality~(\ref{sparselyaskedpointequation}) for all $i\in[q(n)+1]$. \end{proof} Henceforth, assume $n$ to be sufficiently large to satisfy inequality~(\ref{sparselyaskedpointequation}) for all $i\in[q(n)+1]$. \begin{lemma}\label{onlysourceofdistance1} For all $x$, $y\in[n]$, if $d(x,y)=1$, then one of the following conditions is true: \begin{itemize} \item $x=\hat{p}$ and $y\notin S\cup B$; \item $y=\hat{p}$ and $x\notin S\cup B$.
\end{itemize} \end{lemma} \begin{proof} Inspect equation~(\ref{completingthemetric}), which is the only equation that may set distances to $1$. \end{proof} \begin{lemma}\label{ordinarydistancesare2} For all distinct $x$, $y\in[n]\setminus (S\cup B)$, $d\left(x,y\right)=2$. \end{lemma} \begin{proof} By Lemma~\ref{monotonicitysame}, $\max\{\alpha_{i-1}(x_i),\alpha_{i-1}(y_i)\}>\delta n$ means $\{x_i,y_i\}\cap B\neq\emptyset$, where $i\in[q(n)+1]$. So only the second-to-last case in equation~(\ref{distanceassignment}), which sets $d(x_i,y_i)=2$, may be consistent with $x_i$, $y_i\notin S\cup B$. By Lemma~\ref{optimalisinpreservedregion}, $\hat{p}\in S$. So only the last case in equation~(\ref{completingthemetric}), which sets $d(x,y)=2$, may be consistent with $x$, $y\notin S\cup B$. \end{proof} \begin{lemma}\label{optimalpointdistances1or3} For all $x\in[n]\setminus\{\hat{p}\}$, $d(\hat{p},x)\in\{1,3\}$. \end{lemma} \begin{proof} By Lemma~\ref{optimalisinpreservedregion} and inequality~(\ref{sparselyaskedpointequation}), only the first three cases in equation~(\ref{distanceassignment}), which set $d(x_i,y_i)=3$, may be consistent with $x_i=\hat{p}$ or $y_i=\hat{p}$. Again by Lemma~\ref{optimalisinpreservedregion}, only the first three cases in equation~(\ref{completingthemetric}), which set $d(x,y)\in\{1,3\}$, may be consistent with $x=\hat{p}$ or $y=\hat{p}$.
\end{proof} \begin{lemma}\label{illegaldistances1} There do not exist distinct $x$, $y$, $z\in[n]$ with $d(x,y)=1$ and $\{d(x,z), d(y,z)\}=\{2, 4\}$. \end{lemma} \begin{proof} By Lemma~\ref{onlysourceofdistance1}, $d(x,y)=1$ implies $\hat{p}\in\{x,y\}$. By symmetry, assume $x=\hat{p}$. Then $d(x,z)\in\{1,3\}$ by Lemma~\ref{optimalpointdistances1or3}. \end{proof} \begin{lemma}\label{illegaldistances2} There do not exist distinct $x$, $y$, $z\in[n]$ with $d(x,y)=d(x,z)=1$ and $d(y,z)\in\{3, 4\}$. \end{lemma} \begin{proof} By Lemma~\ref{onlysourceofdistance1}, $d(x,y)=d(x,z)=1$ implies $x=\hat{p}$ and $y$, $z\notin S\cup B$. Then $d(y,z)=2$ by Lemma~\ref{ordinarydistancesare2}. \end{proof} Since all pairwise distances lie in $\{1,2,3,4\}$ by Lemma~\ref{distancesarezeroto4}, a violation of the triangle inequality must take one of two forms: $d(x,y)=1$ with $\{d(x,z),d(y,z)\}=\{2,4\}$, or $d(x,y)=d(x,z)=1$ with $d(y,z)\in\{3,4\}$. Lemmas~\ref{illegaldistances1}--\ref{illegaldistances2} forbid both, yielding the following lemma. \begin{lemma}\label{itismetric} $([n],d)$ is a metric space. \end{lemma} \begin{proof} Lemmas~\ref{distancesarezeroto4}~and~\ref{illegaldistances1}--\ref{illegaldistances2} establish the triangle inequality for $d$. Furthermore, $d$ is symmetric because (1)~freezing $d(x,y)$ automatically freezes $d(y,x)$ to the same value, (2)~forbidding repeated queries prevents equation~(\ref{distanceassignment}) from assigning inconsistent values to one distance and (3)~equation~(\ref{completingthemetric}) is symmetric. All the other axioms for metrics are easy to verify. \end{proof} Recall that $p$ denotes the output of $A^d$. We proceed to compare $\sum_{x\in[n]}\,d(p,x)$ with $\sum_{x\in[n]}\,d(\hat{p},x)$.
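The construction can be exercised end to end on a toy instance. The Python sketch below is entirely ours (points are $0$-indexed, and the "algorithm" simply queries all distances from point $0$ and then outputs it, which also realizes the padding assumption of equation~(\ref{algorithmoutputheavilyqueried})); it builds $d$ via equations~(\ref{distanceassignment})~and~(\ref{completingthemetric}) and verifies Lemma~\ref{itismetric} by brute force:

```python
import math
from itertools import combinations, permutations

n, delta = 20, 0.1
S = set(range(math.ceil(delta * n)))  # |S| = ceil(delta * n)
alpha = [0] * n
d = {}  # frozenset({x, y}) -> d(x, y)

def freeze(x, y, v):
    d[frozenset((x, y))] = v
    alpha[x] += 1
    alpha[y] += 1

# Toy deterministic algorithm: query (0, y) for every y, output p = 0.
for y in range(1, n):
    x = 0
    if x in S and y in S:
        v = 3
    elif (x in S) != (y in S):
        a = alpha[x] if x in S else alpha[y]
        v = 3 if a <= delta * n else 4
    else:
        v = 2 if max(alpha[x], alpha[y]) <= delta * n else 4
    freeze(x, y, v)
p = 0

B = {x for x in range(n) if alpha[x] > delta * n}
p_hat = min(S, key=lambda x: alpha[x])

# Complete the unfrozen distances by the six-case rule.
for x, y in combinations(range(n), 2):
    if frozenset((x, y)) in d:
        continue
    SB = S | B
    if (x == p_hat and y not in SB) or (y == p_hat and x not in SB):
        v = 1
    elif x in SB and y in SB:
        v = 3
    elif ((x in SB - {p_hat} and y not in SB | {p_hat}) or
          (y in SB - {p_hat} and x not in SB | {p_hat})):
        v = 4
    else:
        v = 2
    d[frozenset((x, y))] = v

def dist(x, y):
    return 0 if x == y else d[frozenset((x, y))]

# d is a metric: check the triangle inequality over all triples.
assert all(dist(a, c) <= dist(a, b) + dist(b, c)
           for a, b, c in permutations(range(n), 3))

cost = lambda x: sum(dist(x, y) for y in range(n))
```

At this toy size ($n=20$, $\delta=0.1$), the costs come out to $\sum_y d(p,y)=73$ and $\sum_y d(\hat{p},y)=21$, a ratio of only about $3.5$; the $4-\epsilon$ bound emerges in the limit of large $n$ and small $\delta$.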
\begin{lemma}\label{identifyingjumps} There exist $k(1)$, $k(2)$, $\ldots$, $k(n-1)\in [q(n)]$ and distinct $z_{k(1)}$, $z_{k(2)}$, $\ldots$, $z_{k(n-1)}\in[n]$ such that \begin{eqnarray} \alpha_{k(t)-1}(p)&=&t-1,\label{beforejumping}\\ \alpha_{k(t)}(p)&=&t,\label{afterjumping}\\ \left(p,z_{k(t)}\right) &\in&\left\{\left(x_{k(t)}, y_{k(t)}\right), \left(y_{k(t)}, x_{k(t)}\right) \right\} \label{algorithmoutputparticipate} \end{eqnarray} for all $t\in[n-1]$. \end{lemma} \begin{proof} By Lemma~\ref{monotonicity}, equation~(\ref{outputallasked}) and the easy fact that $\alpha_0(p)=0$, there exist distinct $k(1)$, $k(2)$, $\ldots$, $k(n-1)\in [q(n)]$ satisfying equations~(\ref{beforejumping})--(\ref{afterjumping}) for all $t\in[n-1]$.\footnote{Observe that $\alpha_i(p)$ must go through all of $0$, $1$, $\ldots$, $n-1$ as $i$ increases from $0$ to $q(n)$.} Lemma~\ref{monotonicity} and equations~(\ref{beforejumping})--(\ref{afterjumping}) imply $p\in\{x_{k(t)}, y_{k(t)}\}$, establishing the existence of $z_{k(t)}$ satisfying equation~(\ref{algorithmoutputparticipate}). If $z_{k(1)}$, $z_{k(2)}$, $\ldots$, $z_{k(n-1)}$ are not distinct, then there are repeated queries by equation~(\ref{algorithmoutputparticipate}), a contradiction. \end{proof} From now on, let $k(1)$, $k(2)$, $\ldots$, $k(n-1)\in [q(n)]$ and distinct $z_{k(1)}$, $z_{k(2)}$, $\ldots$, $z_{k(n-1)}\in[n]$ satisfy equations~(\ref{beforejumping})--(\ref{algorithmoutputparticipate}) for all $t\in[n-1]$. \begin{lemma}\label{algorithmoutputtypicaldistances} For each $t\in[n-1]$, if $t\ge \lceil\delta n\rceil+2$ and $z_{k(t)}\notin S$, then $d(p,z_{k(t)})=4$. \end{lemma} \begin{proof} Assume in equation~(\ref{algorithmoutputparticipate}) that $p=x_{k(t)}$ and $z_{k(t)}=y_{k(t)}$; the other case will be symmetric. By equation~(\ref{beforejumping}), \begin{eqnarray} \alpha_{k(t)-1}\left(x_{k(t)}\right)=t-1> \delta n.
\label{algorithmoutputverymuchasked} \end{eqnarray} \begin{enumerate}[{Case }1:] \item $x_{k(t)}\in S$. By equation~(\ref{distanceassignment}), $x_{k(t)}\in S$ and $y_{k(t)}=z_{k(t)}\notin S$, \begin{eqnarray} d\left(x_{k(t)},y_{k(t)}\right) = \left\{ \begin{array}{ll} 3, &\text{if $\alpha_{k(t)-1}(x_{k(t)})\le \delta n$;}\\ 4, &\text{if $\alpha_{k(t)-1}(x_{k(t)})> \delta n$.} \end{array} \right. \label{distancesfromalgorithmoutput1} \end{eqnarray} \item $x_{k(t)}\notin S$. By equation~(\ref{distanceassignment}), $x_{k(t)}\notin S$ and $y_{k(t)}=z_{k(t)}\notin S$, \begin{eqnarray} d\left(x_{k(t)},y_{k(t)}\right) = \left\{ \begin{array}{ll} 2, &\text{if $\max\{\alpha_{k(t)-1}(x_{k(t)}),\alpha_{k(t)-1}(y_{k(t)})\}\le \delta n$;}\\ 4, &\text{if $\max\{\alpha_{k(t)-1}(x_{k(t)}),\alpha_{k(t)-1}(y_{k(t)})\}> \delta n$.} \end{array} \right. \label{distancesfromalgorithmoutput2} \end{eqnarray} \end{enumerate} Equation~(\ref{algorithmoutputverymuchasked}) together with any one of equations~(\ref{distancesfromalgorithmoutput1})--(\ref{distancesfromalgorithmoutput2}) implies $d(x_{k(t)},y_{k(t)})=4$. Hence $d(p,z_{k(t)})=d(x_{k(t)},y_{k(t)})=4$. \end{proof} We are now able to analyze the quality of $p$ as a solution to {\sc metric $1$-median}. \begin{lemma}\label{analyzingalgorithmoutputassuboptimal} $$\sum_{x\in[n]}\,d\left(p,x\right)\ge 4\left(n-2\left\lceil\delta n\right\rceil-2\right).$$ \end{lemma} \begin{proof} By the distinctness of $z_{k(1)}$, $z_{k(2)}$, $\ldots$, $z_{k(n-1)}$ in Lemma~\ref{identifyingjumps}, \begin{eqnarray} \sum_{x\in[n]}\,d\left(p,x\right) \ge\sum_{t\in[n-1]}\,d\left(p,z_{k(t)}\right). \label{initialinequalityinanalyzingalgorithmoutput} \end{eqnarray} Write $T=\{t\in[n-1]\mid z_{k(t)}\in S\}$. As $z_{k(1)}$, $z_{k(2)}$, $\ldots$, $z_{k(n-1)}$ are distinct, \begin{eqnarray} |T|\le |S|. \end{eqnarray} Furthermore, \begin{eqnarray} &&\sum_{t\in[n-1]}\,d\left(p,z_{k(t)}\right)\nonumber\\ &\ge&\sum_{t\in[n-1],\,t\ge \lceil\delta n\rceil+2,\,t\notin T}\, d\left(p,z_{k(t)}\right)\nonumber\\ &\stackrel{\text{Lemma~\ref{algorithmoutputtypicaldistances}}}{=}& \sum_{t\in[n-1],\,t\ge \lceil\delta n\rceil+2,\,t\notin T}\, 4\nonumber\\ &\ge& 4\left(n-\left\lceil\delta n\right\rceil-2-|T|\right). \label{lastinequalityinanalyzingalgorithmoutput} \end{eqnarray} Equations~(\ref{initialinequalityinanalyzingalgorithmoutput})--(\ref{lastinequalityinanalyzingalgorithmoutput}) and $|S|=\lceil\delta n\rceil$ complete the proof. \end{proof} We now analyze the quality of $\hat{p}$ as a solution to {\sc metric $1$-median}. The following lemma is immediate from equation~(\ref{completingthemetric}). \begin{lemma}\label{optimalpointhasmanydistancesbeing1} For all $y\in [n]\setminus(S\cup B)$, if $y\neq \hat{p}$ and $(\hat{p},y)$, $(y,\hat{p})\notin \{(x_j,y_j)\mid j\in[q(n)]\}$, then $d(\hat{p},y)=1$. \end{lemma} \begin{lemma}\label{analyzingoptimalpoint} $$\sum_{y\in[n]}\,d\left(\hat{p},y\right) \le n+3\cdot\left(\left\lceil\delta n\right\rceil+o(n)+\delta n\right). $$ \end{lemma} \begin{proof} By equation~(\ref{neighborhoodinsubgraph}), $$N_{q(n)}\left(\hat{p}\right) =\left\{ y\in[n]\mid \left\{ \left(\hat{p},y\right), \left(y,\hat{p}\right) \right\} \cap \left\{\left(x_j,y_j\right)\mid j\in\left[q(n)\right]\right\} \neq\emptyset \right\}.$$ This and Lemma~\ref{optimalpointhasmanydistancesbeing1} imply $d(\hat{p},y)=1$ for all $y\in [n]\setminus(S\cup B)$ with $y\neq \hat{p}$ and $y\notin N_{q(n)}(\hat{p})$. Therefore, \begin{eqnarray} \sum_{y\in [n]\setminus(S\cup B\cup N_{q(n)}(\hat{p}))}\,d\left(\hat{p},y\right) \le n-\left|\,S\cup B\cup N_{q(n)}\left(\hat{p}\right)\,\right|.
\label{distance1parts} \end{eqnarray} Clearly, \begin{eqnarray} \sum_{y\in S\cup B\cup N_{q(n)}(\hat{p})}\,d\left(\hat{p},y\right) \stackrel{\text{Lemma~\ref{distancesarezeroto4}}}{\le} \sum_{y\in S\cup B\cup N_{q(n)}(\hat{p})}\,4 = 4\cdot\left|\,S\cup B\cup N_{q(n)}\left(\hat{p}\right)\,\right|. \label{largedistancesparts} \end{eqnarray} Furthermore, \begin{eqnarray} \left|\,N_{q(n)}\left(\hat{p}\right)\,\right| \stackrel{\text{equation~(\ref{numberoffrozenincidentdistances})}}{=} \alpha_{q(n)}\left(\hat{p}\right) \stackrel{\text{inequality~(\ref{sparselyaskedpointequation})}}{\le} \delta n.\nonumber \end{eqnarray} This and Lemma~\ref{fewbadpoints} imply \begin{eqnarray} \left|\,S\cup B\cup N_{q(n)}\left(\hat{p}\right)\,\right| \le \left\lceil\delta n\right\rceil+o(n)+\delta n \label{fewneighborsforoptimalpoint} \end{eqnarray} as $|S|=\lceil\delta n\rceil$. To complete the proof, sum up inequalities~(\ref{distance1parts})--(\ref{largedistancesparts}) and then apply inequality~(\ref{fewneighborsforoptimalpoint}). \end{proof} Combining Lemmas~\ref{itismetric},~\ref{analyzingalgorithmoutputassuboptimal}~and~\ref{analyzingoptimalpoint} yields our main theorem, stated below. \begin{theorem}\label{maintheorem} {\sc Metric $1$-median} has no deterministic $o(n^2)$-query $(4-\epsilon)$-approximation algorithm for any constant $\epsilon>0$. \end{theorem} \begin{proof} Lemma~\ref{itismetric} asserts that $([n],d)$ is a metric space. By Lemmas~\ref{analyzingalgorithmoutputassuboptimal}~and~\ref{analyzingoptimalpoint}, $$ \sum_{x\in[n]}\,d\left(p,x\right) \ge 4\left(1-8\delta-o(1)\right)\sum_{x\in[n]}\, d\left(\hat{p},x\right). $$ This proves the theorem because the deterministic $o(n^2)$-query algorithm $A$ and the constant $\delta\in(0,0.1)$ are picked arbitrarily (note that $p$ denotes the output of $A^d$). \end{proof} Theorem~\ref{maintheorem} complements Theorem~\ref{nonadaptiveupperbound}.
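For contrast with Theorem~\ref{maintheorem}, exact selection is trivial once all $\binom{n}{2}$ distances are queried. A minimal Python sketch of this $\Theta(n^2)$-query baseline (ours, for illustration only):

```python
def exact_1_median(n, dist):
    """Exhaustively query all distances and return a point with the
    minimum total distance to all points (Theta(n^2) queries)."""
    return min(range(n),
               key=lambda x: sum(dist(x, y) for y in range(n) if y != x))

# Toy metric: n points on a line with d(x, y) = |x - y|; the median
# of 0, 1, 2, 3, 4 is the middle point 2.
assert exact_1_median(5, lambda x, y: abs(x - y)) == 2
```

The interesting regime is therefore $o(n^2)$ queries, where the theorems above show that the approximation ratio $4$ is both achievable and, for deterministic algorithms, essentially optimal.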
It is possible to simplify equation~(\ref{completingthemetric}) at the expensive of an additional assumption. Without loss of generality, we may assume that $\alpha_{q(n)}(x)=n-1$ for all $x\in B$; this increases the query complexity by a multiplicative factor of $O(1)$ by equation~(\ref{badpoints}). Therefore, if $x\in B$ or $y\in B$, then $d(x,y)$ will be frozen by equation~(\ref{distanceassignment}). So the third to fifth cases in equation~(\ref{completingthemetric}), which satisfies $x\in B$ or $y\in B$, can be omitted. \comment Define $$Q\equiv \left\{ \text{unordered pair } \left(x,y\right)\in[n]^2\mid A^d \text{ ever queries for } d\left(x,y\right) \right\}$$ to be the set of queries of $A^d$ treated as unordered pairs. Without loss of generality, assume $(x,x)\notin Q$ for all $x\in[n]$. Let $G=([n],Q)$ be the simple undirected graph with vertex set $[n]$ and edge set $Q$. Denote the degree of $x\in[n]$ in $G$ by $\text{deg}_G(x)=|\{y\in[n]\mid (x,y)\in Q\}|$, and \begin{eqnarray} B\equiv \left\{x\in[n]\mid \text{deg}_G(x)\ge\epsilon n\right\}. \label{setofbadvertices} \end{eqnarray} In the sequel, we will specify $d$ incrementally in several steps. Note that $Q$ and $B$ are independent of $d$ because of the nonadaptivity of $A$; hence they will remain intact during our specification of $d$. Below is an easy lemma. \begin{lemma}[{Implicit in~\cite{Cha12}}]\label{numberofbadvertices} $|\,B\,|=o(n)$. \end{lemma} \begin{proof} We have $$ \epsilon n\, |\,B\,| = \sum_{x\in B}\,\epsilon n \stackrel{\text{Eq.~(\ref{setofbadvertices})}}{\le} \sum_{x\in B}\,\text{deg}_G(x) \le \sum_{x\in[n]}\,\text{deg}_G(x) =2\,|\,Q\,|,$$ where the last equality follows from the well-known fact that the sum of degrees in an undirected graph is twice the number of edges. This completes the proof because $|\,Q\,|=o(n^2)$ is $A$'s query complexity and $\epsilon$ is a constant. 
\end{proof} Henceforth we will assume $n\in\mathbb{Z}^+$ to be sufficiently large so that \begin{eqnarray} n-|\,B\,|-1-\epsilon n>0 \label{numberofremainingpointstobemadegood} \end{eqnarray} by Lemma~\ref{numberofbadvertices}. For all $x\in[n]$, \begin{eqnarray} d\left(x,x\right)\equiv 0. \label{zerodistancestoself} \end{eqnarray} For all $(x,y)\in[n]^2\setminus\{(x,x)\mid x\in[n]\}$ with $x\in B$, $y\in B$ or $(x,y)\in Q$, \begin{eqnarray} d(x,y)\equiv \left\{ \begin{array}{ll} 4, &\text{if }x\in B \text{ or } y\in B;\\ 2, &\text{otherwise}. \end{array} \right. \label{distancesonquerysetandbadvertices} \end{eqnarray} Clearly, this does not assign different values to $d(x,y)$ and $d(y,x)$. As Eq.~(\ref{distancesonquerysetandbadvertices}) specifies $d$ on a superset of $Q$ (which is the set of $A$'s queries) and $A$ is deterministic, the output of $A^d$ has now been fixed even though $d$ is not fully specified yet. Let $p\in[n]\setminus B$ and $p^\prime\in B$ be such that $\{p,p^\prime\}$ contains the output of $A^d$. \begin{lemma}\label{notbadandnotconnectedwithalgorithmoutput} $$\left|\,\left([n]\setminus \left(B\cup\{p\}\right)\right) \cap \left\{x\in[n]\mid (p,x)\notin Q\right\}\,\right| \ge n- \left|\,B\cup\{p\}\,\right|-\epsilon n.$$ \end{lemma} \begin{proof} Eq.~(\ref{setofbadvertices}) and $p\notin B$ imply $\text{deg}_G(p)<\epsilon n$, i.e., $|\,\{x\in[n]\mid(p,x)\in Q\}\,|<\epsilon n$. \end{proof} Take \begin{eqnarray} \hat{p}\in \left([n]\setminus \left(B\cup\{p\}\right)\right) \cap\left\{x\in[n]\mid \left(x,p\right)\notin Q\right\} \label{pickingourpoint} \end{eqnarray} arbitrarily, as can be done by Lemma~\ref{notbadandnotconnectedwithalgorithmoutput} and Eq.~(\ref{numberofremainingpointstobemadegood}). Trivially, $\hat{p}\notin B$. We now complete the specification of $d$. For all $(x,y)\in [n]^2\setminus (Q\cup \{(x,x)\mid x\in[n]\})$ with $x\notin B$ and $y\notin B$,\footnote{These are precisely the pairs whose $d$-distances are not specified by Eqs.~(\ref{zerodistancestoself})--(\ref{distancesonquerysetandbadvertices}).} \begin{eqnarray} d(x,y)\equiv\left\{ \begin{array}{ll} 3,& \text{if } ((x=\hat{p})\land (y= p))\text{ or }((y=\hat{p})\land (x=p));\\ 1, & \text{if }((x=\hat{p})\land (y\neq p))\text{ or }((y=\hat{p})\land (x\neq p));\\ 4, &\text{if }((x=p)\land(y\neq \hat{p}))\text{ or }((y=p)\land(x\neq\hat{p}));\\ 2, &\text{otherwise}. \end{array} \right. \label{makingourpointgoodandalgorithmpointbad} \end{eqnarray} The four cases in Eq.~(\ref{makingourpointgoodandalgorithmpointbad}) are mutually exclusive because $p\neq \hat{p}$ by Eq.~(\ref{pickingourpoint}). Clearly, Eq.~(\ref{makingourpointgoodandalgorithmpointbad}) does not assign different values to $d(x,y)$ and $d(y,x)$. The following lemma is straightforward from Eqs.~(\ref{zerodistancestoself})--(\ref{distancesonquerysetandbadvertices})~and~(\ref{makingourpointgoodandalgorithmpointbad}). \begin{lemma}\label{rangeofdistances} For all $x$, $y\in[n]$, $d(x,y)\in\{0,1,2,3,4\}$. \end{lemma} \begin{lemma}\label{speciallydesignedpointisgood} $$\sum_{y\in[n]}\,d\left(\hat{p},y\right)\le \left(1+4\epsilon\right)n+o(n).$$ \end{lemma} \begin{proof} By Lemmas~\ref{numberofbadvertices}~and~\ref{rangeofdistances}, \begin{eqnarray} \sum_{y\in B}\,d\left(\hat{p},y\right)=o(n).
\label{distancesfromourpointtobad} \end{eqnarray} Furthermore, \begin{eqnarray} \sum_{y\in[n]\text{ s.t.\ }(\hat{p},y)\in Q}\,d\left(\hat{p},y\right) \stackrel{\text{Lemma~\ref{rangeofdistances}}}{\le} \sum_{y\in[n]\text{ s.t.\ }(\hat{p},y)\in Q}\,4 = 4\,\text{deg}_G\left(\hat{p}\right)<4\epsilon n, \label{distancesfromoutpointtoqueried} \end{eqnarray} where the last inequality follows from Eq.~(\ref{setofbadvertices}) and $\hat{p}\notin B$. We have \begin{eqnarray} \sum_{y\in[n]\setminus(B\cup\{p,\hat{p}\})\text{ s.t.\ } (\hat{p},y)\notin Q}\,d\left(\hat{p},y\right) \le n \label{distancesfromourpointtononbadnonqueried} \end{eqnarray} because all summands are $1$ by Eq.~(\ref{makingourpointgoodandalgorithmpointbad}) and $\hat{p}\notin B$. By Lemma~\ref{rangeofdistances}, \begin{eqnarray} \sum_{y\in\{p,\hat{p}\}}\,d\left(\hat{p},y\right)=O(1). \label{thetrivialdistance} \end{eqnarray} Summing up Eqs.~(\ref{distancesfromourpointtobad})--(\ref{thetrivialdistance}) completes the proof. \end{proof} \begin{lemma}\label{outputpointisterribleifnotbad} $$\sum_{y\in[n]}\,d\left(p,y\right)\ge 4\left(n-o(n)-\epsilon n\right).$$ \end{lemma} \begin{proof} Recall that $p\notin B$. We have \begin{eqnarray*} &&\sum_{y\in[n]}\,d\left(p,y\right)\\ &\ge& \sum_{y\in[n]\setminus (B\cup\{p,\hat{p}\}) \text{ s.t.\ }(p,y)\notin Q}\,d\left(p,y\right)\\ &\stackrel{\text{Eq.~(\ref{makingourpointgoodandalgorithmpointbad})}}{=}& \sum_{y\in[n]\setminus (B\cup\{p,\hat{p}\}) \text{ s.t.\ }(p,y)\notin Q}\,4\\ &\ge& 4\left( \left|\, \left\{y\in[n]\setminus \left(B\cup\{p\}\right)\mid \left(p,y\right) \notin Q\right\} \,\right|-1\right)\\ &\stackrel{\text{Lemma~\ref{notbadandnotconnectedwithalgorithmoutput}}}{\ge}& 4\left(n-|\,B\,|-\epsilon n-2\right)\\ &\stackrel{\text{Lemma~\ref{numberofbadvertices}}}{=}& 4\left(n-o(n)-\epsilon n\right). \end{eqnarray*} \end{proof} The next lemma is immediate from Eq.~(\ref{distancesonquerysetandbadvertices}) and $p^\prime\in B$. 
\begin{lemma}\label{outputpointisterribleifbad} $$\sum_{y\in[n]\setminus\{p^\prime\}}\, d\left(p^\prime,y\right)= 4\left(n-1\right).$$ \end{lemma} We proceed to prove that $([n],d)$ is a metric space through a few lemmas. The following lemma is immediate from Eqs.~(\ref{distancesonquerysetandbadvertices})~and~(\ref{makingourpointgoodandalgorithmpointbad}). \begin{lemma}\label{onlybadandalgorithmoutputcanhavedistance4} For all $x,$ $y\in[n]$, if $d(x,y)=4$, then $\{x,y\}\cap (B\cup \{p\})\neq\emptyset$. \end{lemma} Below is a consequence of $p\notin B$ and Eqs.~(\ref{pickingourpoint})--(\ref{makingourpointgoodandalgorithmpointbad}). \begin{lemma}\label{distancebetweennonbadoutputandourdesignedpoint} $d(\hat{p},p)=3$. \end{lemma} \begin{lemma}\label{constructeddistanceismetric} $([n],d)$ is a metric space. \end{lemma} \begin{proof} We only need to prove the triangle inequality for $d$ because all the other axioms are easy to verify. Consider the following cases for all distinct $x$, $y$, $z\in[n]$: \begin{itemize} \item $d(x,y)=1$, $d(x,z)=1$ and $d(y,z)=4$. By Lemma~\ref{sourceofshortdistances}, $x=\hat{p}$. Hence if $y=p$ (resp., $z=p$), then $d(x,y)=3$ (resp., $d(x,z)=3$) by Lemma~\ref{distancebetweennonbadoutputandourdesignedpoint}, a contradiction. Therefore, $p\notin\{y,z\}$, which together with Lemma~\ref{onlybadandalgorithmoutputcanhavedistance4} forces $\{y,z\}\cap B\neq\emptyset$. But if $y\in B$ (resp., $z\in B$), then $d(x,y)=4$ (resp., $d(x,z)=4$) by Eq.~(\ref{distancesonquerysetandbadvertices}), a contradiction. \item $d(x,y)=1$, $d(x,z)=1$ and $d(y,z)=3$. By Lemma~\ref{sourceofshortdistances}, $x=\hat{p}$. On the other hand, $d(y,z)=3$ means $(y,z)\in\{(\hat{p},p),(p,\hat{p})\}$ by Eq.~(\ref{makingourpointgoodandalgorithmpointbad}) (which is the only equation that may set distances to $3$), contradicting $x=\hat{p}$. \item $d(x,y)=1$, $d(x,z)=2$ and $d(y,z)=4$. 
By Lemma~\ref{onlybadandalgorithmoutputcanhavedistance4}, $\{y,z\}\cap(B\cup\{p\})\neq\emptyset$. But if $y\in B$ (resp., $z\in B$), then $d(x,y)=4$ (resp., $d(x,z)=4$) by Eq.~(\ref{distancesonquerysetandbadvertices}), a contradiction. Therefore, $p\in\{y,z\}$. Furthermore, $\hat{p}\in\{x,y\}$ by Lemma~\ref{onlysourceofdistance1}. Consequently, $(p,\hat{p})\in\{(x,y),(x,z),(y,z)\}$ (note that $p\neq \hat{p}$ by Eq.~(\ref{pickingourpoint})), implying $3\in\{d(x,y),d(x,z),d(y,z)\}$ by Lemma~\ref{distancebetweennonbadoutputandourdesignedpoint}, a contradiction. \end{itemize} We have excluded all possibilities of $d(x,y)+d(x,z)<d(y,z)$, where $x$, $y$, $z\in[n]$. \end{proof} Combining Lemmas~\ref{speciallydesignedpointisgood}--\ref{outputpointisterribleifbad},~\ref{constructeddistanceismetric} and that $\{p,p^\prime\}$ contains the output of $A^d$ yields our main theorem. \begin{theorem}\label{maintheorem} {\sc Metric $1$-median} has no deterministic nonadaptive $o(n^2)$-query $(4-\epsilon)$-approximation algorithms for any constant $\epsilon>0$. \end{theorem} Theorem~\ref{maintheorem} shows that the approximation ratio of $4$ in Theorem~\ref{nonadaptiveupperbound} cannot be improved to any constant $c<4$. For a metric space $([n],d)$ and a set $Q\subseteq [n]^2$ of unordered pairs, let $G_Q=([n],Q)$ be the undirected graph with vertex set $[n]$ and edge set $Q$. Assign to each edge $(x,y)$ of $G_Q$ the length $d(x,y)$. For $x$, $y\in[n]$, denote by $d_Q(x,y)$ the shortest-path distance between $x$ and $y$ in $G_Q$. Clearly, $d_Q(x,y)\ge d(x,y)$ for all $x$, $y\in[n]$. For a finite set $D$ and a function $f\colon D\to \mathbb{R}$, the $\ell_1$ norm of $f$ is $\lVert f\rVert_1=\sum_{x\in D}\,|\,f(x)\,|$. The following corollary investigates, with respect to the normalized $\ell_1$ norm, the inapproximability of metrics by small sets of distances. \begin{corollary} There do not exist sets $Q\subseteq[n]^2$ of unordered pairs satisfying \begin{eqnarray} |\,Q\,|&=&o\left(n^2\right),\label{numberofqueries}\\ \frac{\lVert d_Q-d\rVert_1}{\lVert d\rVert_1}&\le& 1-\Omega(1)\label{recoveryerror} \end{eqnarray} for all metric spaces $([n],d)$. \end{corollary} \begin{proof} Suppose for contradiction that $Q\subseteq[n]^2$ satisfies Eqs.~(\ref{numberofqueries})--(\ref{recoveryerror}). Let \begin{eqnarray} \tilde{z}&=&\mathop{\rm argmin}_{z\in[n]}\, \sum_{x\in[n]}\,d_Q\left(z,x\right), \label{optimalsolutiononpseudodistances}\\ z^*&=&\mathop{\rm argmin}_{z\in[n]}\,\sum_{x\in[n]}\,d\left(z,x\right). \nonumber \end{eqnarray} So $z^*$ is the optimal solution to {\sc Metric $1$-median} with respect to $([n],d)$.
Now, \begin{eqnarray} &&\sum_{x\in[n]}\,d\left(\tilde{z},x\right)\label{qualityofpseudo1median}\\ &\le& \sum_{x\in[n]}\,d_Q\left(\tilde{z},x\right)\nonumber\\ &\stackrel{\text{Eq.~(\ref{optimalsolutiononpseudodistances})}}{\le}& \frac{1}{n}\sum_{z\in[n]}\,\sum_{x\in[n]}\,d_Q\left(z,x\right)\nonumber\\ &=&\frac{1}{n}\left\lVert d_Q\right\rVert_1\nonumber\\ &\stackrel{\text{Eq.~(\ref{recoveryerror})}}{\le}& \frac{2-\Omega(1)}{n} \left\lVert d\right\rVert_1\nonumber\\ &=&\frac{2-\Omega(1)}{n}\sum_{x,y\in[n]}\,d\left(x,y\right)\nonumber\\ &\le&\frac{2-\Omega(1)}{n}\sum_{x,y\in[n]}\,\left(d\left(z^*,x\right)+d\left(z^*,y\right)\right) \nonumber\\ &=&\left(2-\Omega(1)\right)\cdot 2\sum_{x\in[n]}\,d\left(z^*,x\right).\label{optimalvalue4times} \end{eqnarray} By Eq.~(\ref{optimalsolutiononpseudodistances}), we may find $\tilde{z}$ with $|\,Q\,|=o(n^2)$ queries, which together with Eqs.~(\ref{qualityofpseudo1median})--(\ref{optimalvalue4times}) contradicts Theorem~\ref{maintheorem}. \end{proof} \section{Additional section --- to be modified} This section modifies XXX slightly to XXX.
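As a mechanical sanity check on the lower-bound construction of Theorem~\ref{maintheorem}, the distance function $d$ of Eqs.~(\ref{zerodistancestoself})--(\ref{makingourpointgoodandalgorithmpointbad}) can be instantiated for a small $n$; in the sketch below the query set, and hence $B$, $p$ (standing in for the algorithm's output), and $\hat{p}$, are arbitrary illustrative stand-ins:

```python
import itertools
import random

# Sketch: instantiate the adversarial metric d of the lower-bound proof
# for a small n.  Q is a random illustrative query set; p plays the role
# of the algorithm's (non-bad) output and p_hat is chosen as in
# Eq. (pickingourpoint).  The checks verify the triangle inequality and
# the cost gap between p_hat and p.

n, eps = 60, 0.25
random.seed(1)
Q = set(random.sample(list(itertools.combinations(range(n), 2)), 150))
deg = [0] * n
for x, y in Q:
    deg[x] += 1
    deg[y] += 1
B = {x for x in range(n) if deg[x] >= eps * n}

def in_Q(x, y):
    return (min(x, y), max(x, y)) in Q

p = next(x for x in range(n) if x not in B)
p_hat = next(x for x in range(n) if x not in B and x != p and not in_Q(x, p))

def d(x, y):
    if x == y:
        return 0
    if x in B or y in B:
        return 4           # Eq. (distancesonquerysetandbadvertices)
    if in_Q(x, y):
        return 2
    if {x, y} == {p, p_hat}:
        return 3           # Eq. (makingourpointgoodandalgorithmpointbad)
    if p_hat in (x, y):
        return 1
    if p in (x, y):
        return 4
    return 2

for x, y, z in itertools.permutations(range(n), 3):
    assert d(x, y) + d(y, z) >= d(x, z)      # ([n], d) is a metric

cost = lambda v: sum(d(v, y) for y in range(n))
assert cost(p) >= 2 * cost(p_hat)   # the gap approaches 4 for small eps, large n
```

The brute-force loop verifies Lemma~\ref{constructeddistanceismetric} on this instance, and the final check mirrors Lemmas~\ref{speciallydesignedpointisgood}~and~\ref{outputpointisterribleifnotbad}.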
\begin{figure} \begin{algorithmic}[1] \FOR{each $(q,r)\in S$} \FOR{each $(q^\prime,r^\prime)\in S$} \IF{$q$, $q^\prime \le \lfloor(n-1)/m\rfloor-1$} \STATE Query for $d(q m+r,q^\prime m+r)$; \STATE Query for $d(q^\prime m+r,q^\prime m+r^\prime)$; \STATE $\tilde{d}(q m+r,q^\prime m+r^\prime)\leftarrow d(q m+r,q^\prime m+r)+d(q^\prime m+r,q^\prime m+r^\prime)$; \ELSE \STATE Query for $d(q m+r,q^\prime m+r^\prime)$; \STATE $\tilde{d}(q m+r,q^\prime m+r^\prime)\leftarrow d(q m+r,q^\prime m+r^\prime)$; \ENDIF \ENDFOR \ENDFOR \STATE $(\hat{q},\hat{r})\leftarrow\mathop{\rm argmin}_{(q,r)\in S} \sum_{(q^\prime,r^\prime)\in S}\, {\tilde{d}}^2(q m+r,q^\prime m+r^\prime)$, breaking ties arbitrarily; \STATE Output $\hat{q} m+\hat{r}$; \end{algorithmic} \caption{Algorithm {\sf Approx-Centroid}.} \label{deterministic16approximationalgorithm} \end{figure} For all $(q,r)$, $(q^\prime,r^\prime)\in S$ and $x\in\{0,1,\ldots,n-1\}$, define \begin{eqnarray} f\left(q,r,q^\prime,x\right) \equiv \left\{ \begin{array}{ll} d(x,q^\prime m+r), & \text{if $q$, $q^\prime\le\lfloor(n-1)/m\rfloor-1$;}\\ 0, & \text{otherwise.} \end{array} \right. \label{additionalterm} \end{eqnarray} The same definition is made by Chang~\cite{Cha13}. \begin{fact}[{\cite[Lemma~2]{Cha13}}]\label{pseudodistanceupper} For all $(q,r)$, $(q^\prime,r^\prime)\in S$ and $x\in\{0,1,\ldots,n-1\}$, $$ \tilde{d}\left(qm+r,q^\prime m+r^\prime\right) \le d\left(x,qm+r\right)+d\left(x,q^\prime m+r^\prime\right)+2f\left(q,r,q^\prime,x\right) $$ after finishing the loop in lines~1--12 of {\sf Approx-Centroid}. \end{fact} \begin{fact}[{\cite[Lemma~4]{Cha13}}] For all $(q,r)$, $(q^\prime,r^\prime)\in S$, $$ d\left(qm+r,q^\prime m+r^\prime\right) \le \tilde{d}\left(qm+r,q^\prime m+r^\prime\right) $$ after finishing the loop in lines~1--12 of {\sf Approx-Centroid}. \end{fact} In the following two lemmas, $(q,r)$ and $(q^\prime,r^\prime)$ are independent and uniformly random elements in $S$.
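A runnable sketch of {\sf Approx-Centroid} follows. The set $S$ is not defined in this excerpt, so the sketch assumes (hypothetically) $S=\{(q,r): 0\le r<m,\ qm+r\le n-1\}$ with $m\approx\sqrt{n}$, so that $qm+r$ ranges over all points; distances are memoized, so repeated queries in lines~4--5 are counted once, which is where the sub-quadratic query count comes from.

```python
import math
import itertools

# Sketch of Approx-Centroid with memoized queries.  S is an assumption
# (not defined in this excerpt): all (q, r) with 0 <= r < m and
# q*m + r <= n-1, where m ~ sqrt(n).  The triangle route in lines 3-7 of
# the pseudocode reuses distances of the form d(., q'm+r), so the number
# of *distinct* queries stays well below the ~n^2/2 pairs.

def approx_centroid(n, dist):
    m = math.isqrt(n)
    S = [(q, r) for q in range((n - 1) // m + 1) for r in range(m)
         if q * m + r <= n - 1]
    cache = {}
    def query(x, y):                      # memoized oracle access
        key = (min(x, y), max(x, y))
        if key not in cache:
            cache[key] = dist(x, y)
        return cache[key]
    qmax = (n - 1) // m - 1
    dt = {}
    for (q, r), (q2, r2) in itertools.product(S, S):
        x, y = q * m + r, q2 * m + r2
        if q <= qmax and q2 <= qmax:      # triangle route via q2*m + r
            dt[(x, y)] = query(x, q2 * m + r) + query(q2 * m + r, y)
        else:
            dt[(x, y)] = query(x, y)
    best = min(S, key=lambda qr: sum(
        dt[(qr[0] * m + qr[1], q2 * m + r2)] ** 2 for (q2, r2) in S))
    return best[0] * m + best[1], len(cache)

# Toy metric: points on a line, d(x, y) = |x - y|.
n = 100
out, n_queries = approx_centroid(n, lambda x, y: abs(x - y))
assert n_queries < n * n / 4              # far fewer than all pairs
cost = lambda z: sum(abs(z - x) for x in range(n))
assert cost(out) <= 16 * min(cost(z) for z in range(n))
```

The final assertion only checks the $16$-approximation guarantee on this toy instance; it is not a proof, and the exact shape of $S$ in Chang's construction may differ.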
\begin{lemma}\label{distancessquareexpected} For all $x\in\{0,1,\ldots,n-1\}$, \begin{eqnarray*} \mathop{\rm E}\left[\,{\tilde{d}}^2\left(qm+r,q^\prime m+r^\prime\right)\,\right] \le 8\cdot\mathop{\rm E}\left[\,d^2\left(x,qm+r\right)\,\right] +8\cdot\mathop{\rm E}\left[\,f^2\left(q,r,q^\prime,x\right)\,\right]. \end{eqnarray*} \end{lemma} \begin{proof} By Fact~\ref{pseudodistanceupper}, \begin{eqnarray*} &&\mathop{\rm E}\left[\,{\tilde{d}}^2\left(qm+r,q^\prime m+r^\prime\right)\,\right]\\ &\le& \mathop{\rm E}\left[\,\left( d\left(x,qm+r\right)+d\left(x,q^\prime m+r^\prime\right) +f\left(q,r,q^\prime,x\right)+f\left(q,r,q^\prime,x\right) \right)^2\,\right]\\ &\le& 4\cdot \mathop{\rm E}\left[\, d^2\left(x,qm+r\right)+d^2\left(x,q^\prime m+r^\prime\right) +f^2\left(q,r,q^\prime,x\right)+f^2\left(q,r,q^\prime,x\right) \,\right]\\ &=&8\cdot\mathop{\rm E}\left[\,d^2\left(x,qm+r\right)\,\right] +8\cdot\mathop{\rm E}\left[\,f^2\left(q,r,q^\prime,x\right)\,\right], \end{eqnarray*} where the second inequality follows from the Cauchy--Schwarz inequality and the last equality uses that $(q,r)$ and $(q^\prime,r^\prime)$ are identically distributed. \end{proof} \begin{lemma}\label{fsquarelemma} For all $x\in\{0,1,\ldots,n-1\}$, \begin{eqnarray*} \mathop{\rm E}\left[\,f^2\left(q,r,q^\prime,x\right)\,\right] \le \mathop{\rm E}\left[\,d^2\left(x,q^\prime m+r^\prime\right)\,\right]. \end{eqnarray*} \end{lemma} \begin{proof} By Eq.~(\ref{additionalterm}), {\small \begin{eqnarray} \mathop{\rm E}\left[\,f^2\left(q,r,q^\prime,x\right)\,\right] = \Pr\left[\,q,q^\prime\le \left\lfloor\frac{n-1}{m}\right\rfloor-1\,\right] \cdot \mathop{\rm E}\left[\,d^2\left(x,q^\prime m+r\right)\mid q,q^\prime\le \left\lfloor\frac{n-1}{m}\right\rfloor-1\,\right]. \label{fsquare} \end{eqnarray}} Observe that, conditional on any realization of $q$ and $q^\prime$ with $q$, $q^\prime\le \lfloor(n-1)/m\rfloor-1$, both $r$ and $r^\prime$ are uniformly distributed over $\{0,1,\ldots,m-1\}$.
Therefore, {\small \begin{eqnarray*} \mathop{\rm E}\left[\,d^2\left(x,q^\prime m+r\right)\mid q,q^\prime\le \left\lfloor\frac{n-1}{m}\right\rfloor-1\,\right] = \mathop{\rm E}\left[\,d^2\left(x,q^\prime m+r^\prime\right)\mid q,q^\prime\le \left\lfloor\frac{n-1}{m}\right\rfloor-1\,\right]. \end{eqnarray*}} This and Eq.~(\ref{fsquare}) imply \begin{eqnarray*} &&\mathop{\rm E}\left[\,f^2\left(q,r,q^\prime,x\right)\,\right]\nonumber\\ &=& \Pr\left[\,q,q^\prime\le \left\lfloor\frac{n-1}{m}\right\rfloor-1\,\right] \cdot \mathop{\rm E}\left[\,d^2\left(x,q^\prime m+r^\prime\right)\mid q,q^\prime\le \left\lfloor\frac{n-1}{m}\right\rfloor-1\,\right]\\ &\le& \mathop{\rm E}\left[\,d^2\left(x,q^\prime m+r^\prime\right)\,\right]. \end{eqnarray*} \end{proof} Below is a consequence of Lemmas~\ref{distancessquareexpected}--\ref{fsquarelemma}. \begin{lemma}\label{therecomestheratioof16} For all $x\in\{0,1,\ldots,n-1\}$, \begin{eqnarray*} \mathop{\rm E}\left[\,{\tilde{d}}^2\left(qm+r,q^\prime m+r^\prime\right)\,\right] \le 16\cdot \mathop{\rm E}\left[\,d^2\left(x,qm+r\right)\,\right]. \end{eqnarray*} \end{lemma} \begin{theorem} {\sc Metric $1$-median} has a deterministic $O(n^{3/2})$-query $16$-approximation algorithm. \end{theorem} \begin{proof} By line~13 of {\sf Approx-Centroid}, \begin{eqnarray*} \sum_{(q^\prime,r^\prime)\in S}\,{\tilde{d}}^2\left(\hat{q}m+\hat{r},q^\prime m+r^\prime\right) \le \frac{1}{n}\cdot \sum_{(q,r)\in S}\,\sum_{(q^\prime,r^\prime)\in S}\, {\tilde{d}}^2\left(qm+r,q^\prime m+r^\prime\right). \end{eqnarray*} Equivalently, \begin{eqnarray*} \mathop{\rm E}\left[\,{\tilde{d}}^2\left(\hat{q}m+\hat{r},q^\prime m+r^\prime\right)\,\right] \le \mathop{\rm E}\left[\,{\tilde{d}}^2\left(qm+r,q^\prime m+r^\prime\right)\,\right].
\end{eqnarray*} By Lemma~\ref{therecomestheratioof16}, \begin{eqnarray*} \mathop{\rm E}\left[\,{\tilde{d}}^2\left(qm+r,q^\prime m+r^\prime\right)\,\right] \le \min_{x=0}^{n-1}\,16\cdot \mathop{\rm E}\left[\,d^2\left(x,qm+r\right)\,\right]. \end{eqnarray*} \end{proof} \section{Something new} For a metric space $([n],d)$, uniformly random points $\bs{u}$, $\bs{v}$ in $[n]$ and the output $z$ of Indyk's algorithm given oracle access to $([n],d)$, \begin{eqnarray*} &&\mathop{\rm E}\left[\,\left|\,\frac{1}{2}\left(d\left(\bs{u},z\right)+d\left(\bs{v},z\right)\right)-d\left(\bs{u},\bs{v}\right)\,\right|\,\right]\\ &\le& \mathop{\rm E}\left[\,\left|\,\frac{1}{2}d\left(\bs{u},z\right)-\frac{1}{2}d\left(\bs{u},\bs{v}\right)\,\right|\,\right] +\mathop{\rm E}\left[\,\left|\,\frac{1}{2}d\left(\bs{v},z\right)-\frac{1}{2}d\left(\bs{u},\bs{v}\right)\,\right|\,\right]\\ &=& \mathop{\rm E}\left[\,\left|\,d\left(\bs{u},z\right)-d\left(\bs{u},\bs{v}\right)\,\right|\,\right]\\ &\le&\mathop{\rm E}\left[\,d\left(z,\bs{v}\right)\,\right]. \end{eqnarray*} That is, writing $\tilde{d}(x,y)\equiv (d(x,z)+d(y,z))/2$ for all $x$, $y\in[n]$, $$\left\|\,\tilde{d}-d\,\right\|_1 \le n^2\,\mathop{\rm E}\left[\,d\left(z,\bs{v}\right)\,\right].$$ This and the easily verifiable fact that $\tilde{d}$ satisfies the triangle inequality on $[n]$ show how to recover $d$ in $O(n)$ time with a bounded $\ell_1$ error. \bibliographystyle{plain}
1401.2695
\section{Introduction} Self-assembled quantum dots (QDs), also known as artificial atoms, are made of millions of atoms. \cite{bimberg_book} Unlike real atoms, which have constant physical properties, QDs differ from each other and show far more complicated behavior. The physical properties of QDs are determined by the combined effects of the strain distribution, alloy composition, interfaces, etc., which are fixed during the growth process. It is a great challenge, as well as an opportunity, to tune the QDs to desired properties (e.g., the exciton energy, polarization, and fine structure splitting) by external fields, which is not only interesting for fundamental physics, but also extremely important for device applications. However, despite its importance, our understanding of the interplay between QDs and external fields is still very limited. One of the most prominent applications of QDs is as entangled photon emitters based on the biexciton cascade process,\cite{benson00,stevenson06} which has attracted enormous interest in the last decade. However, though the scheme is simple in principle, it is not easy to implement experimentally: because of the in-plane anisotropy of the QDs, the two biexciton decay pathways may have a small energy difference known as the fine structure splitting (FSS). When the FSS exceeds the radiative linewidth ($\sim$ 1.0 $\mu$eV), the polarization entanglement is destroyed.\cite{stevenson06,gong08} Great effort has been made to reduce the FSS using various post-growth tuning techniques.\cite{bennett10,gerardot07,vogel07,trotta12,ding10,jons11,seidl06,dou08,gong11} In particular, it has recently been demonstrated that the FSS can be universally suppressed through the combination of electric field and stress,\cite{wang12,trotta12} regardless of the dots' details.
In previous works, we developed an effective model~\cite{gong11,wang12} based on symmetry analysis to explain how the FSS changes under external stress. The results obtained from the simple effective model are in excellent agreement with those obtained from the more sophisticated empirical pseudopotential method (EPM) \cite{williamson00} combined with configuration interaction (CI) calculations, \cite{Franceschetti1999} as well as with the experimental results. \cite{plumhof11,kuklewicz12,trotta12} However, there is a huge gap between the effective model and the EPM calculations in understanding how exactly the strain modifies the exciton coupling in the QDs at the microscopic level. The purpose of this paper is to bridge the gap between the effective model and the pseudopotential calculations. We derive analytically the exciton FSS under external stresses in self-assembled InAs/GaAs QDs using the Bir-Pikus model.\cite{pikus} We show that the strain-induced valence-band mixing and valence-conduction band (VB-CB) coupling play the most important roles in tuning the FSS. A detailed comparison between the Bir-Pikus model and the EPM calculations shows that the simple Bir-Pikus model provides a semi-quantitative description of the FSS under strain. We further clarify the change of the polarization angle under external stresses. The rest of the paper is organized as follows. In Sec.~\ref{sec:single-particle}, we discuss how the single-particle states of a QD vary under external stresses using the Bir-Pikus model. We discuss how the electron-hole exchange integrals and the FSS change under external stresses in Sec.~\ref{sec:FSS}, and the exciton polarizations in Sec.~\ref{sec:polarization}. We summarize in Sec.~\ref{sec:summary}. \section{Single-particle states in a QD under external strain} \label{sec:single-particle} We first look at how the single-particle states in a QD vary under an external strain field. Usually the applied uniaxial stress is less than $\pm$ 100 MPa.
Under such small stress, the shape of the QDs changes very little. We therefore neglect the change of the envelope functions of the single-particle states and focus on the underlying atomistic wave functions. We further assume that the dots have a uniformly distributed strain due to the lattice mismatch between the InAs dot and the GaAs matrix, and neglect the interface effects for the moment. The influence of strain on the valence states in zinc-blende structures can be described by the Bir-Pikus model.\cite{pikus} We expand the Bir-Pikus Hamiltonian in the basis of the six $\ket{j,j_z}$ states, i.e., the heavy-hole (HH) $\ket{3/2,\pm 3/2}$, light-hole (LH) $\ket{3/2,\pm 1/2}$ and spin-orbit split-off (SO) $\ket{1/2,\pm 1/2}$ states, resulting in the following $6\times6$ matrix, \begin{equation}\label{eq:HBP} \left(\begin{array}{cccccc} P+Q & 0 & -\sqrt{2}S & R & S & \sqrt{2}R \\ 0 & P+Q & R^{\ast} & \sqrt{2}S^{\ast} & \sqrt{2}R^{\ast} & -S^{\ast} \\ -\sqrt{2}S^{\ast} & R & P-Q & 0 & \sqrt{2}Q & -\sqrt{3}S\\ R^{\ast} & \sqrt{2}S & 0 & P-Q & \sqrt{3}S^{\ast} & \sqrt{2}Q \\ S^{\ast} & \sqrt{2}R & \sqrt{2}Q & \sqrt{3}S & P & 0 \\ \sqrt{2}R^{\ast} & -S & -\sqrt{3}S^{\ast} & \sqrt{2}Q & 0 & P \end{array}\right)\, , \end{equation} where, \begin{eqnarray} P &=& a_{v}(e_{xx}+e_{yy}+e_{zz}) \, ,\\ Q &=& \frac{b_v}{2}(e_{xx}+e_{yy}-2e_{zz}) \, , \\ R &=& \frac{\sqrt{3}}{2}b_v(e_{xx}-e_{yy})-id_v e_{xy}\, ,\label{eq:BP-R} \\ S &=& \frac{d_v}{\sqrt{2}}(e_{zx}-ie_{yz})\, . \label{eq:PQRS} \end{eqnarray} $a_{v}$, $b_{v}$, and $d_{v}$ are the isotropic, biaxial, and shear deformation potentials, respectively, and $e_{ij}$ are the strain components in the QDs. $P$ describes the effects of the isotropic hydrostatic strain and $Q$ is associated with the biaxial strain. The effects of in-plane and off-plane strain anisotropy are accounted for by $R$ and $S$.
In self-assembled InAs/GaAs QDs grown on a (001) GaAs substrate, the dot material is compressed in the growth plane and stretched along the growth direction. We also consider the effects of strain anisotropy in the growth plane ($e_{xy}$, $e_{xx}-e_{yy}$) and of the off-plane shear strains ($e_{zx}$ and $e_{yz}$). For most III-V semiconductors, the SO bands lie several hundred meV below the HH and LH bands, and the SO band was therefore ignored in many previous works.\cite{Leger2007,Testelin2009,Tonin2012} However, for self-assembled InGaAs/GaAs QDs and other nanostructures with large lattice mismatch, the biaxial strain is very large, which pushes the LH bands down towards the SO bands; the coupling to the SO bands is therefore also important, as will be demonstrated in this work. The full Hamiltonian should thus also include the SO coupling term, \begin{equation} H_{\mathrm{SO}}=\frac{\Delta}{3}\left(\begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & -2& 0 \\ 0 & 0 & 0 & 0 & 0 & -2 \end{array}\right). \end{equation} Here, $\Delta \sim$ 390 meV is the SO splitting parameter of InAs. The total Hamiltonian is given by $H=H_{\mathrm{BP}}+H_{\mathrm{SO}}$. Because of the large lattice mismatch (7\%) between InAs and GaAs in the InAs/GaAs QDs, the biaxial strain ($|e_{xx}+e_{yy}-2e_{zz}| \sim$ 24\%) is much larger than the shear strains ($|e_{xx}-e_{yy}|\sim|e_{yz}|\sim|e_{zx}|\sim$ 1\%, $|e_{xy}|\sim0.5$\%). As a consequence, $Q$ is comparable with the SO parameter $\Delta$ and much larger than $|R|$ and $|S|$. Therefore, we treat $R$ and $S$ as perturbations in the Hamiltonian.
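The total Hamiltonian above is straightforward to set up and diagonalize numerically; a minimal NumPy sketch, in which the strain values are illustrative numbers of the magnitude quoted in the text rather than the output of a strain calculation:

```python
import numpy as np

# Sketch: H = H_BP + H_SO in the basis {|3/2,+3/2>, |3/2,-3/2>,
# |3/2,+1/2>, |3/2,-1/2>, |1/2,+1/2>, |1/2,-1/2>}, following Eq. (eq:HBP)
# and the SO term.  Deformation potentials are the bulk InAs values of
# Table I (eV); the strain components are illustrative.

def hamiltonian(e, av=-1.0, bv=-1.8, dv=-3.6, delta=0.39):
    P = av * (e['xx'] + e['yy'] + e['zz'])
    Q = 0.5 * bv * (e['xx'] + e['yy'] - 2.0 * e['zz'])
    R = (np.sqrt(3) / 2) * bv * (e['xx'] - e['yy']) - 1j * dv * e['xy']
    S = (dv / np.sqrt(2)) * (e['zx'] - 1j * e['yz'])
    Rc, Sc = np.conj(R), np.conj(S)
    s2, s3 = np.sqrt(2), np.sqrt(3)
    H = np.array([
        [P + Q,    0,       -s2 * S,  R,        S,        s2 * R],
        [0,        P + Q,   Rc,       s2 * Sc,  s2 * Rc,  -Sc],
        [-s2 * Sc, R,       P - Q,    0,        s2 * Q,   -s3 * S],
        [Rc,       s2 * S,  0,        P - Q,    s3 * Sc,  s2 * Q],
        [Sc,       s2 * R,  s2 * Q,   s3 * S,   P,        0],
        [s2 * Rc,  -S,      -s3 * Sc, s2 * Q,   0,        P],
    ])
    return H + (delta / 3.0) * np.diag([1, 1, 1, 1, -2, -2])

# Biaxially strained dot without shear or in-plane anisotropy (R = S = 0):
e = dict(xx=-0.07, yy=-0.07, zz=0.05, xy=0.0, yz=0.0, zx=0.0)
H = hamiltonian(e)
assert np.allclose(H, H.conj().T)          # Hermitian
w = np.linalg.eigvalsh(H)                  # ascending eigenvalues
P = -1.0 * (e['xx'] + e['yy'] + e['zz'])
Q = 0.5 * (-1.8) * (e['xx'] + e['yy'] - 2 * e['zz'])
# With R = S = 0 the topmost (doubly degenerate) valence level is the HH
# state at P + Q + delta/3, the starting point of the perturbation theory:
assert np.isclose(w[-1], P + Q + 0.39 / 3)
assert np.isclose(w[-1], w[-2])
```

Switching on small $e_{xx}-e_{yy}$, $e_{xy}$, or shear components then mixes LH and SO character into these two states, which is what the perturbative expressions in the next paragraphs quantify.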
We calculate the first two (degenerate) hole states up to second order in $R$ and $S$, \begin{eqnarray} \mathcal{N}\ket{\psi^{v}_{-}} = \ket{\frac{3}{2},+\frac{3}{2}} &+\chi_{\alpha}\ket{\frac{3}{2},+\frac{1}{2}} +\chi_{\beta}\ket{\frac{1}{2},+\frac{1}{2}} \nonumber\\ &+\varepsilon_{\alpha}\ket{\frac{3}{2},-\frac{1}{2}} +\varepsilon_{\beta}\ket{\frac{1}{2},-\frac{1}{2}}, \nonumber \\ \mathcal{N}\ket{\psi^{v}_{+}}= \ket{\frac{3}{2},-\frac{3}{2}} &+\varepsilon_{\alpha}^{\ast}\ket{\frac{3}{2},+\frac{1}{2}} +\varepsilon_{\beta}^{\ast}\ket{\frac{1}{2},+\frac{1}{2}} \nonumber\\&-\chi_{\alpha}^{\ast}\ket{\frac{3}{2},-\frac{1}{2}} -\chi_{\beta}^{\ast}\ket{\frac{1}{2},-\frac{1}{2}}, \end{eqnarray} where \begin{eqnarray} \varepsilon_{\alpha}&=&\frac{(3Q+\Delta)R^{\ast}+\sqrt{3}(S^{\ast})^2}{2Q\Delta} \, ,\nonumber \\ \varepsilon_{\beta}&=&\frac{3\sqrt{2}Q R^{\ast}+\sqrt{6}(S^{\ast})^2}{2Q\Delta} \, ,\nonumber\\ \chi_{\alpha}&=&\frac{-\sqrt{2}\Delta S^{\ast}-\sqrt{6}SR^{\ast}}{2Q\Delta} \, ,\nonumber\\ \chi_{\beta}&=&\frac{\sqrt{3}SR^{\ast}}{2Q\Delta}\, , \label{eq:mixing} \end{eqnarray} and $\mathcal{N}^2=1+|\varepsilon_{\alpha}|^2+|\varepsilon_{\beta}|^2+|\chi_{\alpha}|^2+|\chi_{\beta}|^2 \sim$ 1 is the normalization factor. The single-particle energy of the two states is, \begin{eqnarray} E_{v} &=& P+Q+{\Delta \over 3} +{2\Delta|S|^2+(9Q+\Delta)|R|^2 \over 2Q\Delta} \\ \nonumber && +\frac{3\sqrt{3}(S^2R^{\ast}+(S^{\ast})^2R)}{2Q\Delta}\, . \end{eqnarray} These two states are still dominated by the HH ($j=3/2$) states but are mixed with some LH and SO components. There are two mixing mechanisms: (i) The mixing between the HH and the $j_{z}$ states of opposite sign is mainly due to the in-plane anisotropic strain effects or shape asymmetry ($R$). The mixing amplitude is determined by $\varepsilon_{\alpha}$ and $\varepsilon_{\beta}$. (ii) The mixing between the HH and the $j_{z}$ states of the same sign is mainly due to the off-plane shear strain components ($S$).
The mixing amplitude is determined by $\chi_{\alpha}$ and $\chi_{\beta}$. Both mixing mechanisms have important influence on the optical properties of the QDs, which will be discussed later in the paper. For simplicity of the discussion, we ignore the VB-CB coupling for the moment; we will see later that the VB-CB coupling is also important for the FSS change under strain, which is addressed in the appendix. The Bloch parts of the conduction states are dominated by the lowest electron bands, $\ket{\psi^{c}_{+}}=\ket{e\uparrow}$ and $\ket{\psi^{c}_{-}}=\ket{e\downarrow}$. The energy of the conduction states depends only on the hydrostatic strain: \begin{equation} \delta E_c (\tensor{e}) = a_c (e_{xx}+e_{yy}+e_{zz}). \end{equation} Under external stresses, the strain distribution in the QDs changes accordingly, which changes the single-particle energy levels as well as the coupling between the HH, LH and SO bands. We take the uniaxial stress along the [110] direction as an example. The change of the strain components under a stress $p$ along the [110] direction is given by, \begin{equation}\begin{split} \Delta e_{xx} & = \Delta e_{yy} = -\frac{1}{2} (S_{11}+S_{12}) p \, ,\\ \Delta e_{zz} & = -S_{12} p\, , \\ \Delta e_{xy} & = -\frac{1}{4} S_{44} p \, ,\\ \Delta e_{zx} & = \Delta e_{yz} = 0\, . \end{split} \end{equation} Here, we take compressive stress as positive, and the parameters $P$, $Q$, $R$, $S$ in the Bir-Pikus Hamiltonian under stress along the [110] direction can be written as, \begin{eqnarray} P(p) & =& P(0) - a_{v} (S_{11}+2 S_{12}) p \, , \nonumber \\ Q(p) & =& Q(0) - \frac{1}{2} b_{v} (S_{11}-S_{12}) p \, , \nonumber \\ R(p) & =& R(0) + i \frac{1}{4} d_{v} S_{44} p \, , \nonumber \\ S(p) & =& S(0) \, . \label{eq:pqrs} \end{eqnarray} Interestingly, $S$ does not change with the stress along the [110] direction.
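These relations can be checked numerically: applying the strain increments (with $\Delta e_{zz}=-S_{12}\,p$, the value consistent with the quoted $P(p)$ and $Q(p)$) to the definitions of $P$, $Q$, $R$, $S$ reproduces the closed forms above. The unstrained strain values below are illustrative; the material constants are from Table~\ref{tab:param}:

```python
import numpy as np

# Consistency sketch of the [110]-stress relations: apply the strain
# increments and compare with the closed forms P(p), Q(p), R(p), S(p).
# GaAs compliances (GPa^-1) and bulk InAs deformation potentials (eV)
# from Table I; the base strain e0 is illustrative.

S11, S12, S44 = 1.17e-2, -0.37e-2, 1.68e-2
av, bv, dv = -1.0, -1.8, -3.6

def PQRS(e):
    P = av * (e['xx'] + e['yy'] + e['zz'])
    Q = 0.5 * bv * (e['xx'] + e['yy'] - 2 * e['zz'])
    R = (np.sqrt(3) / 2) * bv * (e['xx'] - e['yy']) - 1j * dv * e['xy']
    S = (dv / np.sqrt(2)) * (e['zx'] - 1j * e['yz'])
    return P, Q, R, S

p = 0.1   # 100 MPa compressive stress along [110], in GPa
e0 = {'xx': -0.07, 'yy': -0.07, 'zz': 0.05, 'xy': 0.005, 'yz': 0.01, 'zx': 0.01}
e1 = dict(e0)
e1['xx'] += -0.5 * (S11 + S12) * p
e1['yy'] += -0.5 * (S11 + S12) * p
e1['zz'] += -S12 * p
e1['xy'] += -0.25 * S44 * p

P0, Q0, R0, S0 = PQRS(e0)
P1, Q1, R1, S1 = PQRS(e1)
assert np.isclose(P1, P0 - av * (S11 + 2 * S12) * p)
assert np.isclose(Q1, Q0 - 0.5 * bv * (S11 - S12) * p)
assert np.isclose(R1, R0 + 1j * 0.25 * dv * S44 * p)   # R gains an imaginary part
assert np.isclose(S1, S0)                              # S unaffected by [110] stress
```

The last assertion is the remark above: a [110] stress generates no off-plane shear, so $S$ is untouched while $R$ acquires an extra imaginary part $\propto d_v S_{44} p$.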
We have \begin{equation} \frac{d E_v}{d p} \approx -a_{v} (S_{11}+2 S_{12})-\frac{1}{2} b_{v} (S_{11}-S_{12}) \, , \end{equation} and \begin{equation} {dE_{c} \over d p}= -a_c (S_{11}+2S_{12}). \end{equation} Because the envelope functions of the electron and hole states change little if the external stress is not very large, the direct electron-hole Coulomb interaction also changes little. The change of the exciton energy is therefore mainly determined by the single-particle energies. We can estimate the rate of change of the exciton energy with the stress along the [110] direction as, \begin{equation} \frac{d E_{X^{0}} }{d p} \approx -(a_c - a_v)(S_{11}+2S_{12})+\frac{1}{2}b_{v} (S_{11} - S_{12})\, . \label{eq:exciton-energy} \end{equation} Using the deformation potential parameters for bulk InAs material and the elastic compliance constants for bulk GaAs material listed in Table \ref{tab:param}, we obtain $d E_{X^{0}}/d p \approx$ 12.3 $\mu$eV/MPa. This value is consistent with recent experimental results. \cite{kuklewicz12} Although the exciton energy can be tuned by the stress along the [110] direction, the tuning slope is rather small because of the cancellation effect between the conduction and valence bands in Eq.~(\ref{eq:exciton-energy}). Furthermore, in QDs the confinement potentials, alloy effects, etc. may also play important roles in the exciton emission energies; therefore, ${d E_{X^{0}} / d p}$ may vary from dot to dot. \cite{kuklewicz12} \begin{table}[htbp] \caption{\label{tab:param} The compliance constants for GaAs and deformation potentials for InAs.
The deformation potentials for strained InAs are calculated by EPM using isotropic and biaxial strains for typical QDs.} \centering \begin{tabular}{lcccc} \hline\hline Parameters & Unit & GaAs & InAs(bulk)\cite{Vurgaftman2001} & InAs (strained)\\ \hline $a_{c}$ & eV & -- & -5.08 & --\\ $a_{v}$ & eV & -- & -1.0 & -0.23\\ $b_{v}$ & eV & -- & -1.8 & -2.22\\ $d_{v}$ & eV & -- & -3.6 & -6.49\\ $\Delta$& eV & -- & 0.39 & 0.33\\ $S_{11}$ & 10$^{-2}$ GPa$^{-1}$ & 1.17 & &\\ $S_{12}$ & 10$^{-2}$ GPa$^{-1}$ & -0.37 & &\\ $S_{44}$ & 10$^{-2}$ GPa$^{-1}$ & 1.68 & &\\ \hline\hline \end{tabular} \end{table} \begin{figure} \includegraphics[width=3.4in]{./Mixing.eps} \caption{\label{fig:mixing} (Color online) Valence-band mixing of the first hole state in a pure lens-shaped InAs/GaAs dot with base $D$=25 nm and height $h$=3.5 nm under uniaxial stress along the [110] direction, showing (a) HH$_+$, (b) HH$_-$, (c) LH$_+$, (d) LH$_-$, (e) SO$_+$ and (f) SO$_-$ components. The red squares are calculated from EPM, whereas the blue lines are obtained from the Bir-Pikus model.} \end{figure} \section{Electron-hole exchange interaction and FSS} \label{sec:FSS} In this section we discuss how the external strain modifies the exciton exchange energies and the FSS. The matrix elements of the exciton Hamiltonian between different spin configurations are written as,~\cite{Franceschetti1999} \begin{equation}\begin{split} \mathcal{H}_{v'c',vc}&=\bra{\Phi_{v'c'}}\mathcal{H}\ket{\Phi_{vc}} \\&=(E_c-E_v)\delta_{c,c'}\delta_{v,v'} -J_{v'c',vc}+K_{v'c',vc} \, , \end{split}\end{equation} where the $J$s and $K$s are the Coulomb and exchange integrals, respectively. We consider only the first two hole states ($\psi^{v}_{+}$ and $\psi^{v}_{-}$) and electron states ($\psi^{c}_{+}$ and $\psi^{c}_{-}$). By symmetry, only the configurations with anti-parallel spins $(\psi^{v}_{-} \psi^{c}_{+}, \psi^{v}_{+} \psi^{c}_{-})$ contribute to the bright excitons (BE).
In this basis, the many-particle Hamiltonian for the bright excitons is \begin{equation} H_{\textrm{BE}}=(E_c-E_v)-J_{eh}+\begin{bmatrix} K_{\rm d} & K_{\rm od} \\ K^{\ast}_{\rm od} & K_{\rm d} \end{bmatrix}\, , \label{eq:exciton} \end{equation} where $J_{eh}$ is the electron-hole Coulomb interaction, $K_{\rm d}$ is the diagonal exchange energy, which determines the dark-bright exciton energy splitting, whereas the off-diagonal exchange energy, \begin{equation}\begin{split} K_{\rm od} &= \bra{\psi^{v}_{-}\psi^{c}_{+}}\mathcal{K}_{\mathrm{ex}}\ket{\psi^{v}_{+}\psi^{c}_{-}}, \\ & = \iint\frac{[\psi_{+}^{v}(x_1)\psi_{+}^{c}(x_2)]^{\ast}\psi^{c}_{-}(x_1)\psi^{v}_{-}(x_2)} {\bar{\mathcal{\epsilon}}(\mathbf{r}_1,\mathbf{r}_2)|\mathbf{r}_1-\mathbf{r}_2|} dx_1dx_2 \, , \end{split}\end{equation} is responsible for the bright-exciton energy splitting. After diagonalizing Eq.~(\ref{eq:exciton}), the two bright-exciton eigenstates can be written as, \begin{align} \ket{B_{1}}&= \frac{1}{\sqrt{2}}(\ket{\psi^{v}_{-}\psi^{c}_{+}} + \mathrm{e}^{i2\theta}\ket{\psi^{v}_{+}\psi^{c}_{-}}) \, ,\\ \ket{B_{2}}&= \frac{1}{\sqrt{2}}(\ket{\psi^{v}_{-}\psi^{c}_{+}} - \mathrm{e}^{i2\theta}\ket{\psi^{v}_{+}\psi^{c}_{-}})\, , \end{align} with $2\theta=-\mathrm{arg}(K_{\rm od})$. The energy splitting between the two bright excitons, which is known as the FSS, is given by $\Delta_{\mathrm{FSS}}=2|K_{\rm od}|$.
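The structure of the $2\times2$ bright-exciton block is easy to verify numerically: for any complex $K_{\rm od}$ the two eigenvalues split by exactly $2|K_{\rm od}|$, independently of the common diagonal. The numbers below are illustrative, not computed exchange integrals:

```python
import numpy as np

# Sketch: diagonalize the 2x2 exchange block of Eq. (eq:exciton).
# K_d and K_od are arbitrary illustrative values (eV).

K_d = 50e-6
K_od = (3 + 4j) * 1e-6

H_BE = np.array([[K_d, K_od], [np.conj(K_od), K_d]])
w, v = np.linalg.eigh(H_BE)
fss = w[1] - w[0]
assert np.isclose(fss, 2 * abs(K_od))   # Delta_FSS = 2|K_od|

# The eigenvectors are (1, +-e^{i 2 theta})/sqrt(2) with 2 theta = -arg(K_od),
# i.e. the polarization angle theta is set by the phase of K_od:
theta2 = -np.angle(K_od)
```

This also makes the role of the phase of $K_{\rm od}$ explicit: its modulus fixes the FSS while its argument fixes the exciton polarization angle discussed in Sec.~\ref{sec:polarization}.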
Let $a=(X+iY)/\sqrt{2}$ and $b=(X-iY)/\sqrt{2}$. Since only the hole and electron components of opposite spins in the same configuration make a nonzero contribution to the exchange integral, the exchange integral can be written as (dropping the spin indices), \begin{equation} K_{\rm od} ={\mathcal{N}}^{-1}\bra{(a +\varepsilon_+ b+\chi_- Z) e}\mathcal{K}_{ex} \ket{(b +\varepsilon_+^{\ast} a +\chi_-^{\ast} Z) e} \, , \end{equation} with \begin{align} \varepsilon_+&=\frac{\varepsilon_{\alpha}+\sqrt{2}\varepsilon_{\beta}}{\sqrt{3}} =\frac{R^{\ast}}{2\sqrt{3}}\left(\frac{1}{Q}+\frac{9}{\Delta}\right) +\frac{3(S^\ast)^2}{2Q\Delta} \, , \nonumber \\ \chi_-&=\frac{-\sqrt{2}\chi_{\alpha}+\chi_{\beta}}{\sqrt{3}} =\frac{S^{\ast}}{\sqrt{3}Q}+\frac{3SR^\ast}{2Q\Delta} \, . \label{eq:epsilon} \end{align} To simplify the notation, we introduce the following parameters, \begin{align} \bra{a e} \mathcal{K}_{ex} \ket{a e} \equiv\bra{b e} \mathcal{K}_{ex} \ket{b e} &= K , \nonumber\\ \bra{a e} \mathcal{K}_{ex} \ket{b e} &= \kappa+i\delta , \nonumber \\ \bra{a e} \mathcal{K}_{ex} \ket{Z e} \equiv\bra{Z e} \mathcal{K}_{ex} \ket{b e} &= \mu+i\nu ,\nonumber\\ \bra{Z e} \mathcal{K}_{ex} \ket{Z e} &= K_{z}\label{eq:integral}\, . \end{align} Each parameter appearing in Eq.~(\ref{eq:integral}) can be expressed as exchange integrals over different orbital functions ($X$, $Y$, $Z$, $S$). For simplicity, we choose all orbital functions to be real; therefore the parameters introduced here are all real. The exchange integral over the heavy-hole components, 2$K$, is approximately the dark-bright splitting, and \[ \kappa=\frac{1}{2}(\bra{Xe} \mathcal{K}_{ex} \ket{Xe} - \bra{Ye} \mathcal{K}_{ex} \ket{Ye}), \] comes from the non-equivalence of the orbital wave functions $X$ and $Y$, whereas, \[ \delta=\frac{1}{2}(\bra{Xe} \mathcal{K}_{ex} \ket{Ye}+\bra{Ye} \mathcal{K}_{ex} \ket{Xe})\, , \] is due to the non-orthogonality between the orbital functions $X$ and $Y$.
$\mu$ and $\nu$ are due to the non-orthogonality of the orbital functions $X$ and $Y$ to $Z$, \begin{align} \mu&= \frac{1}{\sqrt{2}}\bra{Xe} \mathcal{K}_{ex} \ket{Ze} \, , \nonumber \\ \nu &= \frac{1}{\sqrt{2}}\bra{Ye} \mathcal{K}_{ex} \ket{Ze}\, . \end{align} With the above parameters, the whole exchange integral can be written as, \begin{equation} \begin{split} K_{\mathrm{od}}=& \frac{1}{\mathcal{N}} [ (\kappa+i\delta) +2\varepsilon_+ K +2\chi_- (\mu+i\nu) \\ &+2\varepsilon_+\chi_- (\mu-i\nu) +\varepsilon_+^2 (\kappa-i\delta) +\chi_-^2 K_{z} ]\, . \end{split} \label{eq:exchange} \end{equation} One can see clearly from Eq.~(\ref{eq:exchange}) the microscopic origins of the FSS in self-assembled QDs, apart from the dot shape asymmetry: the non-orthogonality and non-equivalence of the atomic orbitals, and the band mixing. (i) For ideal QDs with $D_{2d}$ and $C_{4v}$ symmetry (e.g., a pure InAs/GaAs quantum disk), $e_{xx}-e_{yy}$=0, and $e_{xy}$, $e_{yz}$, $e_{zx}$=0, there is no coupling between the HH and the LH and SO bands, and $\kappa$, $\delta$ also vanish. There would be no FSS. (ii) For QDs with $C_{2v}$ symmetry (e.g., a pure lens-shaped InAs/GaAs QD), the orbital functions $X$ and $Y$ are of mirror symmetry about the [110] plane; therefore, $\kappa=0$ and $\mu=\nu$. The strain distribution also obeys such mirror symmetry, i.e., $e_{xx}=e_{yy}$, $e_{xy}\neq 0$, $e_{zx}=e_{yz}\neq 0$; as a result, $\varepsilon_+=i|\varepsilon_+|$, $\chi_-=|\chi_-|e^{i\pi/4}$ [See Eq.~(\ref{eq:PQRS}) and Eq.~(\ref{eq:mixing})]. It is easy to verify that $K_{\mathrm{od}}$ is purely imaginary. (iii) For a real dot with $C_1$ symmetry, $K_{\mathrm{od}}$ has both a real part and an imaginary part. Because $|\varepsilon_+|, |\chi_-|\ll 1$, and $\kappa$, $\delta$, $\mu$, $\nu$ $\ll$ $K$, $K_{z}$, $K_{\mathrm{od}}$ can be further approximated as $K_{\mathrm{od}} \approx (\kappa+i\delta) + 2\varepsilon_+ K$.
We assume that the parameters introduced in Eq.~(\ref{eq:integral}), associated with the atomistic orbitals of the underlying dot materials, do not change under small external stress. Therefore the change of the exchange integral (away from the critical stress region \cite{gong11}) can be written as, \begin{equation} {dK_{\mathrm{od}} \over dp} \approx 2\frac{d\varepsilon_+}{dp}K\, . \end{equation} Using Eq.~\ref{eq:PQRS} and Eq.~\ref{eq:epsilon}, we have, \begin{equation} \frac{d \varepsilon_+}{d p} \approx \frac{1}{2\sqrt{3}}\left(\frac{1}{Q}+\frac{9}{\Delta}\right) \frac{dR^{\ast}}{dp}\, . \label{eq:dvarepsilon} \end{equation} Here, we neglect the change of $Q$ in the denominator. For stress applied along the [110] direction, \begin{equation} \frac{dR^{\ast}}{dp}=-i\frac{1}{4}d_{v}S_{44}. \label{eq:dR} \end{equation} Therefore, the change of $K_{\mathrm{od}}$ under stress along the [110] direction is, \begin{equation} {dK_{\mathrm{od}} \over dp} \approx -i \left(\frac{9Q+\Delta}{4\sqrt{3}Q\Delta}\right)d_{v}S_{44}K\, . \label{eq:dK} \end{equation} Using the parameters given in Table \ref{tab:param}, we get $d\varepsilon_+ / dp = i\, 2.455 \times 10^{-4} \mathrm{MPa}^{-1}$. In typical InAs/GaAs QDs, the exciton dark-bright splitting is approximately $2K\sim$ 200 $\mu$eV. Therefore we estimate that $d\Delta_{\mathrm{FSS}}/dp \sim$ 0.1 $\mu$eV/MPa, which is of the same order of magnitude as the experimental value\cite{Seidl2006} of $(0.34\pm0.08) \mu $eV/MPa. As shown in the appendix, the VB-CB coupling also contributes to the change of the FSS with a similar magnitude. Interestingly, as one can see from Eqs.~(\ref{eq:dvarepsilon})-(\ref{eq:dK}), stress along the [110] direction changes only the imaginary part of the exchange integral $K_{\mathrm{od}}$, which is just the $\alpha$ parameter defined in Ref. \onlinecite{gong11}.
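The order-of-magnitude estimate above can be reproduced numerically from the Table~\ref{tab:param} parameters. In the sketch below the HH-LH splitting $Q$ is an assumed placeholder value (its actual value follows from the built-in strain via Eq.~(\ref{eq:PQRS}), which is not reproduced here), so only the order of magnitude should be trusted:

```python
import math

# Table values (strained InAs): d_v in eV, S44 converted GPa^-1 -> MPa^-1
d_v = -6.49
S44 = 1.68e-2 * 1e-3
Delta = 0.33                       # spin-orbit splitting, eV
Q = 0.25                           # ASSUMED HH-LH splitting, eV

dR_dp = abs(d_v) * S44 / 4         # |dR*/dp|, Eq. (dR)
deps_dp = (1/Q + 9/Delta) * dR_dp / (2 * math.sqrt(3))   # Eq. (dvarepsilon)

K = 100.0                          # ueV, from dark-bright splitting 2K ~ 200 ueV
dFSS_dp = 2 * 2 * deps_dp * K      # FSS = 2|Kod|, dKod/dp ~ 2 (deps/dp) K
print(f"{deps_dp:.3e} /MPa, {dFSS_dp:.3f} ueV/MPa")   # ~2.5e-4, ~0.1
```

With these inputs the result lands near the quoted $d\varepsilon_+/dp \approx 2.455\times10^{-4}$ MPa$^{-1}$ and $d\Delta_{\mathrm{FSS}}/dp \approx 0.1$ $\mu$eV/MPa; the $9/\Delta$ term dominates, so the exact choice of $Q$ matters little.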
It is also easy to verify that stress along the [100] or [010] direction changes only the real part of the exchange integral $K_{\mathrm{od}}$, which is $\beta$ defined in Ref. \onlinecite{gong11}. (Note that the basis of the exciton wave function in Eq.~\ref{eq:exciton} differs from that used in Ref. \onlinecite{gong11}). For dots with $C_{2v}$ symmetry, in which $K_{\mathrm{od}}$ has only an imaginary part, stress along the [110] direction alone can tune the FSS to zero, whereas for dots with $C_1$ symmetry, in which $K_{\mathrm{od}}$ has both a real part and an imaginary part, the FSS cannot be tuned to zero under a single uniaxial stress. However, since the stresses along the [110] and [100] directions can manipulate the imaginary and real parts of $K_{\mathrm{od}}$ (almost) independently, the FSS can be tuned to nearly zero, as predicted in our previous work.~\cite{wang12} \begin{figure} \includegraphics[width=0.4\textwidth]{./Kexch.eps} \caption{\label{fig:Kex} (Color online) The off-diagonal electron-hole exchange interaction $|K_{\rm od}|$ in a lens-shaped InAs/GaAs QD ($D$= 25 nm, $h$= 3.5 nm) as a function of the uniaxial stress along the [110] direction. The black line is the total $|K_{\rm od}|$, whereas the red and blue lines are the contributions from VB mixing and VB-CB coupling, respectively. } \end{figure} Of course, the above Bir-Pikus model is highly simplified; real QDs are far more complicated. To test the validity of this model, we perform EPM calculations on realistic InAs/GaAs QDs under external stresses. The dots are assumed to be grown along the [001] direction and embedded in a $60\times 60\times 60$ GaAs supercell.
The atom positions in the supercell are optimized by the valence force field method.\cite{keating66,martin70} We solve for the single-particle states by expanding the wavefunctions with the strained linear combination of Bloch bands (SLCBB) method.\cite{wang99b} The exciton energies are calculated by the CI method,\cite{Franceschetti1999} in which the exciton wavefunctions are expanded in Slater determinants constructed from all confined electron and hole single-particle states. To compare with the Bir-Pikus model, we project the first hole single-particle wave function onto the $|j, j_z\rangle$ states at the $\Gamma$ point.\cite{wei12} The results are shown in Fig.~\ref{fig:mixing}, compared with those from the model calculations. The solid squares represent the amplitudes of the different components obtained from the empirical pseudopotential calculations by integrating the envelope functions of each component over the whole supercell. The blue lines are the results obtained from the Bir-Pikus model, using the deformation potentials for strained InAs given in Table \ref{tab:param}. The wave functions are normalized to 1 in both cases. We see that the EPM and Bir-Pikus model results are in reasonably good agreement with each other (note that the scale of the figure is extremely small). In the Bir-Pikus model, the HH$_+$ and HH$_-$ components do not mix with each other, whereas in the EPM calculations there is a small mixing of the HH$_+$ and HH$_-$ states, because the SLCBB method uses Bloch basis functions from many $k$-points around the $\Gamma$ point, whereas the Bir-Pikus model~\cite{pikus} uses only the Bloch basis functions at the $\Gamma$ point. Importantly, the highly simplified Bir-Pikus model gives slopes of the component amplitudes with respect to the external stress similar to those from the atomistic calculations. The quantitative differences between the two theories are due to the neglect of the nonuniform strain distribution, interfacial effects, etc., in the Bir-Pikus model.
We further compare the exchange integral $K_{\rm od}$ between the two theories. Figure~\ref{fig:Kex} depicts the EPM calculated exchange integral $K_{\rm od}$ in a pure lens-shaped InAs/GaAs QD with base $D$=25 nm and height $h$=3.5 nm as a function of the stress along the [110] direction. The total $K_{\rm od}$, shown as the black line, changes under external stress at a rate of $0.097$ $\mu$eV/MPa, and the corresponding change of the FSS under stress is 2$K_{\rm od}$=0.192 $\mu$eV/MPa. We can decompose the change of $K_{\rm od}$ into the contributions from valence-band mixing (red line) and VB-CB coupling (blue line). The EPM calculated contribution due to valence-band mixing is $0.072$ $\mu$eV/MPa, compared with 0.049 $\mu$eV/MPa from the 6$\times$6 Bir-Pikus model, whereas the EPM calculated contribution from VB-CB coupling is 0.025 $\mu$eV/MPa, compared with 0.036 $\mu$eV/MPa from the 8$\times$8 Bir-Pikus model discussed in the Appendix. It is quite surprising that the highly simplified Bir-Pikus model captures the change of the FSS under external strain rather well, especially since it is known that the $k$$\cdot$$p$ theory greatly underestimates the FSS in QDs. \cite{seguin05} The reason is as follows. The absolute values of the FSS are determined by the combined effects of the strain distributions, alloying, interfacial effects, etc., which cannot be captured well by the continuum theories. However, these atomistic effects do not change much if the applied external stress is not too large. On the other hand, the underlying crystal structure and electronic structure change coherently under the applied external stress, which explicitly breaks the $C_{4v}$ symmetry of the system. Therefore, we expect that the {\it change} of the FSS under stress can be captured rather well by the Bir-Pikus model, even though the absolute value of the FSS could be dramatically underestimated.
\begin{figure} \includegraphics[width=3.4in]{./dipole.eps} \caption{\label{fig:dipole} (Color online) Schematic polar diagrams of (a) a dot with $C_{2v}$ symmetry and (b) a dot with $C_{1}$ symmetry.} \end{figure} \section{Exciton polarization angle} \label{sec:polarization} We now discuss the polarization properties of the two bright excitons using the above Bir-Pikus model. The transition dipole matrix elements are given by, \begin{equation} \mathcal{M}=\bra{0}\hat{\bf n}\cdot{\bf r} \ket{\Psi_X}\, , \end{equation} where $\hat{\bf n}$ is the polarization vector and $\Psi_X$ is the exciton wave function, which is obtained by diagonalizing Eq.~\ref{eq:exciton}. The emission intensities of the two bright excitons, passing through a linear polarizer at an angle $\alpha$ with respect to the [100] axis, are given by,~\cite{Tonin2012} \begin{eqnarray} I_{B_1}(\alpha) &=& I_0[\cos(\theta+\alpha)+|\varepsilon_+|\cos(\theta+\phi_\varepsilon-\alpha)]^2 \, , \nonumber \\ I_{B_2}(\alpha) &=& I_0[\sin(\theta+\alpha)+|\varepsilon_+|\sin(\theta+\phi_\varepsilon-\alpha)]^2\, , \label{eq:polarization} \end{eqnarray} with $\phi_{\varepsilon}=\mathrm{arg}(\varepsilon_+)$ and 2$\theta=-\mathrm{arg}(K_{\rm od})$, as shown in Fig.~\ref{fig:dipole}. Eq.~\ref{eq:polarization} is similar to the one proposed by Tonin \emph{et al}.;\cite{Tonin2012} however, the interpretation of the equations is very different. In Ref.~\onlinecite{Tonin2012}, $\theta$ is the orientation of the main elongation axis with respect to [1$\bar{1}$0], determined by the growth process, which would not change under external strain. In our model, the polarization angle $\theta$ is determined by the argument of $K_{\rm od}$. When stress modifies the exchange integral $K_{\rm od}$ and its argument, the exciton states rotate in the $x$-$y$ plane accordingly, consistent with the effective model proposed by the authors of Ref.~\onlinecite{gong11}.
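A direct consequence of Eq.~(\ref{eq:polarization}) is that for $|\varepsilon_+|=0$ the two lines are orthogonal $\cos^2/\sin^2$ lobes with an $\alpha$-independent total intensity, while a finite band mixing modulates the total intensity as $1+|\varepsilon_+|^2+2|\varepsilon_+|\cos(2\alpha-\phi_\varepsilon)$. A quick numeric check with illustrative (assumed) values of $\theta$, $\phi_\varepsilon$, and $|\varepsilon_+|$, taking $I_0=1$:

```python
import numpy as np

theta, phi, eps = 0.3, np.pi / 2, 0.08          # illustrative values
alpha = np.linspace(0, np.pi, 2001)

I1 = (np.cos(theta + alpha) + eps * np.cos(theta + phi - alpha)) ** 2
I2 = (np.sin(theta + alpha) + eps * np.sin(theta + phi - alpha)) ** 2

# total intensity modulates with the polarizer angle when eps != 0
assert np.allclose(I1 + I2, 1 + eps**2 + 2 * eps * np.cos(2 * alpha - phi))

# for eps = 0 the lobes are orthogonal and the total intensity is constant
I1_0 = np.cos(theta + alpha) ** 2
I2_0 = np.sin(theta + alpha) ** 2
assert np.allclose(I1_0 + I2_0, 1.0)
```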
In the presence of band mixing $\varepsilon_+$, the angle between the two states in the $x$-$y$ plane deviates slightly from $\pi/2$. The polarization angle between the two bright exciton states in the $x$-$y$ plane is, \begin{eqnarray} \Delta\phi&=&|\phi_{B_2}-\phi_{B_1}|=\frac{\pi}{2} +\arctan\left[-\frac{2|\varepsilon_+|\sin(2\theta+\phi_{\varepsilon})} {1-|\varepsilon_+|^2} \right] \nonumber \\ &=&\frac{\pi}{2}-2\sin(2\theta+\phi_\varepsilon) |\varepsilon_+|+O(|\varepsilon_+|^3)\, . \end{eqnarray} For dots with $C_{2v}$ symmetry, we have $\phi_{\varepsilon}$= 2$\theta$=$\frac{\pi}{2}$ according to the analysis in Sec.~\ref{sec:FSS}, and therefore $\Delta\phi={\pi \over 2}$. The two emission lines are perpendicular to each other in the $x$-$y$ plane and aligned along the [110] and [1$\bar{1}$0] directions, respectively [See Figure \ref{fig:dipole}~(a)]. This remains true when the dots are under stress along the [110] direction. For dots with $C_{1}$ symmetry, we have $e_{xx}\neq e_{yy}$ and $e_{zx}\neq e_{yz}$. Therefore $\phi_{\varepsilon}$ and 2$\theta$ deviate from $\frac{\pi}{2}$. The polarization angles of the two emission lines are, \begin{equation} \begin{split} \phi_{B_1}&=\arctan\left[- \frac{\sin\theta + |\varepsilon_{+}|\sin(\theta+\phi_{\varepsilon})} {\cos\theta - |\varepsilon_{+}|\cos(\theta+\phi_{\varepsilon})} \right] \\ &=-\theta-\sin(2\theta+\phi_\varepsilon) |\varepsilon_{+}| +O(|\varepsilon_+|^2)\, , \end{split} \end{equation} and \begin{equation} \begin{split} \phi_{B_2}&=\arctan\left[ \frac{\cos\theta + |\varepsilon_{+}|\sin(\theta+\phi_{\varepsilon})} {\sin\theta - |\varepsilon_{+}|\cos(\theta+\phi_{\varepsilon})}\right] \\ &=\frac{\pi}{2}-\theta+\sin(2\theta+\phi_\varepsilon) |\varepsilon_{+}| +O(|\varepsilon_+|^2)\, , \end{split} \end{equation} with respect to the [100] direction [See Figure \ref{fig:dipole}~(b)].
In this case, $\Delta\phi \neq$ ${\pi}/{2}$, and the magnitude of the deviation is proportional to the band mixing parameter $\varepsilon_+$. \section{Summary} \label{sec:summary} We have derived analytically the exciton fine structure splitting under external stress in self-assembled InAs/GaAs quantum dots using the Bir-Pikus model. We find that the change of the FSS is mainly due to the strain-induced valence-band mixing and valence-conduction band coupling. The change of the polarization angle under strain is due to the change of the complex phase of the off-diagonal electron-hole exchange integral. The derived theory agrees well with the effective theory and the empirical pseudopotential calculations, and therefore bridges the gap between the two theories. \acknowledgments LH acknowledges the support from the Chinese National Fundamental Research Program 2011CB921200, and the National Natural Science Funds for Distinguished Young Scholars.
\section{Introduction} Unbiased bases are a fundamental concept in the theory of quantum kinematics, as they are intimately related to Bohr's Complementarity Principle \cite{englertbook}. In a finite-dimensional Hilbert space, two orthonormal bases are said to be unbiased if, and only if, the transition probability from any state of one basis to any state of the second basis is constant, i.e., independent of the chosen states. In a $d$-dimensional Hilbert space there are at most $d{+}1$ bases which are pairwise unbiased \cite{ivanovic81}. This set is called the set of mutually unbiased bases (MUB). MUB are studied in various contexts in quantum mechanics. They are used in thought experiments such as the so-called ``mean king problem''~\cite{engl01,arav03}, and they were shown to have interesting connections with symmetric informationally complete positive-operator-valued measures~\cite{woot06}, complex $t$-designs~\cite{klap05,gros07}, and the graph state formalism~\cite{spen13}. Beyond being of fundamental interest, MUB have practical importance as well. They play an important role in quantum error correction codes~\cite{gott96,cald97}, quantum cryptography for secure quantum key exchange~\cite{brub02,cerf02}, quantum state tomography~\cite{wootters89,adam10}, and more recently in the detection of quantum entanglement~\cite{speng12}. There has therefore been great effort and research interest in constructing complete sets of MUB. To date, numerous construction methods for complete sets of MUB are known in prime-power dimensions \cite{ivanovic81,wootters89,tal02,klap04,durt05,durt10,spen13}, and each method provides a useful and different insight into the problem of the existence of MUB. Alas, it is still not known whether complete sets of MUB exist in all finite dimensions. Strong numerical evidence suggests that they do not exist in dimensions which are not powers of primes \cite{grassl04,brie08,brie10,raynal11}.
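As a concrete illustration of the definition, the computational basis and the discrete Fourier basis are unbiased in every dimension $d$: every transition probability between the two bases equals $1/d$. A minimal numeric check (dimension 5 chosen arbitrarily):

```python
import numpy as np

d = 5
j, k = np.meshgrid(np.arange(d), np.arange(d))
F = np.exp(2j * np.pi * j * k / d) / np.sqrt(d)   # columns: Fourier basis

assert np.allclose(F.conj().T @ F, np.eye(d))     # orthonormal basis
assert np.allclose(np.abs(F) ** 2, 1 / d)         # |<j|f_k>|^2 = 1/d: unbiased
```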
In this work we propose a new approach to `unbiasedness', by generalizing this notion from bases to measurements. More specifically, we consider unbiased measurements in a $d$-dimensional Hilbert space, such that unbiased bases are a special, limiting case. In fact, this idea enables us to construct {\it all} possible complete sets of $d+1$ mutually unbiased measurements (MUM) in a $d$-dimensional Hilbert space. Therefore, if a complete set of MUB exists, it must be a particular case of this construction. Naturally, this generalization can be useful for studying questions relevant to MUB. One interesting question, for example, is how close $d+1$ MUM can get to a complete set of MUB in a given dimension, say, in dimension 6? Besides their relevance to MUB, MUM may be of interest in their own right. We show that they provide a linear inversion formula for quantum state tomography, and that they obey an entropic uncertainty relation, similar to MUB. In order to formulate the notion of MUM, we first briefly recall the measurement formalism in quantum mechanics. Generally, a measurement in quantum mechanics is described by a set of positive operators (sometimes called measures) $M_j\geq0$ that sum to the identity operator, ${\sum_j}{M_j}=1$. The probability of the $j$th outcome is given by Born's rule, ${\rm Tr}(M_j\rho)$, where $\rho$ is the state (statistical operator) of the quantum system. This representation of a measurement is therefore called a probability-operator measurement (POM), or equivalently a positive-operator-valued measure (POVM). Clearly a basis of a finite-dimensional Hilbert space defines both a set of states and a measurement. In particular, consider a set of $d+1$ MUB in a $d$-dimensional Hilbert space, $\{\ket{\psi_{n}^{(b)}}\}$, where $b=1,2,\ldots,d+1$ labels the basis while $n=1,2,\ldots, d$ labels the vectors within a basis.
The set of projectors on the $b$th basis vectors, ${\cal B}^{(b)}=\{B^{(b)}_n=\ket{\psi_{n}^{(b)}}\bra{\psi_{n}^{(b)}}|n=1,2,\ldots, d\}$, forms a measurement, $B^{(b)}_n\geq0$, and $\sum_nB^{(b)}_n=1$, $\forall b$, with the defining properties of MUB, \begin{align}\label{mubPOM} {\rm Tr} (B^{(b)}_n)=&1,\nonumber\\ {\rm Tr} (B^{(b)}_n B^{(b')}_{n'})=&\delta_{n,n'}\delta_{b,b'}+(1-\delta_{b,b'})\frac1{d}. \end{align} The $B^{(b)}_n$s can thus be regarded as quantum states and as measurement operators. Therefore, the unbiasedness of two bases can be re-stated as a property of two measurements as follows: in a $d$-dimensional Hilbert space, measurements of two bases are unbiased if and only if, when the system is in any state of one basis (measurement), the probability distribution obtained upon measuring it in the second measurement is completely random. The notion of unbiasedness of measurements of bases can therefore be generalized to general measurements.\\ \begin{definition} Two measurements on a $d$-dimensional Hilbert space, ${\cal P}^{(b)}=\{P^{(b)}_n|P^{(b)}_n\geq0,\;\sum_{n=1}^{d}P^{(b)}_n=1\}$, $b{=}1,2$, with $d$ elements each, are said to be \emph{mutually unbiased measurements} (MUM) if, and only if, \begin{align}\label{muPOM} {\rm Tr} (P^{(b)}_n)&=1,\nonumber\\ {\rm Tr} (P^{(b)}_n P^{(b')}_{n'})&=\delta_{n,n'}\delta_{b,b'}\kappa+(1-\delta_{n,n'})\delta_{b,b'}\frac{1-\kappa}{d-1}\nonumber\\&+(1-\delta_{b,b'})\frac1{d}. \end{align} \end{definition} According to this definition, each measurement operator $P^{(b)}_n$ can also be regarded as a quantum state, for which ${\rm Tr}(P^{(b)}_n P^{(b')}_{n'})=1/d$, $\forall b\neq b'$. Therefore, indeed, if the system is in any state, say of ${\cal P}^{(1)}$, then the probability distribution of measuring it in a second measurement, say ${\cal P}^{(2)}$, is completely random ($1/d$). The inner product of two elements within the same measurement depends on the {\it efficiency parameter} $\kappa$.
The value of this parameter determines how close the measurement operators are to rank-one projectors, i.e., to MUB. The latter are obtained for $\kappa=1$. The other extreme is $\kappa=\frac1{d}$, which corresponds to the trivial case where all the measurement operators are equal to the completely mixed state. We therefore conclude that the efficiency parameter satisfies \mbox{$\frac1{d}<{\kappa}\leq1$}~\cite{footnote2}. Before moving on, we note that, within this definition, the purity of the states (that is, of the measurement operators) $P^{(b)}_n$ is a constant equal to $\kappa$. One may consider more general definitions of MUM in which the purity of the states depends, for example, on the measurement label $b$ and on the outcome label $n$, $\kappa_n^{(b)}$. These definitions, however, result in less symmetric MUM; the symmetric MUM are the primary objective of our study. The definition above allows us to construct $d+1$ MUM in a $d$-dimensional Hilbert space. Consider an orthonormal basis, $F_k$, for the space of Hermitian, traceless operators acting on a $d$-dimensional Hilbert space. Such a basis is composed of $d^2-1$ operators, $F_k=F_k^\dagger$, satisfying ${\rm Tr}(F_k)=0$ and ${\rm Tr}(F_k F_l)=\delta_{k,l}$. We arrange the basis elements on a grid of $d-1$ columns and $d+1$ rows, \begin{align} \begin{matrix} F_1 & F_2 & \cdots &F_{d-1} \\ F_d & F_{d+1} & \cdots &F_{2(d-1)} \\ \vdots & \vdots & \vdots &\vdots \\ \cdots & \cdots & \cdots &F_{(d+1)(d-1)}. \end{matrix} \end{align} It is convenient to re-label the operators by a double index $(n,b)$ according to their (column, row) location, $n=1,2,\ldots,d-1$, $b=1,2,\ldots,d+1$, \begin{align}\label{Fs} \begin{matrix} F_{1,1} & F_{2,1} & \cdots &F_{d-1,1}\\ F_{1,2} & F_{2,2} & \cdots &F_{d-1,2}\\ \vdots & \vdots & \vdots &\vdots \\ F_{1,d+1} & F_{2,d+1} & \cdots &F_{d-1,d+1}.
\end{matrix} \end{align} Next, we define the $d(d+1)$ operators \begin{align}\label{Fbv} F^{(b)}_n=\begin{cases} F^{(b)}-(d+\sqrt{d}) F_{n,b}& \text{for } n=1,2,\ldots,d-1 \\ (1+\sqrt{d}) F^{(b)} & \text{for } n =d \end{cases} \end{align} with $b=1,2,\ldots,d+1$, and $F^{(b)}$ being the sum of the basis elements on the $b$th row, i.e., $F^{(b)}=\sum_{n=1}^{d-1}F_{n,b}$. This definition ensures the properties, \begin{align}\label{sumFbv} {\rm Tr}(F^{(b)}_n F^{(b)}_{n'})&=(1+\sqrt{d})^2[\delta_{nn'}(d-1)-(1-\delta_{nn'})],\nonumber\\ \sum_{n=1}^d F^{(b)}_n&=0, \end{align} which will be used later. We note in passing that, by construction, \begin{equation} {\rm Tr}(F^{(b)}_n F^{(b')}_{n'})=0, \forall b\neq b', \;\forall n,n'=1,2,\ldots,d. \end{equation} \begin{theorem}\label{tm:1} The operators, \begin{equation}\label{PinF} P^{(b)}_n=\frac1{d}+t F^{(b)}_n, \end{equation} with $b=1,2,\ldots,d+1$, $n=1,2,\ldots,d$, and the free parameter $t$ chosen such that $P^{(b)}_n\geq0$, form $d+1$ MUM in a $d$-dimensional Hilbert space, where $b$ labels the measurement and $n$ labels the outcome. Moreover, any complete set of MUM has this form. \end{theorem} \begin{proof} To show that the $P^{(b)}_n$s of Eq.~(\ref{PinF}) indeed form MUM, one can verify that they satisfy the definition of MUM, Eq.~(\ref{muPOM}), with efficiency parameter \begin{equation}\label{effpara} \kappa=\frac1{d}+t^2(1+\sqrt{d})^2(d-1). \end{equation} The fact that all MUM have the structure of Eq.~(\ref{PinF}) follows by assuming this structure and showing that the $F_{n,b}$s indeed form a basis for the space of traceless Hermitian operators. \end{proof} The efficiency parameter, given in Eq.~(\ref{effpara}), obtains its maximal value $\kappa=1$ for $t^2(1+\sqrt{d})^2=\frac1{d}$. In this case the MUM are actually MUB. Clearly, the efficiency parameter is determined by the free parameter $t$, which in turn is set such that $P^{(b)}_n\geq0$.
This requirement implies that the range of $t$ is \begin{equation}\label{tIneq} -\frac1{d}\frac1{\lambda_{\rm max}}\leq t\leq\frac1{d}\frac1{|\lambda_{\rm min}|} \end{equation} where $\lambda_{\rm min}=\min_b\lambda_{\rm min}^{(b)}$, $\lambda_{\rm max}=\max_b\lambda_{\rm max}^{(b)}$, and $\lambda_{\rm min}^{(b)}$ and $\lambda_{\rm max}^{(b)}$ are the smallest (negative) and largest (positive) eigenvalues of the operators $F^{(b)}_n$ of Eq.~(\ref{Fbv}) with $n=1,2,\ldots,d$, respectively. [Since ${\rm Tr} (F^{(b)}_n)=0$, these operators must have both negative and positive eigenvalues.] The larger the magnitude of $t$, the larger the efficiency parameter. Therefore we define the optimal $t$ to be \begin{equation}\label{tbopt} t_{\rm opt}=% \begin{cases} \frac1{d}\frac1{|\lambda_{\rm min}|}& \text{for } |\lambda_{\rm min}|<\lambda_{\rm max}, \\ -\frac1{d}\frac1{\lambda_{\rm max}}& \text{for } \lambda_{\rm max}<|\lambda_{\rm min}|. \end{cases} \end{equation} The optimal efficiency parameter of a MUM is given by $\kappa_{\rm opt}=\frac1{d}+t_{\rm opt}^2(1+\sqrt{d})^2(d-1)$. This choice of $t_{\rm opt}$ ensures $P^{(b)}_n\geq0$ $\forall b$ and $n$. The value of $t_{\rm opt}$ depends on the particular choice of the operator basis $F^{(b)}_n$. For a given choice, $\kappa_{\rm opt}$ sets an upper bound on how close we can get to MUB. The question of whether a complete set of MUB exists in any finite dimension is then translated into the question of whether there exists an operator basis for which $\kappa_{\rm opt}=1$. In the Supplementary Information section we show that, for the case where the traceless Hermitian operator basis is the generalized Gell-Mann operator basis, the optimal efficiency parameter can be calculated analytically, $\kappa_{\rm opt}=\frac1{d}+\frac2{d^2}$.
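Theorem~\ref{tm:1} is constructive and easy to check numerically. The sketch below builds $d+1$ MUM from one particular arrangement of the generalized Gell-Mann basis (the arrangement, and hence the resulting $\kappa_{\rm opt}$, is an arbitrary choice made here for illustration) and verifies the defining relations of Eq.~(\ref{muPOM}):

```python
import numpy as np

def gellmann_basis(d):
    """Traceless Hermitian operators with Tr(Fk Fl) = delta_kl."""
    ops = []
    for j in range(d):
        for k in range(j + 1, d):
            S = np.zeros((d, d), complex); S[j, k] = S[k, j] = 1
            ops.append(S / np.sqrt(2))
            A = np.zeros((d, d), complex); A[j, k] = -1j; A[k, j] = 1j
            ops.append(A / np.sqrt(2))
    for l in range(1, d):
        D = np.diag([1.0] * l + [-float(l)] + [0.0] * (d - l - 1))
        ops.append(D.astype(complex) / np.sqrt(l * (l + 1)))
    return ops

def build_mum(d):
    # arrange the d^2-1 basis operators on a (d+1) x (d-1) grid
    F = np.array(gellmann_basis(d)).reshape(d + 1, d - 1, d, d)
    Fb = F.sum(axis=1)                         # row sums F^{(b)}
    Fbn = np.empty((d + 1, d, d, d), complex)  # operators F^{(b)}_n
    for b in range(d + 1):
        for n in range(d - 1):
            Fbn[b, n] = Fb[b] - (d + np.sqrt(d)) * F[b, n]
        Fbn[b, d - 1] = (1 + np.sqrt(d)) * Fb[b]
    # optimal t from the extreme eigenvalues
    evs = np.concatenate([np.linalg.eigvalsh(Fbn[b, n])
                          for b in range(d + 1) for n in range(d)])
    lmin, lmax = evs.min(), evs.max()
    t = 1 / (d * abs(lmin)) if abs(lmin) < lmax else -1 / (d * lmax)
    P = [[np.eye(d) / d + t * Fbn[b, n] for n in range(d)]
         for b in range(d + 1)]
    kappa = 1 / d + t**2 * (1 + np.sqrt(d))**2 * (d - 1)
    return P, kappa

d = 3
P, kappa = build_mum(d)
for b in range(d + 1):
    assert np.allclose(sum(P[b]), np.eye(d))              # sums to identity
    for n in range(d):
        assert abs(np.trace(P[b][n]).real - 1) < 1e-9     # unit trace
        assert np.linalg.eigvalsh(P[b][n]).min() > -1e-9  # positivity
for b in range(d + 1):
    for bp in range(d + 1):
        for n in range(d):
            for m in range(d):
                tr = np.trace(P[b][n] @ P[bp][m]).real
                want = (1 / d if b != bp else
                        kappa if n == m else (1 - kappa) / (d - 1))
                assert abs(tr - want) < 1e-8              # MUM relations
```

The trace relations hold for any valid arrangement and any $t$ in the allowed range; only the value of $\kappa$ depends on these choices.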
In a sense, the Gell-Mann operator basis is not a good choice, since the optimal efficiency parameter is very close to its minimal value $\frac1{d}$ (except for $d=2$, for which it equals 1). Of course, one may resort to numerical methods to estimate a better value of $\kappa_{\rm opt}$ in a given dimension. The MUM share various properties with MUB. For example, the set of the $d+1$ MUM of Eq.~(\ref{muPOM}) is informationally complete, that is, any state of the system is completely determined by the probabilities of the MUM outcomes, $p^{(b)}_n={\rm Tr}(P^{(b)}_n\rho)$. In fact, much like MUB, the MUM provide a linear inversion relation, \begin{equation}\label{rho in terms of R} \rho=\sum_{n,b}p^{(b)}_nR^{(b)}_n, \end{equation} where the reconstruction operators $R^{(b)}_n$ associated with the MUM are linear functions of the measurement operators $P^{(b)}_n$, \begin{equation} R^{(b)}_n=\frac{d-1}{\kappa d-1}\left(P^{(b)}_n-\frac{d-\kappa}{d^2-1}\right). \end{equation} The reconstruction operators satisfy their defining property, \begin{equation} {\rm Tr}(\rho P^{(b)}_{n})=\sum_{n',b'}p^{(b')}_{n'}{\rm Tr}(R^{(b')}_{n'} P^{(b)}_{n})=p^{(b)}_n. \end{equation} In the Supplementary Information section we consider a measurement related to the MUM with $d^2$ outcomes which also provides a linear inversion formula for quantum state tomography. Beyond the mathematical generalization of MUB, the MUM have a physical significance of their own. As discussed above, the MUM elements, $P^{(b)}_n$ of Eq.~(\ref{PinF}), can be regarded as quantum states, and the outcome of measuring a state of one measurement in any other MUM is completely random. We now formalize this aspect of complementarity of the MUM by showing that they satisfy a non-trivial entropic uncertainty relation similar to the one satisfied by MUB.
When the latter exist, they satisfy the strong entropic uncertainty relation \cite{ivanovic92,wehner10}, \begin{equation}\label{mubIneq} \frac1{d+1}\sum_{b=1}^{d+1}H({\cal B}^{(b)},\rho)\geq\log\frac{d+1}{2}, \end{equation} where $H({\cal B}^{(b)},\rho)=-\sum_{n=1}^{d}p^{(b)}_n\log p^{(b)}_n$ is the Shannon entropy of the probability distribution, $p^{(b)}_n=\matele{\psi^{(b)}_n}{\rho}{\psi^{(b)}_n}$, associated with measuring the system $\rho$ in the MUB measurement ${\cal B}^{(b)}=\{ \ket{\psi^{(b)}_n}\bra{\psi^{(b)}_n}|n=1,2,\ldots,d\}$. \begin{theorem}\label{tm:2} The complete set of $d+1$ MUM $\{{\cal P}^{(b)}\}$ of Eq.~(\ref{PinF}) in a $d$-dimensional Hilbert space, satisfies the entropic uncertainty relation, \begin{equation}\label{thm2} \frac1{d+1}\sum_{b=1}^{d+1}H({\cal P}^{(b)},\rho)\geq\log\frac{d+1}{1+\kappa}, \end{equation} where $\kappa$ is the efficiency parameter. \end{theorem} The proof is given in the Supplementary Information section. Note that the inequality of Eq.~(\ref{mubIneq}) obeyed by the MUB is a particular instance of Eq.~(\ref{thm2}) for the maximal value of the efficiency parameter $\kappa$, $\kappa=1$. The MUM with an optimal efficiency parameter, $\kappa_{\rm opt}$, satisfy Eq.~(\ref{thm2}) with $\kappa=\kappa_{\rm opt}$. The inequality $\log\frac{d+1}{1+\kappa}\geq\log\frac{d+1}{2}$ for $\kappa\in(\frac1{d},1]$ apparently indicates that the smaller the $\kappa$, the stronger the inequality. This counterintuitive result can be resolved by noting that for the minimum value of $\kappa=\frac1{d}$, the measurement operators are all equal to the completely mixed state, and as such do not provide any information about the state of the system; hence the uncertainty is the largest. This implies that the uncertainty of each measurement must be taken into account before concluding about the mutual unbiasedness of the MUM.
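For $d=2$, where the three Pauli eigenbases form a complete set of MUB (the $\kappa=1$ case), the bound of Eq.~(\ref{mubIneq}) is easy to verify numerically; a minimal sketch for the pure state $\ket{0}\bra{0}$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
rho = np.array([[1, 0], [0, 0]], complex)       # pure state |0><0|

def shannon(p):
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))        # natural log

H = []
for s in (sx, sy, sz):                          # the three qubit MUB
    _, V = np.linalg.eigh(s)
    p = np.array([np.real(V[:, n].conj() @ rho @ V[:, n]) for n in range(2)])
    H.append(shannon(p))

avg = float(np.mean(H))                         # (0 + ln2 + ln2)/3
assert avg >= np.log(3 / 2)                     # the d = 2 bound log(3/2)
```

The $\sigma_z$ basis contributes zero entropy (the state is one of its elements) while the two complementary bases each contribute $\ln 2$, so the average still exceeds the bound.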
Indeed, there is a distinction between the uncertainty relation of Eq.~(\ref{mubIneq}) obeyed by MUB and the uncertainty relation of Eq.~(\ref{thm2}) obeyed by general MUM. This distinction originates in the difference between a measurement of a basis and a general measurement, such as a MUM. To be more precise, let us define a state-dependent uncertainty associated with a measurement ${\cal M}$ as, say, the Shannon entropy of its probability distribution, $H({\cal M},\rho)$. Now suppose that ${\cal M}$ is a measurement of a basis. In this case there always exist states (the basis states) for which the state-dependent uncertainty is zero. It is therefore reasonable to define a state-independent uncertainty associated with a measurement as the minimum of the state-dependent measurement uncertainty, \begin{equation} \Delta{\cal M}=\min_\rho H({\cal M},\rho). \end{equation} A measurement has, in general, an uncertainty $\Delta{\cal M}$ larger than zero. Going back to the uncertainty relations, we conclude that since the MUB are measurements with zero uncertainty each, the entropic uncertainty relation of Eq.~(\ref{mubIneq}) captures the complementarity, or unbiasedness, aspect of the measurements. In contrast, each one of the MUM is, in general, uncertain. Therefore, though Eq.~(\ref{thm2}) provides an entropic uncertainty relation, it includes the `self' uncertainty of each measurement. To account for the unbiasedness feature of the MUM we must subtract the uncertainty of each measurement, \begin{equation} \upsilon=\frac1{d+1}\sum_{b=1}^{d+1}[H({\cal P}^{(b)},\rho)-\Delta{\cal P}^{(b)}], \end{equation} so that the proper complementarity entropic uncertainty relation reads \begin{equation} \upsilon\geq\log\frac{d+1}{1+\kappa}-\frac1{d+1}\sum_{b=1}^{d+1}\Delta{\cal P}^{(b)}. \end{equation} Techniques such as those introduced in~\cite{friedland13} may be used to calculate $\Delta{\cal P}^{(b)}$.
We have numerically searched for $\Delta{\cal P}^{(b)}$ for the case where the MUM are constructed from the generalized Gell-Mann operator basis, as given in the Supplementary Information section. Searching over $10^6$ random states of arbitrary rank in a six-dimensional Hilbert space, we found that the MUM satisfy a non-trivial complementarity relation, as described in Fig.~\ref{fig:d6} and its caption. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{dim6gm.eps} \caption{Measure of unbiasedness $\upsilon$ as a function of $\kappa$ for MUM on a six-dimensional Hilbert space constructed from the Gell-Mann operator basis described in the Supplementary Information section. The range of $\kappa$ plotted is $\frac{1}{d}<\kappa\leq\kappa_{\rm opt}=\frac{1}{d}+\frac{2}{d^2}$. For a given value of $\kappa$, the complementarity of the MUM is larger than $\upsilon$. } \label{fig:d6} \end{figure}% To conclude, in this work we generalized the notion of unbiasedness of bases, MUB, to general measurements, MUM. We constructed the complete set, i.e., $d+1$ MUM in a $d$-dimensional Hilbert space, of which MUB, when they exist, are a particular case. This construction can be used to study, either analytically or numerically, the problem of the existence of MUB in dimensions that are not powers of a prime, and may help to obtain bounds on how close one can get to MUB in these cases. We showed that this mathematical generalization captures the physical essence of unbiased bases in two principal aspects. First, the probability of obtaining any outcome, upon preparing the system in an element of one measurement and measuring it with another, is independent of the prepared state and of the measurement outcome. Second, a complete set of MUM satisfies a non-trivial entropic uncertainty relation, similar to that of MUB.
Moreover, this relation answers an interesting question regarding the existence of an entropic uncertainty relation among $d+1$ measurement settings in a finite $d$-dimensional Hilbert space~\cite{wehner10}. \emph{Acknowledgments:---} A.K.'s research is supported by NSF Grant PHY-1212445. G.G.'s research is supported by NSERC.
\section{Introduction} \label{sec:intro} \input{sections/1_intro} \section{\procsearch{}} \label{sec:search} \input{sections/2_search} \section{Empirical Comparison} \label{sec:compare} \input{sections/3_compare} \section{Related Work} \label{sec:related} \input{sections/4_related} \section{Conclusion} \label{sec:conclu} \input{sections/5_conclu} \section*{Ethical Considerations and Broader Impacts} \label{sec:ethics} \input{sections/7_broader_impacts} \section*{Acknowledgements} \label{sec:acknow} \input{sections/6_acknow} \subsection{System Overview} \synkb{} is an open-source system that allows chemists to perform structured queries over large corpora of synthesis procedures. In this section, we present each component of \synkb{}, as illustrated in Figure \ref{fig:overview}. Our corpus collection is first presented in \S \ref{sec:data_collect}. Section \ref{sec:data_process} describes how a corpus of six million procedures is annotated with sentence-level action graphs, in addition to protocol-level slots relevant to chemical reactions, including starting materials, solvents, reaction products, yields, etc. After automatically annotating and indexing, we experiment with the semantic search capabilities enabled by \synkb{} in \S \ref{sec:sys_feature}. \subsection{Corpus Collection} \label{sec:data_collect} We extract structured representations of synthetic protocols from a corpus of chemical patents \cite{bai-etal-2021-pre}, which includes over six million chemical synthesis procedures extracted from around 300k U.S. and European patents (written in English). The U.S. portion of this corpus comes from an open-source corpus of chemical synthesis procedures \citep{uspto_lowe}, which covers 2.4 million synthetic procedures extracted from U.S. patents (USPTO\footnote{ \url{https://www.uspto.gov/learning-and-resources/bulk-data-products}}, 1976-2016). For the European portion, we apply the \citet{uspto_lowe} reaction identification pipeline to European patents. 
Specifically, we download patents from EPO\footnote{ \url{https://www.epo.org/searching-for-patents/data/bulk-data-sets.html}} (1978-2020) as XML files and select patents containing the IPC (International Patent Classification) code `C07' for processing, as these fall under the category of organic chemistry. Next, the synthesis procedure identifier developed by \citet{Lowe2012ExtractionOC}, a trained Naive Bayes classifier, is applied to the {\em Description} section of all selected patents. As a result, we obtain another 3.7 million procedures from European patents. \subsection{Extracting Reaction Details from Synthetic Procedures} \label{sec:data_process} To facilitate semantic search, we automatically annotate the corpus of 6 million synthetic procedures described above with semantic action graphs \cite{kulkarni2018annotated} in addition to chemical reaction slots \cite{Nguyen2020ChEMUNE}, using Transformer models that are pre-trained on a large corpus of scientific procedures \cite{bai-etal-2021-pre}. \paragraph{Shallow Semantic Parsing.} We first perform sentence-level annotation, where each step in the procedure is annotated with a semantic graph \citep{tamari-etal-2021-process}. Nodes in the graph are experimental operations and their typed arguments, whereas labeled edges specify relations between the nodes (see the example shallow semantic parse in Figure \ref{fig:overview}). Here we use the \chemsyn{} framework \citep{bai-etal-2021-pre}, which covers 24 types of nodes (such as \textit{Action}, \textit{Reagent}, \textit{Amount}, \textit{Equipment}, etc.) and 17 edge types (e.g., \textit{Acts-on} and \textit{Measure}). With these annotated semantic graphs, users can search for operation-level information, for example, the amount of \texttt{DMF} when used as a solvent to dissolve \texttt{HATU} (this will be further discussed in \S \ref{sec:compare}).
Following \citet{tamari-etal-2021-process}, we split semantic graph annotation into two sub-tasks: Mention Identification (MI) for node prediction and Argument Role Labeling (ARL) for edge prediction. We use the same fine-tuning architectures as in \citet{tamari-etal-2021-process}. Models are fine-tuned on the \chemsyn{} corpus, which consists of 992 chemical synthesis procedures extracted from patents; the resulting performance (averaged across five random seeds) is shown in Table \ref{tab:procbert_results}. We select the best checkpoint among the five random seeds based on Dev set performance, and use it for inference on our 6 million synthetic procedures. \paragraph{Slot Filling.} In the second task, we annotate procedures from a protocol perspective, i.e., identifying key entities playing certain roles in a protocol, which can be queried in a slot-based search. We use the \chemu{} training corpus proposed in \citet{Nguyen2020ChEMUNE}. This dataset includes 10 pre-defined slot types concerning chemical compounds and related entities in chemical synthesis processes, such as \textit{Starting Material}, \textit{Solvent}, and \textit{Product}. Similar to Mention Identification, we treat Slot Filling as a sequence tagging problem. However, the input in Slot Filling is the entire protocol, rather than a single sentence as in Mention Identification. We fine-tune models on the \chemu{} dataset (see Table \ref{tab:procbert_results} for results), and then run inference on the chemical patent corpus using the learned model. \paragraph{ProcBERT.} We use \procbert{} \citep{bai-etal-2021-pre}, a BERT-based model that is pre-trained on in-domain data (scientific protocols), as the backbone for all of our models, and develop task-specific fine-tuning architectures on top of it. The comparison between \procbert{} and other pre-trained models is presented in Table \ref{tab:procbert_results}.
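Both annotation tasks above are cast as BIO sequence tagging. A minimal, illustrative decoder (not the actual ProcBERT pipeline) showing how BIO tags over a procedure sentence are turned into typed spans; the example sentence and its tags are invented for this sketch:

```python
# Illustrative only: a minimal BIO decoder turning per-token tags into typed
# spans, as used (conceptually) by both Mention Identification and Slot Filling.
def bio_to_spans(tokens, tags):
    spans, start, label = [], None, None
    for i, tag in enumerate(tags + ["O"]):          # sentinel flushes last span
        if tag.startswith("B-") or tag == "O":
            if start is not None:                   # close the open span
                spans.append((label, " ".join(tokens[start:i])))
            start, label = (i, tag[2:]) if tag.startswith("B-") else (None, None)
        # "I-" tags simply extend the currently open span
    return spans

tokens = "Dissolve HATU in 5 ml DMF".split()
tags = ["B-Action", "B-Reagent", "O", "B-Amount", "I-Amount", "B-Solvent"]
print(bio_to_spans(tokens, tags))
# -> [('Action', 'Dissolve'), ('Reagent', 'HATU'), ('Amount', '5 ml'), ('Solvent', 'DMF')]
```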
Because \procbert{} is pre-trained using in-domain data, we find that it outperforms both BERT$_\text{large}$ \citep{devlin-etal-2019-bert} and SciBERT \citep{Beltagy2019SciBERT} on all three tasks. \begin{table}[h!] \small \begin{center} \scalebox{0.72}{ \begin{tabular}{lcccc} \toprule \multirow{2}{*}{\textbf{Annotation Task}} & \multirow{2}{*}{\textbf{Dataset}} & \multicolumn{3}{c}{\textbf{Pre-trained Model}} \\ & & \textsc{BERT}\textsubscript{large} & SciBERT & \procbert{} \\ \midrule Mention Identification & \multirow{2}{*}{\chemsyn{}} & 95.26\textsubscript{0.1} & 95.82\textsubscript{0.2} & \textbf{95.97}\textsubscript{0.2} \\ Argument Role Labeling & & 92.87\textsubscript{0.5} & 93.27\textsubscript{0.2} & \textbf{93.57}\textsubscript{0.2} \\ \midrule Slot Filling & \chemu{} & 95.10\textsubscript{0.2} & 95.63\textsubscript{0.1} & \textbf{96.19}\textsubscript{0.1} \\ \bottomrule \end{tabular} } \end{center} \caption{\label{tab:procbert_results} Test set F\textsubscript{1} scores of fine-tuned models for the three annotation tasks. These numbers, averaged across five random seeds with standard deviations as subscripts, are taken from our previous work \citet{bai-etal-2021-pre}. Models using \procbert{} for contextual embeddings perform the best on all three tasks and are used for the automatic annotation of six million synthesis procedures to construct \synkb{}. } \end{table} \begin{table}[h!] \small \begin{center} \scalebox{0.78}{ \begin{tabular}{lccc} \toprule & \textbf{\synkb{} (ours)} & \textbf{\uspto{}} & \textbf{\reaxys{}} \\ \midrule License & Open source & Open source & Subscription \\ \# Procedures (mill.) & 6 & 2.4 & 57 \\ \# Entity Types & 24 & 8 & 10 \\ \# Relation Types & 17 & - & - \\ Annotation & Automatic & Automatic & Manual \\ \bottomrule \end{tabular} } \end{center} \caption{\label{tab:database_comp} Comparison between our \synkb{} and two existing databases.
Our \synkb{} provides more fine-grained annotations (more entity types and unique relation annotations) than the other two systems and covers more procedures than \uspto{}, a database built using the largest open-source synthesis procedure corpus \citep{uspto_lowe}. } \end{table} \subsection{Semantic Search} \label{sec:sys_feature} \synkb{} offers search modalities specific to each of these two forms of annotation, i.e., semantic action graphs and chemical reaction slots, along with features designed to support practical use. The first type of query supported by \synkb{} is \textbf{semantic graph search}, which allows users to search for synthesis procedures based on the semantic parse of the constituent operations. We adapt the graph query formalism proposed originally for syntactic dependencies in \citet{valenzuela-escarcega-etal-2020-odinson}.\footnote{We refer readers to the \href{https://gh.lum.ai/odinson/queries.html}{tutorial} of Odinson query language for more details of this graph query formalism.} Formally, the input query $G = (V, E)$ is a labeled directed graph. Each node $v_i \in V$ is specified as a set of constraints on matching entities (a single or multi-token span). For example, users can specify the node as \texttt{DMF} or \texttt{[word=DMF]}, which triggers an exact match on entity mentions containing the word ``DMF''. They can also constrain the entity type of the node using the expression \texttt{[entity=Type]}.\footnote{We store entity labels with the BIO tagging scheme, so users can match a single token entity with the expression \texttt{[entity=B-Type]} and a multi-token entity with the expression \texttt{[entity=B-Type][entity=I-Type]*}.} Moreover, nodes can be named \texttt{captures} when surrounded with \texttt{(?<name>...)}, e.g., the query \texttt{(?<solvent> DMF)} captures \texttt{DMF} as the \texttt{solvent}. As for the edge $e = (v_i, v_j, l) \in E$, we need to specify the direction and the semantic relation. 
Consider the query \texttt{(?<solvent> DMF) >measure (?<amount> 1 ml)}: it represents a semantic graph containing two entity nodes, captured as \texttt{solvent} and \texttt{amount}, and an edge specifying the \texttt{measure} relation and its direction (from \texttt{solvent} to \texttt{amount}). In addition, \synkb{} supports \textbf{slot-based search}, which presents a structured search interface with entries corresponding to \chemu{} slots. A keyword entered into any entry restricts the retrieved set to procedures where the extracted slot contains the indicated keyword. Like the graph search, this returns a set of tuples whose elements are named with the matching slots and contain the matching entity strings. The special token \texttt{``?''} can be used to match \emph{any} slot value. As for the implementation, the semantic graph search module is powered by Odinson \citep{valenzuela-escarcega-etal-2020-odinson}, an open-source Lucene-based query engine. Odinson pre-indexes the annotated corpus by generating an inverted index for each procedure. Given an input query, Odinson performs a two-step matching process: it first checks the node constraints via the inverted index; only candidates that pass this step have their semantic relations verified in the second step. This two-step matching process improves Odinson's speed, and thus enables interactive querying. The slot-based search is supported by Elasticsearch,\footnote{\url{https://www.elastic.co/elasticsearch/}} with the exception that, when users perform both types of search at the same time, we use the metadata search feature of Odinson for slot filters (we store slot values as metadata) to improve the system's response speed.
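The two-step matching described above can be sketched as follows. This uses a simplified toy data model of our own, not the actual Odinson implementation: node constraints are first checked against an inverted index, and semantic relations are verified only for the surviving candidates.

```python
# Toy sketch of two-step graph-query matching (not the real Odinson engine).
procedures = {
    1: {"entities": {"DMF": "Solvent", "1 ml": "Amount"},
        "relations": [("DMF", "measure", "1 ml")]},
    2: {"entities": {"DMF": "Reagent"}, "relations": []},
}

# Build an inverted index: entity mention -> set of procedure ids.
inverted = {}
for pid, proc in procedures.items():
    for mention in proc["entities"]:
        inverted.setdefault(mention, set()).add(pid)

def graph_query(node_a, relation, node_b):
    # Step 1: node constraints via the inverted index (cheap set intersection).
    candidates = inverted.get(node_a, set()) & inverted.get(node_b, set())
    # Step 2: verify the semantic relation only on the surviving candidates.
    return [pid for pid in sorted(candidates)
            if (node_a, relation, node_b) in procedures[pid]["relations"]]

print(graph_query("DMF", "measure", "1 ml"))  # -> [1]
```

The point of the design is that step 1 is an index lookup and prunes most of the corpus, so the comparatively expensive relation check in step 2 touches only a few candidates.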
\subsection{Slot-based Search Evaluation} \label{sec:slot_eval} We benchmark the slot-based search module of \synkb{} against \reaxys{}, one of the leading proprietary chemistry databases, and \uspto{}, an automatically extracted database built using a large open-source synthesis procedure corpus \citep{uspto_lowe}. Below, we first introduce these two databases briefly, and then evaluate the results of all three systems on the chemist-proposed questions. \subsubsection{Chemistry Databases} The first database we compare with is \textbf{\reaxys{}}, a web-based commercial chemistry database, which contains comprehensive chemistry data, including chemical properties, compound structures, etc. What particularly interests us in \reaxys{} is that it contains expert-curated reaction procedures collected from extensive published literature such as chemistry-related patents and periodicals.\footnote{\url{https://www.elsevier.com/solutions/reaxys/features-and-capabilities/content}} Also, key experimental entities in those reaction procedures, like participating reagents and reaction temperature, are specified. Thus, similar to our slot-based search, \reaxys{} allows users to search for reaction procedure information by applying text filters. Users can use its \textit{Query Builder} module to specify multiple chemical reaction-specific filters, and then \reaxys{} returns all matched reaction procedures along with identified entities in those procedures, which are available for download. Apart from \reaxys{}, we also build a database using \textbf{\uspto{}} \citep{uspto_lowe}, the largest available open-source chemical synthesis procedure corpus as introduced in \S \ref{sec:data_collect}, for comparison. 
Similar to our \synkb{}, this corpus includes automatic annotations of experimental entities on its 2.4 million reaction procedures.\footnote{\url{https://www.nextmovesoftware.com/leadmine.html}} However, our \synkb{} provides more fine-grained and comprehensive entity annotations (see Table \ref{tab:database_comp} for statistics on the three databases), and also annotates the relations between extracted entities, which constitute the semantic graphs (\S\ref{sec:data_process}) enabling operation-specific semantic graph search. As for the implementation, we load \uspto{}'s entity annotations into Elasticsearch, so this customized database can be used in the same way as the slot-based search module of our \synkb{}. \subsubsection{Comparison with Examples} We now compare the three systems on six questions proposed by chemists (Q1-Q6), as these questions only require annotations of experimental entities and thus can be answered by all three systems. For example, Q1 (``What solvents are used in reactions involving triphosgene?'') can be answered by the \synkb{} query \texttt{\small\{"reagent":"triphosgene", "solvent":"?"\}}, as \textit{reagent} and \textit{solvent} are query-able ChEMU slots. Similarly, in \reaxys{}, the experimental entities are entered into the corresponding text filters. We evaluate the output of each system from two perspectives: 1) recall, measured by the number of returned procedures containing valid answers and the number of distinct answer slots or captures in these procedures; and 2) precision, the proportion of correct answers among all predicted answers. In cases where the number of answers exceeds 50, we sample 50 answers from the full set to estimate precision. The search queries and the performance of the three systems on each question are shown in Table \ref{tab:results}.
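The precision estimate described above (sampling 50 answers when a query returns more) can be sketched as follows; the answer strings here are invented for illustration:

```python
import random

# Sketch of the evaluation protocol: precision is computed on all answers when
# there are at most k of them, and estimated on a random sample of k otherwise.
def estimate_precision(answers, is_correct, k=50, seed=0):
    sample = answers if len(answers) <= k else random.Random(seed).sample(answers, k)
    return sum(is_correct(a) for a in sample) / len(sample)

# Hypothetical retrieved answers: 90 valid solvent mentions, 10 spurious ones.
answers = ["DCM"] * 90 + ["stirred"] * 10
precision = estimate_precision(answers, lambda a: a == "DCM")
print(len(answers), precision)  # recall count and sampled precision estimate
```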
We can see that \synkb{} consistently retrieves a larger number of relevant procedures and answers than \reaxys{} (on 5 out of 6 questions) while maintaining high precision. \uspto{}, which uses a rule-based annotation model, shows competitive precision but trails our \synkb{} in recall on all 6 questions. This comparison clearly shows the strength of our system: by leveraging state-of-the-art NLP for chemical synthesis procedures \cite{bai-etal-2021-pre}, we can provide chemists with abundant, non-proprietary information at high precision. Furthermore, we plot a Venn diagram (Figure \ref{fig:unique_answer}) over the retrieved answers, which shows the percentage of unique and shared answers for each system out of all retrieved answers (macro-averaged across the six questions). Interestingly, only 18.1\% of retrieved answers are shared among all three systems, and both our \synkb{} and \reaxys{} contain a large number of unique answers, which account for 31.5\% and 17.4\% of retrieved answers, respectively. This divergence in answer distributions suggests that our open-source \synkb{} is a good complement to proprietary chemistry databases like \reaxys{}, and that users are better served by using both together, when possible, than by choosing one over the other. \subsection{Semantic Graph Search Evaluation} \label{sec:seman_eval} We evaluate our novel semantic graph search on four operation-specific questions (Q7-Q10). Unlike the six questions introduced above, these questions place constraints on the relations between mentioned entities, and thus are not answerable by \reaxys{} or \uspto{} (due to their lack of relation annotations). For instance, to answer Q7, ``What are the reagents used to dilute \texttt{plasma}?'', a system needs to first locate the particular operation in a procedure where \texttt{plasma} is diluted, and then identify the reagent that facilitates this dilution operation.
This whole process can be realized in our semantic graph search module. Concretely, the graph-based query we use for Q7 is \texttt{\small ``plasma <acts-on diluted >using (?<reagent> [entity=B-Reagent][entity=I-Reagent]*)''}, which matches procedures containing ``\texttt{plasma}'' and ``\texttt{diluted}'' connected in the same semantic graph, and returns the reagents used in the form of named captures. We evaluate the performance of the semantic graph search module by manually inspecting predicted answers (randomly sampling 50 answers for Q10), and show the results in Table \ref{tab:results}. Similar to the findings in the slot-based search evaluation, \synkb{} shows good coverage while maintaining high precision. \begin{figure}[!t] \centering \includegraphics[width=0.35\textwidth]{figures/venn_diagram-cropped.pdf} \caption{Venn diagram of the answer distribution over the six slot-based search questions (macro-averaged) for all three databases. Both our \synkb{} and \reaxys{} cover a high percentage of unique answers, suggesting that users should use them together when possible. } \label{fig:unique_answer} \end{figure}
\section{Introduction} The indistinguishability-induced photon bunching effect is the foundation of stimulated emission, multi-photon interference, and photon statistics in general \cite{QO,Sun07}. The stimulated emission process is the physical mechanism underlying lasers and superluminescence. With multi-photon interference \cite{HOM,SA,PanRMP,Ou}, optical quantum information processing has been well developed \cite{KLM,KokRMP,OBreinSci}, and its advantages have been demonstrated in quantum computing via the Shor algorithm \cite{Lu07,Lanyon07,OBrien12}, boson sampling \cite{Broome,Spring,Tillmann,Bentivegna15}, and quantum metrology \cite{Vittorio11,Nagata07,SUNEPL,Xiang11}, which has achieved resolutions that extend beyond classical limits and approach the quantum Heisenberg limit. Additionally, for indistinguishable photons, the general photon number distribution shows Bose--Einstein statistics. However, when photons are partially distinguishable, the fidelity of quantum computing and the resolution of quantum metrology decrease quickly as the photon number increases, and in certain cases the advantages of quantum information processing can be lost. Moreover, the photon number distribution deviates considerably from Bose--Einstein statistics. For example, the photon number distribution shows Poisson statistics when the photons are totally distinguishable. However, the properties of photon statistics have not been clarified for partially indistinguishable photons, and the bunching effect of partially indistinguishable photons has not been resolved. In this study, we discuss the role of photon indistinguishability \cite{Sun09} in photon statistics. By defining and calculating the indistinguishability ($K_{n}$) of an $n$-photon state, the photon bunching effect is presented and analyzed in detail for partially indistinguishable photons.
Both the multi-photon indistinguishability and the multi-photon bunching effect show exponential decay with increasing photon number. Consequently, for a partially indistinguishable photon state, the photon statistical distribution is modified from Bose--Einstein statistics ($K_{n}=1$) and approaches Poisson statistics when the indistinguishability is lost ($K_{n}=0$). Because photon indistinguishability induces notable photon bunching at high photon numbers, a statistical transition of the photon state may occur. Such a photon statistical transition can be evaluated by the second-order degree of coherence, whose transition point depends strongly on the photon indistinguishability. In general, the statistical distribution of particles can be described as \begin{equation} P_{\varepsilon }\propto \frac{1}{\mathrm{e}^{\varepsilon /k_{B}T}-S}\text{,} \end{equation} where $\varepsilon$ represents the energy, $k_{B}$ the Boltzmann constant, and $T$ the absolute temperature. The statistical properties of different particles are governed by their spins and the indistinguishability of their quantum states. For indistinguishable Fermions with half-integer spins, $P_{\varepsilon }$ represents Fermi--Dirac statistics with $S=-1$, while for Bosons with integer spins it shows Bose--Einstein statistics with $S=1$. The main difference between these two distributions is the value of $S$, which describes both the permutation symmetry properties and the indistinguishability-induced bunching factor. In typical circumstances, particles interact with other particles or with the outer environment, and their quantum coherence may be lost. Thus, particles are in a mixed state and can be partially distinguishable. In this case, the value of $\left\vert S\right\vert$ should be between $0$ and $1$.
By studying the indistinguishability-induced photon bunching effect and photon statistics, we find that $S$ depends monotonically on the value of the indistinguishability. This result fills the gap in photon statistics between the indistinguishable case (Bose--Einstein statistics) and the totally distinguishable case (Poisson statistics). \section{Multi-photon indistinguishability and bunching effect} Without loss of generality, we consider a multi-photon state from $N$ separated emitters, which can be described as \cite{Sun09} \begin{equation} \rho_{NPhoton}=C_{0}\bigotimes_{k=1}^{N}(\left\vert \text{vac}\right\rangle \left\langle \text{vac}\right\vert +c_{k}\rho _{k})\text{,} \label{single} \end{equation} where $\rho _{k}$ ($\mathrm{tr}\rho _{k}=1$) describes the quantum state of a single photon, $\left\vert \text{vac}\right\rangle$ is the vacuum state, $C_{0}$ is a normalization constant, and $c_{k}>0$ is a constant determined by the processes of photon generation and collection. For simplicity, we can set all $c_{k}=c$ and $\rho _{k}=\rho$ because all emitters are in the same environment during the photon generation process. A single photon might be in a mixed state, which can be spectrally decomposed as $\rho =\int_{-\infty }^{+\infty }\mathrm{d}\omega f(\omega )\left\vert \omega \right\rangle \left\langle \omega \right\vert$ \cite{Sun09}, with $\left\vert \omega \right\rangle =\int_{-\infty }^{+\infty }\mathrm{d}\upsilon \,g_{\omega }(\upsilon )a^{\dag }(\upsilon )\left\vert \text{vac}\right\rangle$, where $a^{\dag }$ ($a$) is the single-photon creation (annihilation) operator.
$|g_{\omega }(\upsilon )|^{2}$ ($\int_{-\infty }^{+\infty }|g_{\omega }(\upsilon )|^{2}\mathrm{d}\upsilon =1$) is the spectrum of the transform-limited pulse with center frequency $\omega$ and width $\sigma _{g}$, and $f(\omega )$ ($\int_{-\infty }^{+\infty }f(\omega )\mathrm{d}\omega =1$) is the distribution of the center frequency $\omega _{c}$ with width $\sigma _{f}$. To discuss the indistinguishability-induced photon bunching effect and photon statistics, we define the indistinguishability of $n$ photons as $K_{n}=\mathrm{tr}\rho ^{n}$, with $K_{2}\equiv K=\mathrm{tr}\rho ^{2}$ \cite{Sun09} and $K_{1}=\mathrm{tr}\rho =1$. Thus, when $\sigma _{f}=0$, the single-photon state is a pure state and photons are indistinguishable, with $K=1$ and $K_{n}=1$. However, because of interactions between single photons and the outer environment or other photons in the generation process, with $\sigma _{f}>0$, photons are partially distinguishable, with $0<K_{n(n>1)}<1$. When $\sigma _{f}\gg \sigma _{g}$, the photons are totally distinguishable, with $K_{n}\longrightarrow 0$. When both $g_{\omega }(\upsilon )=\mathrm{e}^{-(\upsilon -\omega )^{2}/4\sigma _{g}^{2}}/\sqrt[4]{2\mathrm{\pi }\sigma _{g}^{2}}$ and $f(\omega )=\mathrm{e}^{-(\omega -\omega _{c})^{2}/2\sigma _{f}^{2}}/\sqrt{2\mathrm{\pi }\sigma _{f}^{2}}$ are Gaussian functions with widths $\sigma _{g}$ and $\sigma _{f}$, respectively, we obtain $K=\sigma _{g}/\sqrt{\sigma _{g}^{2}+\sigma _{f}^{2}}$ \cite{Sun09}. In this case, $K_{n}$ can be analytically derived based on the value of $K$.
Since \begin{equation} \left\langle \omega _{i}|\omega _{j}\right\rangle =\int\nolimits_{-\infty }^{+\infty }g_{\omega _{i}}^{\ast }(\upsilon )g_{\omega _{j}}(\upsilon )\mathrm{d}\upsilon =\mathrm{e}^{-(\omega _{i}-\omega _{j})^{2}/8\sigma _{g}^{2}}\text{,} \end{equation} the value of $K_{n}$ is \begin{widetext} \begin{eqnarray} K_{n} &=&\int\nolimits_{-\infty }^{+\infty }\mathrm{d}\omega _{1}\mathrm{d}\omega _{2}\cdots \mathrm{d}\omega _{n}\,f(\omega _{1})f(\omega _{2})\cdots f(\omega _{n})\left\langle \omega _{1}|\omega _{2}\right\rangle \left\langle \omega _{2}|\omega _{3}\right\rangle \cdots \left\langle \omega _{n}|\omega _{1}\right\rangle \notag \\ &=&\frac{1}{(2\mathrm{\pi }\sigma _{f}^{2})^{n/2}}\int\nolimits_{-\infty }^{+\infty }\mathrm{d}\omega _{1}\mathrm{d}\omega _{2}\cdots \mathrm{d}\omega _{n}\exp [\sum\nolimits_{i=1}^{n}(-\frac{\omega _{i}^{2}}{2\sigma _{f}^{2}})]\times \exp [-\frac{(\omega _{1}-\omega _{2})^{2}+(\omega _{2}-\omega _{3})^{2}+\cdots +(\omega _{n}-\omega _{1})^{2}}{8\sigma _{g}^{2}}] \notag \\ &=&\frac{1}{(2\sigma _{f}^{2})^{n/2}\sqrt{\det M_{n\times n}}}\text{,} \end{eqnarray} where the $n\times n$ matrix is \begin{equation*} M_{n\times n}=\left[ \begin{array}{cccccc} \frac{1}{2\sigma _{f}^{2}}+\frac{1}{4\sigma _{g}^{2}} & -\frac{1}{8\sigma _{g}^{2}} & 0 & \cdots & 0 & -\frac{1}{8\sigma _{g}^{2}} \\ -\frac{1}{8\sigma _{g}^{2}} & \frac{1}{2\sigma _{f}^{2}}+\frac{1}{4\sigma _{g}^{2}} & -\frac{1}{8\sigma _{g}^{2}} & \cdots & 0 & 0 \\ 0 & -\frac{1}{8\sigma _{g}^{2}} & \frac{1}{2\sigma _{f}^{2}}+\frac{1}{4\sigma _{g}^{2}} & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & \frac{1}{2\sigma _{f}^{2}}+\frac{1}{4\sigma _{g}^{2}} & -\frac{1}{8\sigma _{g}^{2}} \\ -\frac{1}{8\sigma _{g}^{2}} & 0 & 0 & \cdots & -\frac{1}{8\sigma _{g}^{2}} & \frac{1}{2\sigma _{f}^{2}}+\frac{1}{4\sigma _{g}^{2}} \end{array}\right] \text{.} \end{equation*} \end{widetext} In the above calculation, we simply set $\omega _{c}=0$ and applied the $n$-dimensional Gaussian integral \begin{equation*} \int\nolimits_{-\infty }^{+\infty }\exp (-\frac{1}{2}\sum\nolimits_{i,j=1}^{n}A_{i,j}x_{i}x_{j})\mathrm{d}^{n}x=\sqrt{\frac{(2\mathrm{\pi })^{n}}{\det A}}\text{,} \end{equation*} where $A$ is a symmetric positive-definite $n\times n$ matrix \cite{GMatrix}. The first five terms are listed in Table~\ref{Tab}. \begin{table}[t] \caption{The values of the multi-photon indistinguishability when the spectral distributions ($g_{\protect\omega }(\protect\upsilon )$ and $f(\protect\omega )$) of the photons are Gaussian.} \label{Tab}\centering\tabcolsep0.08in \begin{tabular}{cccccc} \hline\hline $n$ & $2$ & $3$ & $4$ & $5$ & $6$ \\ \hline $K_{n}$ & $K$ & $\frac{4K^{2}}{3+K^{2}}$ & $\frac{2K^{3}}{1+K^{2}}$ & $\frac{16K^{4}}{5+10K^{2}+K^{4}}$ & $\frac{16K^{5}}{3+10K^{2}+3K^{4}}$ \\ \hline\hline \end{tabular} \end{table} \begin{figure}[tbp] \includegraphics[width=7cm]{Fig1.eps} \caption{(a) Multi-photon indistinguishability ($K_{n}$) shows exponential decay with the photon number ($n$) for different two-photon indistinguishabilities ($K$). The solid lines are fits with Eq.~(\protect\ref{Kn}). (b) Photon bunching factor ($S=\mathrm{e}^{-\protect\alpha (K)}$) versus the two-photon indistinguishability ($K$). (c) Photon bunching coefficient ($B_{n}$) with the photon number ($n$) for different two-photon indistinguishabilities ($K$). The solid lines are fits with Eq.~(\protect\ref{BnK}). In the calculation, the spectral distributions ($g_{\protect\omega }(\protect\upsilon )$ and $f(\protect\omega )$) of the photons are Gaussian. } \label{Fig2} \end{figure} Fig.~\ref{Fig2}(a) shows that the value of $K_{n}$ decays with increasing photon number. We find that $K_{n}$ ($n\gg 1$) can be well fitted by \begin{equation} K_{n}(K)=\mathrm{e}^{-\alpha (K)n} \label{Kn} \end{equation} with a decay rate $\alpha (K)$. Also, we find that $K_{n+m}(K)=K_{n}(K)\times K_{m}(K)$.
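The closed forms in Table~\ref{Tab} can be checked numerically by discretizing $\rho$ on a frequency grid, so that $K_{n}=\mathrm{tr}\rho^{n}$ becomes the trace of a matrix power. A sketch with our own grid choices; here $\sigma_{g}=\sigma_{f}$, i.e., $K=1/\sqrt{2}$:

```python
import numpy as np

# Discretize the mixed single-photon state rho = int dw f(w) |w><w|, with
# Gaussian spectra (sigma_g: pulse width, sigma_f: center-frequency spread)
# and overlaps <w_i|w_j> = exp(-(w_i - w_j)^2 / (8 sigma_g^2)).
sigma_g = sigma_f = 1.0
w = np.linspace(-8.0, 8.0, 801)  # frequency grid (units of sigma_f)
dw = w[1] - w[0]
f = np.exp(-w**2 / (2 * sigma_f**2)) / np.sqrt(2 * np.pi * sigma_f**2)
overlap = np.exp(-(w[:, None] - w[None, :])**2 / (8 * sigma_g**2))
A = f[:, None] * overlap * dw     # tr(A^n) approximates K_n = tr(rho^n)

K2 = np.trace(A @ A)
K3 = np.trace(A @ A @ A)
K = sigma_g / np.sqrt(sigma_g**2 + sigma_f**2)
print(K2, K)                       # K_2 should reproduce K
print(K3, 4 * K**2 / (3 + K**2))   # K_3 should match the Table value
```

For $\sigma_g=\sigma_f$ this gives $K_2\approx1/\sqrt{2}$ and $K_3\approx4/7$, in agreement with $K_3=4K^2/(3+K^2)$.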
The value of $\mathrm{e}^{-\alpha (K)}$ is also shown in Fig.~\ref{Fig2}(b). When $K=1$, $\mathrm{e}^{-\alpha (K)}=1$. Additionally, when $K\longrightarrow 0$, $\mathrm{e}^{-\alpha (K)}\longrightarrow 0$. Because a nonzero $K$ will induce photon bunching, the photon number distribution of $\rho _{NPhoton}$ strongly depends on the value of $K_{n(n>1)}$. Formally, the photon state in Eq.~(\ref{single}) can be re-written as \begin{equation} \rho _{NPhoton}=C\sum_{n=0}^{N}\binom{N}{n}B_{n}c^{n}\{n\}\text{,} \end{equation} where $C$ is a new normalization constant and $\{n\}$ describes the state with photon number $n$. $B_{n}$ is an indistinguishability-induced ($K_{n(n>1)}>0$) photon bunching coefficient. \begin{figure}[tbp] \includegraphics[width=7.5cm]{Fig2.eps} \caption{Six ($3!=6$) permutations of three photons for the calculation of the three-photon indistinguishability.} \label{Fig1} \end{figure} In principle, the Bosonic permutation symmetry induces the photon bunching effect \cite{Sun07PRA}. Here we apply the permutations of $n$ photons to obtain the photon bunching coefficient of an $n$-photon state, which can be described as \begin{equation} B_{n}=\sum\limits_{k=2}^{n}D_{n,n-k}K_{k}+1\text{,} \end{equation} where $D_{n,n-k}=\frac{n!}{(n-k)!}\sum\nolimits_{i=2}^{k}(-1)^{i}/i!$ are rencontres numbers, which give the number of permutations of $n$ photons with $(n-k)$ photons fixed (not permuted). Fig.~\ref{Fig1} illustrates the $3!=6$ permutations of three photons, with $D_{3,3}=1$, $D_{3,2}=0$, $D_{3,1}=3$, and $D_{3,0}=2$. Thus, for totally distinguishable states with $K=0$, when $n>1$, $K_{n}=0$ and $B_{n}=1$. For indistinguishable states, $K_{n}=1$, $B_{n}=\sum\nolimits_{k=2}^{n}D_{n,n-k}+1=n!$ shows an $n$-photon bunching result, and $\{n\}=\left\vert n\right\rangle \left\langle n\right\vert$ is an $n$-photon Fock state. For partially indistinguishable photons, $1<B_{n}<n!$.
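The rencontres-number construction of $B_{n}$ can be checked directly: with all $K_{k}=1$ it reproduces $B_{n}=n!$, and with all $K_{k}=0$ it gives $B_{n}=1$. A short sketch (the helper names are ours; $D_{n,n-k}$ equals $\binom{n}{k}$ times the number of derangements of the $k$ permuted photons):

```python
from math import comb, factorial

def D(n, k):
    """Rencontres number D_{n, n-k}: permutations of n items with exactly
    n - k fixed points, i.e. C(n, k) times the derangements of k items."""
    derange_k = sum((-1)**i * factorial(k) // factorial(i) for i in range(k + 1))
    return comb(n, k) * derange_k

def B(n, K_list):
    """Bunching coefficient B_n = sum_{k=2}^{n} D_{n,n-k} K_k + 1,
    where K_list[k] is the k-photon indistinguishability K_k."""
    return sum(D(n, k) * K_list[k] for k in range(2, n + 1)) + 1

ones, zeros = [1] * 10, [0] * 10
print([B(n, ones) for n in range(2, 7)])   # -> [2, 6, 24, 120, 720]  (= n!)
print([B(n, zeros) for n in range(2, 7)])  # -> [1, 1, 1, 1, 1]
```

The $n=3$ values match the figure: $D(3,3)=2$ derangements (the paper's $D_{3,0}$), $D(3,2)=3$ transpositions ($D_{3,1}$), and $B_3=2+3+1=3!$.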
When $n\gg 1$, \begin{equation} \frac{B_{n+1}(K)/(n+1)!}{B_{n}(K)/n!}\rightarrow \frac{K_{n+1}}{K_{n}}=\mathrm{e}^{-\alpha (K)}\text{,} \label{Bn} \end{equation} so $B_{n}(K)/n!$ also shows an exponential decay with the photon number, with a decay rate of $\alpha (K)$. For the photon state with Gaussian spectral distributions and $n\gg 1$, \begin{equation} B_{n}(K)=n!\mathrm{e}^{-\alpha (K)(n-1)}\text{,} \label{BnK} \end{equation} which is shown in Fig.~\ref{Fig2}(c). \section{Photon distribution of partially indistinguishable photons} For totally distinguishable states, $B_{n}=1$, photon bunching does not occur and $\rho _{NPhoton}$ is a classical state with a binomial distribution. When $N\gg 1$, the binomial distribution converges to Poisson statistics \cite{OCQO}. For totally indistinguishable states with $K_{n}=1$ and $B_{n}=n!$, the photon number distribution of Eq.(\ref{single}) is \begin{equation} \rho _{NPhoton}\simeq (1-Nc)\sum_{n=0}^{N}(Nc)^{n}\left\vert n\right\rangle \left\langle n\right\vert =\sum_{n=0}^{N}P_{n}\left\vert n\right\rangle \left\langle n\right\vert \text{,} \end{equation} when $Nc<1$ and $N\gg 1$. It can be described by the Bose--Einstein statistics with \begin{equation} P_{n}=\frac{\bar{n}^{n}}{(1+\bar{n})^{n+1}}=P\frac{\mathrm{e}^{-n\varepsilon /k_{B}T}}{\mathrm{e}^{\varepsilon /k_{B}T}-1}\text{,} \label{BE} \end{equation} where $Nc=\mathrm{e}^{-\varepsilon /k_{B}T}$, $P=\mathrm{e}^{\varepsilon /k_{B}T}+\mathrm{e}^{-\varepsilon /k_{B}T}-2$ and $\bar{n}=Nc/(1-Nc)=1/(\mathrm{e}^{\varepsilon /k_{B}T}-1)$ is the mean photon number.
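The equality between the geometric weight $(1-Nc)(Nc)^{n}$ and the Bose--Einstein form $\bar{n}^{n}/(1+\bar{n})^{n+1}$ with $\bar{n}=Nc/(1-Nc)$ follows from $1+\bar{n}=1/(1-Nc)$ and can also be checked numerically; a small illustrative sketch (names our own):

```python
def geometric_pn(Nc, n):
    """Photon-number weight (1 - Nc) * Nc**n of the fully
    indistinguishable state (valid for Nc < 1, N >> 1)."""
    return (1 - Nc) * Nc**n

def bose_einstein_pn(nbar, n):
    """Bose--Einstein weight nbar^n / (1 + nbar)^(n + 1)."""
    return nbar**n / (1 + nbar) ** (n + 1)
```

The two expressions agree term by term, and the weights sum to one, as a normalized photon-number distribution must.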
However, for photons with partial indistinguishability ($0<K_{n}<1$), the photon state should be \begin{eqnarray} \rho _{NPhoton} &\simeq &(1-Nc\mathrm{e}^{-\alpha (K)})\sum_{n=0}^{N}(Nc\mathrm{e}^{-\alpha (K)})^{n}\{n\} \notag \\ &=&\sum_{n=0}^{N}P_{n}(K)\{n\} \text{.} \label{Pn} \end{eqnarray} When $Nc<1$ and $N\gg 1$, a modified Bose--Einstein statistics can be presented as \begin{equation} P_{n}(K)=P(K)\frac{\mathrm{e}^{-n[\varepsilon /k_{B}T+\alpha (K)]}}{\mathrm{e}^{\varepsilon /k_{B}T}-S}\text{,} \label{MBE} \end{equation} where $P(K)=\mathrm{e}^{\varepsilon /k_{B}T}+\mathrm{e}^{-\varepsilon /k_{B}T-2\alpha (K)}-2\mathrm{e}^{-\alpha (K)}$, and the mean photon number is $\bar{n}=Nc\mathrm{e}^{-\alpha (K)}/(1-Nc\mathrm{e}^{-\alpha (K)})=1/(\mathrm{e}^{\varepsilon /k_{B}T+\alpha (K)}-1)$. Here, $S=\mathrm{e}^{-\alpha (K)}$ is an indistinguishability induced photon bunching factor. Without changing $N$ and $c$, the statistics is modified from the Bose--Einstein statistics in Eq.(\ref{BE}) via $S$, with $S=1$ for the totally indistinguishable case ($K=1$) and $S=0$ for the totally distinguishable case ($K=0$). These results clearly demonstrate the important role of indistinguishability in photon statistics. \section{Indistinguishability induced photon bunching and statistical transition} Because photons are bosons, a statistical transition can occur when more than one photon occurs in a single mode, which results from the indistinguishability induced photon bunching effect. Here, we apply the second-order degree of coherence ($g^{(2)}(0)$) to evaluate the photon statistical transition. For the single photon state in Eq.(\ref{single}), $c$ describes the photon emission probability from an emitter and $Nc$ is the number of photons from $N$ emitters without photon bunching. When $Nc\ll 1$, $g^{(2)}(0)=1+K$.
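Eq.(\ref{MBE}) is algebraically equivalent to the geometric form $(1-NcS)(NcS)^{n}$ with $Nc=\mathrm{e}^{-\varepsilon /k_{B}T}$ and $S=\mathrm{e}^{-\alpha (K)}$. The following sketch (our own naming, writing $x=\varepsilon /k_{B}T$) checks this, including the $S=1$ limit where the plain Bose--Einstein statistics is recovered:

```python
from math import exp

def modified_be_pn(x, alpha, n):
    """Modified Bose--Einstein weight of Eq. (MBE):
    P(K) * exp(-n (x + alpha)) / (exp(x) - S), with S = exp(-alpha)
    and P(K) = e^x + e^{-x - 2 alpha} - 2 e^{-alpha}  (x = eps / k_B T)."""
    S = exp(-alpha)
    P = exp(x) + exp(-x - 2 * alpha) - 2 * exp(-alpha)
    return P * exp(-n * (x + alpha)) / (exp(x) - S)

def geometric_form(x, alpha, n):
    """Equivalent geometric weight (1 - Nc*S) * (Nc*S)**n with Nc = e^{-x}."""
    NcS = exp(-x - alpha)
    return (1 - NcS) * NcS**n
```

The agreement rests on the identity $P(K)/(\mathrm{e}^{x}-S)=1-\mathrm{e}^{-(x+\alpha)}$, which holds for any $x>0$ and $\alpha \ge 0$.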
However, when $Nc\gg 1$ and $K>0$, more than one photon occurs in the emission mode and the bunching effect from the indistinguishable multi-photon state dominates the quantum statistics, as shown in Eq.(\ref{Bn}). This finding demonstrates that photons condense into an $n$-photon Fock state with $g^{(2)}(0)\rightarrow 1$ when $n\gg 1$. Fig.~\ref{Fig3}(a) shows the values of $g^{(2)}(0)$ for different values of $K$ and the photon statistical transition from $g^{(2)}(0)=1+K$ to $g^{(2)}(0)\rightarrow 1$ with an increase in the photon number $Nc$. For the indistinguishable photon state with Bose--Einstein statistics, the transition occurs at $Nc=1$. We find that with lower $K$ values, a higher photon number $Nc$ is required to make the transition. This finding indicates that the indistinguishability induced photon bunching effect is a key contribution to the transition. Eq.(\ref{Pn}) shows that the transition points should occur approximately at $Nc=1/S$. \begin{figure}[tbp] \includegraphics[width=7.5cm]{Fig3.eps} \caption{(a) Second-order degree of coherence ($g^{(2)}(0)$) versus $Nc$. (b) Mean photon number versus $Nc$. From left to right, the ten curves in each figure correspond to two-photon indistinguishabilities ($K$) from $1$ to $0.1$. In the calculation, $N=1000$ and the spectral distributions ($g_{\protect\omega }(\protect\upsilon )$ and $f(\protect\omega )$) of the photons are Gaussian.} \label{Fig3} \end{figure} Such a transition can also be demonstrated by the mean photon number ($\bar{n}$) in Fig.~\ref{Fig3}(b). At a low emission rate ($Nc\ll 1$), spontaneous emission dominates and $\bar{n}$ increases slowly with $Nc$ for different photon indistinguishabilities. However, when $Nc>1/S$, $\bar{n}$ increases quickly with $Nc$ because the bunching effect from indistinguishable photons induces stimulated emission \cite{Sun07} and dominates the photon statistics. Higher $K$ values correspond to a faster increase.
When $Nc\gg 1/S$, $\bar{n}\rightarrow N$, which demonstrates saturation. Fig.~\ref{Fig3} shows that, although the photon emission in Eq.(\ref{single}) lacks phase coherence, the photon indistinguishability plays the same role as in the generation of a laser \cite{Abmann}. \section{Discussion and conclusion} Multi-photon interference is essential for optical quantum information processes. In addition to phase modulation, the photon indistinguishability induced bunching effect is a key parameter in multi-photon interference. Defining and calculating the multi-photon indistinguishability ($K_{n}$) are key elements in the analysis of multi-photon interference \cite{Sun07,Xiang06,RaNC} and optical quantum information processes. Because the multi-photon indistinguishability shows an exponential decay with increasing photon number, such an imperfect indistinguishability is the reason for the exponential decay in the fidelity of multi-photon entangled states and in the visibility of multi-photon interference \cite{Huang,Wang16}. Especially in the recently developed boson sampling \cite{Broome,Spring,Tillmann,Bentivegna15,Tillmann15} and quantum metrology \cite{Vittorio11,Nagata07,SUNEPL,Xiang11} with entangled photon-number states, many photons interfere in the same spatial mode. Imperfect interference with partially indistinguishable photons strongly decreases the fidelity of quantum computation and the resolution of quantum metrology. Defining the multi-photon indistinguishability provides important insights into these issues. In conclusion, we have presented the definition of the indistinguishability of multi-photon states. Based on the multi-photon emission model, we discussed the indistinguishability induced bunching effect in photon statistical behavior. The photon statistical distribution changes from a classical Poisson distribution to Bose--Einstein statistics as the multi-photon indistinguishability is increased from $0$ to $1$.
A modified Bose--Einstein statistics is presented for partially indistinguishable photons with an indistinguishability induced photon bunching factor \cite{note}. In addition to its influence on photon statistical behavior, the multi-photon indistinguishability is a key parameter in multi-photon interference for optical quantum information techniques and in the generation of lasers and superluminescence. \section*{Acknowledgment} This work is supported by the National Key Research and Development Program of China (No. 2017YFA0304504) and the National Natural Science Foundation of China (Nos. 11374290, 91536219, 61522508, 11504363).
1904.00219
\section{Prologue} \label{sec:pro} There is little argument that the study of factorization properties of rings and integral domains was a driving force in the early development of commutative algebra. Most of this work centered on determining when an algebraic structure has ``nice'' factorization properties (i.e., what today has been deemed a unique factorization domain, or UFD). It was not until the appearance of papers in the 1970s and 1980s by Skula~\cite{Sk76}, Zaks~\cite{Z80}, Narkiewicz~\cite{N71, N79}, Halter-Koch~\cite{HK84}, and Valenza\footnote{While Valenza's paper appeared in 1990, it was actually submitted 10 years earlier.}~\cite{V90} that there emerged interest in studying the deviation of an algebraic object from the UFD condition. Implicit in much of this work is the realization that problems involving factorizations of elements in a ring or integral domain are merely problems involving the multiplicative semigroup of the object in question. Hence, until the early part of the 21st century, many papers studying non-unique factorization were written from a purely multiplicative point of view (which to a large extent covered Krull domains and monoids). This changed with the appearance of~\cite{BCKR} and~\cite{CHM}, both of which view factorization problems in additive submonoids of the natural numbers known as \textit{numerical monoids}. These two papers generated a flood of work in this area, from both the pure \cite{ACHP07, CCMMP17, CDHK10, CGL09, CHK09, CKLNZ14, CK17, OP18} and the computational \cite{BOP17b, ACKT11, GGMV2, GOW19} points of view. Over the past three years, similar studies have emerged for additive submonoids of the nonnegative rational numbers, also known as \textit{Puiseux monoids} \cite{fG19a, fG17, fG18, fG19b, GG17, GO17}. The purpose of our work here is to highlight a class of Puiseux monoids with extremely nice factorization properties. This is in line with the earlier work done for numerical monoids.
Indeed, several of the papers we have already cited are dedicated to showing that while general numerical monoids have complicated factorization properties, those that are generated by an arithmetic sequence have very predictable factorization invariants (see \cite{ACHP07, BCKR, CCMMP17, CGL09}). After fixing a positive rational $r>0$, we will study the additive submonoid of~$\mathbb{Q}_{\ge 0}$ generated by the set $\{r^n \mid n \in \mathbb{N}_0 \}$. We denote this monoid by $S_r$, that is, $S_r := \langle r^n \mid n \in \mathbb{N}_0 \rangle$ (cf. Definition \ref{semiring}). Observe that $S_r$ is also closed under multiplication and, therefore, it is a \textit{semiring}. Moreover, the semiring $S_r$ is \emph{cyclic}, which means that $S_r$ is generated as a semiring by only one element, namely $r$. We emphasize that when dealing with $S_r$, we will only be interested in factorizations with regard to its additive operation. However, we will use the term ``rational cyclic semiring'' throughout this paper to represent the longer term ``Puiseux monoid generated by a geometric sequence.'' We break our work into five sections. Our paper is self-contained and all necessary background and definitions can be found in Section~\ref{sec:bfactsdefs}. In Section~\ref{sec:set of length} we completely describe the structure of the sets of lengths in $S_r$, showing that such sets are always arithmetic progressions (Theorem~\ref{thm:sets of lengths}). In Section~\ref{sec:elasticity} we investigate the elasticity of~$S_r$ (Corollary \ref{thm:sets of lengths}) and explore in Propositions~\ref{accepted},~\ref{prop:set of elasticities}, and~\ref{local} the notions of accepted, full, and local elasticity. Finally, in Section~\ref{sec:tame degree} we study the omega primality of $S_r$ (Proposition \ref{prop:omega primality}), and use it to characterize the semirings $S_r$ that are locally and globally tame (Theorem \ref{thm:cyclic semirings are no locally tame}).
\section{Basic Facts and Definitions} \label{sec:bfactsdefs} In this section we review some of the standard concepts we shall be using later. The book~\cite{pG01} by Grillet provides a nice introduction to commutative monoids while the book~\cite{GH06} by Geroldinger and Halter-Koch offers extensive background in non-unique factorization theory of commutative domains and monoids. Throughout our exposition, we let $\mathbb{N}$ denote the set of positive integers, and we set $\mathbb{N}_0 := \mathbb{N} \cup \{0\}$. For $a, b \in \mathbb{R} \cup \{\pm \infty \}$, let \[ \llb a, b \rrb = \{ x \in \mathbb{Z} \colon a \le x \le b \} \] be the discrete interval between $a$ and $b$. In addition, for $X \subseteq \mathbb{R}$ and $r \in \mathbb{R}$, we set \[ X_{> r} := \{x \in X \mid x > r\}, \] and we use the notations $X_{< r}$ and $X_{\ge r}$ in a similar manner. If $q \in \mathbb{Q}_{> 0}$, then we call the unique $a,b \in \mathbb{N}$ such that $q = a/b$ and $\gcd(a,b)=1$ the \emph{numerator} and \emph{denominator} of $q$ and denote them by $\mathsf{n}(q)$ and $\mathsf{d}(q)$, respectively. \subsection{Atomic Monoids} The unadorned term \emph{monoid} always means commutative cancellative semigroup with identity and, unless otherwise specified, each monoid here is written additively. A monoid is called \emph{reduced} if its only unit (i.e., invertible element) is $0$. For a monoid~$M$, we let $M^\bullet$ denote the set $M \! \setminus \! \{0\}$. For the remainder of this section, let $M$ be a reduced monoid. For $x,w \in M$, we say that $x$ \emph{divides} $w$ \emph{in} $M$ and write $x \mid_M w$ provided that $w = x + y$ for some $y \in M$. An element $p \in M^\bullet$ is called \emph{prime} if $p \mid_M x+y$ for some $x,y \in M$ implies that either $p \mid_M x$ or $p \mid_M y$. For $S \subseteq M$ we write $M = \langle S \rangle$ when $M$ is generated by $S$, that is, no submonoid of $M$ strictly contained in $M$ contains $S$. 
We say that $M$ is \emph{finitely generated} if it can be generated by a finite set. An element $a \in M^\bullet$ is called an \emph{atom} provided that for each pair of elements $x,y \in M$ such that $a = x+y$ either $x=0$ or $y=0$. It is not hard to verify that every prime element is an atom. The set of atoms of $M$ is denoted by $\mathcal{A}(M)$. Clearly, every generating set of $M$ must contain $\mathcal{A}(M)$. If $\mathcal{A}(M)$ generates $M$, then $M$ is called \emph{atomic}. On the other hand, $M$ is called \emph{antimatter} when $\mathcal{A}(M)$ is empty. Every submonoid $N$ of $(\mathbb{N}_0,+)$ is finitely generated and atomic. Since $N$ is reduced, $\mathcal{A}(N)$ is the unique minimal generating set of $N$. When $\mathbb{N}_0 \setminus N$ is finite, $N$ is called a \emph{numerical monoid}. It is not hard to check that every submonoid of $(\mathbb{N}_0,+)$ is isomorphic to a numerical monoid. If $N$ is a numerical monoid, then the \emph{Frobenius number} of $N$, denoted by $\mathcal{F}(N)$, is the largest element in $\mathbb{N}_0 \setminus N$. For an introduction to numerical monoids see~\cite{GR09} and for some of their many applications see~\cite{AG16}. A submonoid of $(\mathbb{Q}_{\ge 0},+)$ is called a \emph{Puiseux monoid}. In particular, every numerical monoid is a Puiseux monoid. However, Puiseux monoids need not be finitely generated or atomic. For instance, $\langle 1/2^n \mid n \in \mathbb{N}\rangle$ is a non-finitely generated Puiseux monoid with empty set of atoms. A Puiseux monoid is finitely generated if and only if it is isomorphic to a numerical monoid~\cite[Proposition~3.2]{fG17}. On the other hand, a Puiseux monoid $M$ is atomic provided that $M^\bullet$ does not have $0$ as a limit point~\cite[Theorem~3.10]{fG17} (cf. Proposition~\ref{prop:BF sufficient condition}). \subsection{Factorization Invariants} The \emph{factorization monoid} of $M$ is the free commutative monoid on $\mathcal{A}(M)$ and is denoted by $\mathsf{Z}(M)$.
The elements of $\mathsf{Z}(M)$ are called \emph{factorizations}. If $z = a_1 \dots a_n$ is a factorization of $M$ for some $a_1, \dots, a_n \in \mathcal{A}(M)$, then $n$ is called the \emph{length} of $z$ and is denoted by $|z|$. The unique monoid homomorphism $\phi \colon \mathsf{Z}(M) \to M$ satisfying $\phi(a) = a$ for all $a \in \mathcal{A}(M)$ is called the \emph{factorization homomorphism} of $M$. For each $x \in M$ the set \[ \mathsf{Z}(x) := \phi^{-1}(x) \subseteq \mathsf{Z}(M) \] is called the \emph{set of factorizations} of $x$, while the set \[ \mathsf{L}(x) := \{|z| : z \in \mathsf{Z}(x)\} \] is called the \emph{set of lengths} of $x$. If $\mathsf{L}(x)$ is a finite set for all $x \in M$, then $M$ is called a \emph{BF-monoid}. The following proposition gives a sufficient condition for a Puiseux monoid to be a BF-monoid. \begin{prop} \cite[Proposition~4.5]{fG19a} \label{prop:BF sufficient condition} Let $M$ be a Puiseux monoid. If $0$ is not a limit point of $M^\bullet$, then $M$ is a BF-monoid. \end{prop} \noindent The \emph{system of sets of lengths} of $M$ is defined by \[ \mathcal{L}(M) := \{\mathsf{L}(x) \mid x \in M\}. \] The system of sets of lengths of numerical monoids has been studied in~\cite{ACHP07} and~\cite{GS18}, while the system of sets of lengths of Puiseux monoids was first studied in~\cite{fG19b}. In addition, a friendly introduction to sets of lengths and the role they play in factorization theory is surveyed in~\cite{aG16}. If $M$ is a BF-monoid and for each nonempty subset $S \subseteq \mathbb{N}_{\geq 2}$ there exists $x \in M$ with $\mathsf{L}(x) = S$, then we say that $M$ has the \emph{Kainrath property} (see \cite{K}). In a monoid with the Kainrath property, all possible sets of lengths are obtained. For $x \in M^\bullet$, a positive integer $d$ is said to be a \emph{distance} of $x$ provided that the equality $\mathsf{L}(x) \cap \llb \ell, \ell + d \rrb = \{\ell, \ell + d\}$ holds for some $\ell \in \mathsf{L}(x)$. 
The set consisting of all the distances of $x$ is denoted by $\Delta(x)$ and called the \emph{delta set} of $x$. In addition, the set \[ \Delta(M) := \bigcup_{x \in M^\bullet} \Delta(x) \] is called the \emph{delta set} of the monoid $M$. The delta set of numerical monoids has been studied by the first author \emph{et al.} (see~\cite{BCKR, CKLNZ14} and references therein). For two factorizations $z = \sum_{a \in \mathcal{A}(M)} \mu_a a$ and $z' = \sum_{a \in \mathcal{A}(M)} \nu_a a$ in $\mathsf{Z}(M)$, we set \[ \gcd(z,z') := \sum_{a \in \mathcal{A}(M)} \min\{\mu_a, \nu_a\} a, \] and we call the factorization $\gcd(z,z')$ the \emph{greatest common divisor} of $z$ and $z'$. In addition, we call \[ \mathsf{d}(z,z') := \max \big\{ |z| - |\gcd(z,z')|, |z'| - |\gcd(z,z')| \big\} \] the \emph{distance} between $z$ and $z'$ in $\mathsf{Z}(M)$. For $N \in \mathbb{N}_0 \cup \{\infty\}$, a finite sequence $z_0, z_1, \dots, z_k$ in $\mathsf{Z}(x)$ is called an \emph{$N$-chain of factorizations} connecting $z$ and $z'$ if $z_0 = z$, $z_k = z'$, and $\mathsf{d}(z_{i-1}, z_i) \le N$ for $i \in \llb 1, k \rrb$. For $x \in M$, let $\mathsf{c}(x)$ denote the smallest $n \in \mathbb{N}_0 \cup \{\infty\}$ such that for any two factorizations in $\mathsf{Z}(x)$ there exists an $n$-chain of factorizations connecting them. We call $\mathsf{c}(x)$ the \emph{catenary degree} of $x$ and we call \[ \mathsf{c}(M) := \sup \{ \mathsf{c}(x) \mid x \in M \} \in \mathbb{N}_0 \cup \{ \infty \} \] the \emph{catenary degree} of $M$. In addition, the set \[ \mathsf{Ca}(M) := \{\mathsf{c}(x) \mid x \in M \ \text{and} \ \mathsf{c}(x) > 0\} \] is called the \emph{set of positive catenary degrees}. Recent studies of the catenary degree of numerical monoids can be found in~\cite{CCMMP17} and~\cite{OP18}. We offer the reader in Tables~\ref{Table 1} and~\ref{Table 2} a comparison of the known factorization properties between general numerical monoids and Puiseux monoids.
Table~\ref{Table 1} considers traditionally global factorization properties whose roots reach back into commutative algebra. Table~\ref{Table 2} considers the computation of factorization invariants which have become increasingly popular over the past 20 years. Definitions related to the omega invariant and the tame degree can be found in Section~\ref{sec:tame degree}. {\footnotesize \renewcommand{\arraystretch}{1.5} \begin{table}[t] \caption{Monoidal Factorization Properties: Numerical vs. Puiseux Monoids}\label{Table 1} \begin{tabular}{ | p{7cm} | p{7cm} |} \hline \rowcolor{bleudefrance} \rule{0pt}{20pt} \textbf{Let $N$ be a numerical monoid} & \textbf{Let $M$ be a Puiseux monoid} \\ \hline \rowcolor{blizzardblue}\multicolumn{2}{|c|}{Is it finitely generated?}\\ \hline Always. & Not always: $M$ is finitely generated if and only if $M$ is isomorphic to a numerical monoid \cite[Prop.~3.2]{fG17}. \\ \hline \rowcolor{blizzardblue}\multicolumn{2}{|c|}{Is it atomic?}\\ \hline Always. & Not always: $\langle 1/2^n \mid n \in \mathbb{N} \rangle$ is not atomic. $M$ is atomic if $0$ is not a limit point of $M$ \cite[Thm.~3.10]{fG17}. \\ \hline \rowcolor{blizzardblue}\multicolumn{2}{|c|}{Is it a BF-monoid (BFM)?}\\ \hline Always \cite[Prop. 2.7.8]{GH06}. & Not always: $M$ can be atomic and not a BFM \cite[Ex.~5.7]{fG19a}. \\ \hline \rowcolor{blizzardblue}\multicolumn{2}{|c|}{Is it an FF-monoid (FFM)?}\\ \hline Always \cite[Prop. 2.7.8]{GH06}. & Not always: $M$ can be a BFM and not an FFM \cite[Ex.~4.9]{GO17}. \\ \hline \rowcolor{blizzardblue}\multicolumn{2}{|c|}{Is it a Krull monoid?}\\ \hline Not always: $N$ is a Krull monoid if and only if $N$ is isomorphic to $(\mathbb{N}_0, +)$ \cite[Thm.~5.5\,(2)]{GSZ17}. & Not always: $M$ is a Krull monoid if and only if $M$ is isomorphic to $(\mathbb{N}_0, +)$ \cite[Thm.~6.6]{fG18}. \\ \hline \end{tabular} \end{table} \begin{table}[t] \caption{Monoidal Factorization Invariants: Numerical vs. 
Puiseux Monoids}\label{Table 2} \begin{tabular}{ | p{7cm} | p{7cm} |} \hline \rowcolor{bleudefrance} \rule{0pt}{20pt} \textbf{Let $N$ be a numerical monoid} & \textbf{Let $M$ be a Puiseux monoid} \\ \hline \rowcolor{blizzardblue}\multicolumn{2}{|c|}{System of sets of lengths}\\ \hline Sets of lengths in $N$ are almost arithmetic progressions \cite[Thm.~4.3.6]{GH06}. Also, for $L \subseteq \mathbb{N}_{\ge 2}$, there is a numerical monoid $N$ and $x \in N$ with $\mathsf{L}(x) = L$ \cite[Thm.~3.3]{GS18}. & Sets of lengths can have arbitrary behavior as there exists a Puiseux monoid satisfying the Kainrath property \cite[Thm.~3.6]{fG19b}. \\ \hline \rowcolor{blizzardblue}\multicolumn{2}{|c|}{Elasticity}\\ \hline $\rho(N) = \frac{\max {\mathcal{A}(N)}}{\min {\mathcal{A}(N)}}$ is always finite and accepted \cite[Thm. 2.1]{CHM}. Moreover, $N$ is fully elastic if and only if $N$ is isomorphic to $(\mathbb{N}_0, +)$ \cite[Thm.~2.2]{CHM}. & If $M$ is atomic, then $\rho(M) = \infty$ if $0$ is a limit point of $\mathcal{A}(M)$ and $\rho(M) = \frac{\sup {\mathcal{A}(M)}}{\inf {\mathcal{A}(M)}}$ otherwise \cite[Thm.~3.2]{GO17}. Moreover, $\rho(M)$ is accepted if and only if $\mathcal{A}(M)$ has a minimum and a maximum in $\mathbb{Q}$ \cite[Thm.~3.4]{GO17}. \\ \hline \rowcolor{blizzardblue}\multicolumn{2}{|c|}{Catenary degree}\\ \hline $\mathsf{c}(N) \le \frac{\mathcal{F}(N) + \max{\mathcal{A}(N)}}{\min{\mathcal{A}(N)}} + 1$ \cite[Ex. 3.1.6]{GH06}. & No known general results.\\ \hline \rowcolor{blizzardblue}\multicolumn{2}{|c|}{Tame degree}\\ \hline Always globally tame (and, consequently, locally tame) \cite[Thm. 3.1.4]{GH06}. & Not always locally tame (see Theorem~\ref{thm:cyclic semirings are no locally tame}). \\ \hline \rowcolor{blizzardblue}\multicolumn{2}{|c|}{Omega primality}\\ \hline $\omega(N) < \infty$ always. 
& $\omega(S_r) = \infty$ when $r \in \mathbb{Q} \cap (0,1)$ and $\mathsf{n}(r) > 1$ (see Theorem~\ref{thm:cyclic semirings are no locally tame}).\\ \hline \end{tabular} \end{table} } \medskip \subsection{Cyclic Rational Semirings} As mentioned in the introduction, in this paper we study factorization invariants of those Puiseux monoids that are generated as a semiring by a single element. \begin{definition}\label{semiring} For $r \in \mathbb{Q}_{>0}$, we call the Puiseux monoid~$S_r$ additively generated by the nonnegative powers of $r$ the \emph{cyclic rational semiring}, i.e., $S_r = \big\langle r^n \mid n \in \mathbb{N}_0 \big\rangle$. \end{definition} Although no systematic study of the factorization of cyclic rational semirings has been carried out so far, in~\cite{GG17} the atomicity of $S_r$ was first considered and classified in terms of the parameter $r$, as the next result indicates. \begin{theorem} \cite[Theorem~6.2]{GG17}\label{thm:atomic classification of multiplicative cyclic Puiseux monoids} For $r \in \mathbb{Q}_{> 0}$, let $S_r$ be the cyclic rational semiring generated by $r$. Then the following statements hold. \begin{enumerate} \item If $\mathsf{d}(r)=1$, then $S_r$ is atomic with $\mathcal{A}(S_r) = \{1\}$. \vspace{3pt} \item If $\mathsf{d}(r) > 1$ and $\mathsf{n}(r) = 1$, then $S_r$ is antimatter. \vspace{3pt} \item If $\mathsf{d}(r) > 1$ and $\mathsf{n}(r) > 1$, then $S_r$ is atomic with $\mathcal{A}(S_r) = \{r^n \mid n \in \mathbb{N}_0\}$. \end{enumerate} \end{theorem} As a consequence of Theorem~\ref{thm:atomic classification of multiplicative cyclic Puiseux monoids}, the monoid $S_r$ is atomic precisely when $r \in \mathbb{Q}_{> 0}$ and either $r = 1$ or $\mathsf{n}(r) > 1$. \bigskip \section{Sets of Lengths Are Arithmetic Sequences} \label{sec:set of length} In this section we show that the set of lengths of each element in an atomic rational cyclic semiring $S_r$ is an arithmetic sequence.
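This arithmetic-progression behavior can be observed computationally. The sketch below (our own code, not from the paper) explores factorizations of an element of $S_r$ for $r=a/b$ in lowest terms by closing a starting factorization under the value-preserving exchanges of $a$ copies of the atom $r^i$ for $b$ copies of $r^{i+1}$, the same local moves used in the proofs of this section:

```python
def length_set(start, a, b):
    """Lengths of all factorizations reachable from the coefficient tuple
    `start` (start[i] = copies of the atom r^i, r = a/b, gcd(a, b) = 1)
    under the value-preserving moves  a * r^i = b * r^{i+1}."""
    def trim(w):
        while len(w) > 1 and w[-1] == 0:
            w.pop()
        return tuple(w)

    seen = {trim(list(start))}
    queue = list(seen)
    while queue:
        z = queue.pop()
        for i in range(len(z)):
            if z[i] >= a:  # trade a copies of r^i for b copies of r^{i+1}
                w = list(z) + [0]
                w[i] -= a
                w[i + 1] += b
                w = trim(w)
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
            if i >= 1 and z[i] >= b:  # trade b copies of r^i for a copies of r^{i-1}
                w = list(z)
                w[i] -= b
                w[i - 1] += a
                w = trim(w)
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
    return sorted({sum(z) for z in seen})
```

For instance, with $r = 5/2$ the element $6 = 6\cdot r^0$ has lengths $\{3, 6\}$ (difference $\mathsf{n}(r)-\mathsf{d}(r)=3$), and with $r = 3/2$ the element $9$ has lengths $\{3,4,\dots,9\}$ (difference $1$), in agreement with the theorem below.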
First, we describe the minimum-length and maximum-length factorizations for elements of $S_r$. We start with the case where $0 < r < 1$. \begin{lemma} \label{lem:factorization of extremal length II} Take $r \in (0,1) \cap \mathbb{Q}$ such that $S_r$ is atomic, and for $x \in S_r^\bullet$ consider the factorization $z = \sum_{i=0}^N \alpha_i r^i \in \mathsf{Z}(x)$, where $N \in \mathbb{N}$ and $\alpha_0, \dots, \alpha_N \in \mathbb{N}_0$. The following statements~hold. \begin{enumerate} \item $\min \mathsf{L}(x) = |z|$ if and only if $\alpha_i < \mathsf{d}(r)$ for $i \in \llb 1, N \rrb$. \vspace{3pt} \item There exists exactly one factorization in $\mathsf{Z}(x)$ of minimum length. \vspace{3pt} \item $\sup \mathsf{L}(x) = \infty$ if and only if $\alpha_i \ge \mathsf{n}(r)$ for some $i \in \llb 0, N \rrb$. \vspace{3pt} \item $|\mathsf{Z}(x)| = 1$ if and only if $|\mathsf{L}(x)| = 1$, in which case, $\alpha_i < \mathsf{n}(r)$ for $i \in \llb 0, N \rrb$. \end{enumerate} \end{lemma} \begin{proof} To verify the direct implication of~(1), we only need to observe that if $\alpha_i \ge \mathsf{d}(r)$ for some $i \in \llb 1,N \rrb$, then the identity $\alpha_i r^i = (\alpha_i - \mathsf{d}(r))r^i + \mathsf{n}(r)r^{i-1}$ would yield a factorization $z'$ in $\mathsf{Z}(x)$ with $|z'| < |z|$. To prove the reverse implication, suppose that $w := \sum_{i=0}^K \beta_i r^i \in \mathsf{Z}(x)$ has minimum length. By the implication already proved, $\beta_i < \mathsf{d}(r)$ for $i \in \llb 1, N \rrb$. Insert zero coefficients if necessary and assume that $K = N$. Suppose, by way of contradiction, that there exists $m \in \llb 1,N \rrb$ such that $\beta_m \neq \alpha_m$ and assume that such index $m$ is as large as possible. Since $z,w \in \mathsf{Z}(x)$ we can write \[ (\alpha_m - \beta_m)r^m = \sum_{i=0}^{m-1} (\beta_i - \alpha_i) r^i. 
\] After multiplying the above equality by $\mathsf{d}(r)^m$, it is easy to see that $\mathsf{d}(r) \mid \alpha_m - \beta_m$, which contradicts the fact that $0 < |\alpha_m - \beta_m| < \mathsf{d}(r)$. Hence $\beta_i = \alpha_i$ for $i \in \llb 0, N \rrb$ and, therefore, $w = z$. As a result, $|z| = |w| = \min \mathsf{L}(x)$. In particular, there exists only one factorization in $\mathsf{Z}(x)$ having minimum length, and~(2) follows. For the direct implication of~(3), take a factorization $w = \sum_{i=0}^N \beta_i r^i \in \mathsf{Z}(x)$ whose length is not the minimum of $\mathsf{L}(x)$; such a factorization exists because $\sup \mathsf{L}(x) =~\infty$. By part~(1), there exists $i \in \llb 1, N \rrb$ such that $\beta_i \ge \mathsf{d}(r)$. Now we can use the identity $\beta_i r^i = (\beta_i - \mathsf{d}(r))r^i + \mathsf{n}(r)r^{i-1}$ to obtain $w_1 \in \mathsf{Z}(x)$ with $|w_1| < |w|$. Notice that there is an atom (namely $r^{i-1}$) appearing at least $\mathsf{n}(r)$ times in $w_1$. In a similar way we can obtain factorizations $w = w_0, w_1, \dots, w_n$ in $\mathsf{Z}(x)$, where $w_n =: \sum_{i=0}^N \beta'_i r^i \in \mathsf{Z}(x)$ satisfies $\beta_i' < \mathsf{d}(r)$ for $i \in \llb 1, N \rrb$. By~(1) we have that $w_n$ is a factorization of minimum length and, therefore, $z = w_n$ by~(2). Hence $\alpha_i \ge \mathsf{n}(r)$ for some $i \in \llb 0,N \rrb$, as desired. For the reverse implication, it suffices to note that given a factorization $w = \sum_{i=0}^N \beta_i r^i \in \mathsf{Z}(x)$ with $\beta_i \ge \mathsf{n}(r)$ we can use the identity $\beta_i r^i = (\beta_i - \mathsf{n}(r)) r^i + \mathsf{d}(r) r^{i+1}$ to obtain another factorization $w' = \sum_{i=0}^{N+1} \beta'_i r^i \in \mathsf{Z}(x)$ (perhaps $\beta'_{N+1} = 0$) with $|w'| > |w|$ and satisfying $\beta'_{i+1} \ge \mathsf{d}(r) > \mathsf{n}(r)$. Finally, we argue the reverse implication of~(4) as the direct implication is trivial. To do this, assume that $\mathsf{L}(x)$ is a singleton.
Then each factorization of $x$ has minimum length. By~(2) there exists exactly one factorization of minimum length in~$\mathsf{Z}(x)$. Thus, $\mathsf{Z}(x)$ is also a singleton. The last statement of~(4) is straightforward. \end{proof} We continue with the case of $r>1$. \begin{lemma} \label{lem:factorization of extremal length I} Take $r \in \mathbb{Q}_{> 1} \setminus \mathbb{N}$ such that $S_r$ is atomic, and for $x \in S_r^\bullet$ consider the factorization $z = \sum_{i=0}^N \alpha_i r^i \in \mathsf{Z}(x)$, where $N \in \mathbb{N}$ and $\alpha_0, \dots, \alpha_N \in \mathbb{N}_0$. The following statements hold. \begin{enumerate} \item $\min \mathsf{L}(x) = |z|$ if and only if $\alpha_i < \mathsf{n}(r)$ for $i \in \llb 0, N \rrb$. \vspace{3pt} \item There exists exactly one factorization in $\mathsf{Z}(x)$ of minimum length. \vspace{3pt} \item $\max \mathsf{L}(x) = |z|$ if and only if $\alpha_i < \mathsf{d}(r)$ for $i \in \llb 1, N \rrb$. \vspace{3pt} \item There exists exactly one factorization in $\mathsf{Z}(x)$ of maximum length. \vspace{3pt} \item $|\mathsf{Z}(x)| = 1$ if and only if $|\mathsf{L}(x)| = 1$, in which case $\alpha_0 < \mathsf{n}(r)$ and $\alpha_i < \mathsf{d}(r)$ for $i \in \llb 1, N \rrb$. \end{enumerate} \end{lemma} \begin{proof} To argue the direct implication of~(1) it suffices to note that if $\alpha_i \ge \mathsf{n}(r)$ for some $i \in \llb 0, N \rrb$, then we can use the identity $\alpha_i r^i = (\alpha_i - \mathsf{n}(r))r^i + \mathsf{d}(r)r^{i+1}$ to obtain a factorization $z'$ in $\mathsf{Z}(x)$ satisfying $|z'| < |z|$. For the reverse implication, suppose that $w = \sum_{i=0}^K \beta_i r^i$ is a factorization in $\mathsf{Z}(x)$ of minimum length. There is no loss in assuming that $K = N$. Note that $\beta_i < \mathsf{n}(r)$ for each $i \in \llb 0, N \rrb$ follows from the direct implication. Now suppose for a contradiction that $w \neq z$, and let $m$ be the smallest nonnegative integer satisfying that $\alpha_m \neq \beta_m$. 
Then \begin{equation} \label{eq:set length of rational semirings II} (\alpha_m - \beta_m) r^m = \sum_{i = m+1}^N (\beta_i - \alpha_i) r^i. \end{equation} After clearing the denominators in~(\ref{eq:set length of rational semirings II}), it is easy to see that $\mathsf{n}(r) \mid \alpha_m - \beta_m$, which implies that $\alpha_m = \beta_m$, a contradiction. Hence $w = z$ and so $|z| = |w| = \min \mathsf{L}(x)$. We have also proved that there exists a unique factorization of $x$ of minimum length, which is~(2). For the direct implication of~(3), it suffices to observe that if $\alpha_i \ge \mathsf{d}(r)$ for some $i \in \llb 1, N \rrb$, then we can use the identity $\alpha_i r^i = \big( \alpha_i - \mathsf{d}(r) \big) r^i + \mathsf{n}(r) r^{i-1}$ to obtain a factorization $z'$ in $\mathsf{Z}(x)$ satisfying $|z'| > |z|$. For the reverse implication of~(3), take $w = \sum_{i=0}^K \beta_i r^i$ to be a factorization in $\mathsf{Z}(x)$ of maximum length ($S_r$ is a BF-monoid by Proposition~\ref{prop:BF sufficient condition}). Once again, there is no loss in assuming that $K = N$. The maximality of $|w|$ now implies that $\beta_i < \mathsf{d}(r)$ for $i \in \llb 1, N \rrb$. Suppose, by way of contradiction, that $z \neq w$. Then take $m$ to be the smallest index such that $\alpha_m \neq \beta_m$. Clearly, $m \ge 1$ and \begin{equation} \label{eq:set length of rational semirings I} (\alpha_m - \beta_m) r^m = \sum_{i=0}^{m-1} (\beta_i - \alpha_i) r^i. \end{equation} After clearing denominators, it is easy to see that $\mathsf{d}(r) \mid \alpha_m - \beta_m$, which contradicts that $0 < |\alpha_m - \beta_m| < \mathsf{d}(r)$. Hence $\alpha_i = \beta_i$ for each $i \in \llb 1, N \rrb$, which implies that $z = w$. Thus, $\max \mathsf{L}(x) = |z|$. In particular, there exists only one factorization of $x$ of maximum length, which is condition~(4). The direct implication of~(5) is trivial. For the reverse implication of~(5), suppose that $\mathsf{L}(x)$ is a singleton.
Then any factorization in $\mathsf{Z}(x)$ is a factorization of minimum length. Since we proved in the first paragraph that $\mathsf{Z}(x)$ contains only one factorization of minimum length, we have that $\mathsf{Z}(x)$ is also a singleton. The last statement of~(5) is an immediate consequence of~(1) and~(3). \end{proof} We are in a position now to describe the sets of lengths of any atomic cyclic rational semiring. \begin{theorem} \label{thm:sets of lengths} Take $r \in \mathbb{Q}_{>0}$ such that $S_r$ is atomic. \begin{enumerate} \item If $r < 1$, then for each $x \in S_r$ with $|\mathsf{Z}(x)| > 1$, \[ \mathsf{L}(x) = \big\{ \min \mathsf{L}(x) + k \big( \mathsf{d}(r) - \mathsf{n}(r)\big) \mid k \in \mathbb{N}_0 \big\}. \] \item If $r \in \mathbb{N}$, then $|\mathsf{Z}(x)| = |\mathsf{L}(x)| = 1$ for all $x \in S_r$. \vspace{3pt} \item If $r \in \mathbb{Q}_{> 1} \setminus \mathbb{N}$, then for each $x \in S_r$ with $|\mathsf{Z}(x)| > 1$, \[ \mathsf{L}(x) = \bigg\{ \min \mathsf{L}(x) + k \big( \mathsf{n}(r) - \mathsf{d}(r) \big) \ \bigg{|} \ 0 \le k \le \frac{\max \mathsf{L}(x) - \min \mathsf{L}(x)}{\mathsf{n}(r) - \mathsf{d}(r)} \bigg\}. \] \end{enumerate} Thus, $\mathsf{L}(x)$ is an arithmetic progression with difference $|\mathsf{n}(r) - \mathsf{d}(r)|$ for all $x \in S_r$. \end{theorem} \begin{proof} To argue~(1), take $x \in S_r$ such that $|\mathsf{Z}(x)| > 1$. Let $z := \sum_{i=0}^N \alpha_i r^i$ be a factorization in $\mathsf{Z}(x)$ with $|z| > \min \mathsf{L}(x)$. Lemma~\ref{lem:factorization of extremal length II} guarantees that $\alpha_i \ge \mathsf{d}(r)$ for some $i \in \llb 1, N \rrb$. Then one can use the identity $\alpha_i r^i = (\alpha_i - \mathsf{d}(r))r^i + \mathsf{n}(r)r^{i-1}$ to find a factorization $z_1 \in \mathsf{Z}(x)$ with $|z_1| = |z| - (\mathsf{d}(r) - \mathsf{n}(r))$. 
Carrying out this process as many times as necessary, we can obtain a sequence $z_1, \dots, z_n \in \mathsf{Z}(x)$, where $z_n =: \sum_{i=0}^K \alpha'_i r^i$ satisfies that $\alpha'_i < \mathsf{d}(r)$ for $i \in \llb 1, K \rrb$ and $|z_j| = |z| - j(\mathsf{d}(r) - \mathsf{n}(r))$ for $j \in \llb 1,n \rrb$. By Lemma~\ref{lem:factorization of extremal length II}(1), the factorization $z_n$ has minimum length and, therefore, $|z| \in \{ \min \mathsf{L}(x) + k \big( \mathsf{d}(r) - \mathsf{n}(r)\big) \mid k \in \mathbb{N}_0 \}$. Then \begin{align} \label{eq:sets of lengths are arithmetic sequences 1} \mathsf{L}(x) \subseteq \big\{ \min \mathsf{L}(x) + k \big( \mathsf{d}(r) - \mathsf{n}(r)\big) \mid k \in \mathbb{N}_0 \big\}. \end{align} For the reverse inclusion, we check inductively that $\min \mathsf{L}(x) + k (\mathsf{d}(r) - \mathsf{n}(r)) \in \mathsf{L}(x)$ for every $k \in \mathbb{N}_0$. Since $|\mathsf{Z}(x)| > 1$, Lemma~\ref{lem:factorization of extremal length II}(2) guarantees that $|\mathsf{L}(x)| > 1$. Then there exists a factorization of length strictly greater than $\min \mathsf{L}(x)$, and we have already seen that such a factorization can be connected to a minimum-length factorization of $\mathsf{Z}(x)$ by a chain of factorizations in $\mathsf{Z}(x)$ with consecutive lengths differing by $\mathsf{d}(r) - \mathsf{n}(r)$. Therefore $\min \mathsf{L}(x) + (\mathsf{d}(r) - \mathsf{n}(r)) \in \mathsf{L}(x)$. Suppose now that $z = \sum_{i=0}^N \beta_i r^i$ is a factorization in $\mathsf{Z}(x)$ with length $\min \mathsf{L}(x) + k(\mathsf{d}(r) - \mathsf{n}(r))$ for some $k \in \mathbb{N}$. Then by Lemma~\ref{lem:factorization of extremal length II}(1), there exists $i \in \llb 1,N \rrb$ such that $\beta_i \ge \mathsf{d}(r) > \mathsf{n}(r)$. Now using the identity $\beta_i r^i = (\beta_i - \mathsf{n}(r))r^i + \mathsf{d}(r)r^{i+1}$, one can produce a factorization $z' \in \mathsf{Z}(x)$ such that $|z'| = \min \mathsf{L}(x) + (k+1)(\mathsf{d}(r) - \mathsf{n}(r))$. 
Hence the reverse inclusion follows by induction. Clearly, statement~(2) is a direct consequence of the fact that $r \in \mathbb{N}$ implies that $S_r = (\mathbb{N}_0,+)$. To prove~(3), take $x \in S^\bullet_r$. Since $S_r$ is a BF-monoid, there exists $z \in \mathsf{Z}(x)$ such that $|z| = \max \mathsf{L}(x)$. Take $N \in \mathbb{N}$ and $\alpha_0, \dots, \alpha_N \in \mathbb{N}_0$ such that $z = \sum_{i=0}^N \alpha_i r^i$. If $\alpha_i \ge \mathsf{n}(r)$ for some $i \in \llb 0, N \rrb$, then we can use the identity $\alpha_i r^i = (\alpha_i - \mathsf{n}(r))r^i + \mathsf{d}(r)r^{i+1}$ to find a factorization $z_1 \in \mathsf{Z}(x)$ such that $|z_1| = |z| - (\mathsf{n}(r) - \mathsf{d}(r))$. Carrying out this process as many times as needed, we will end up with a sequence $z_1, \dots, z_n \in \mathsf{Z}(x)$, where $z_n =: \sum_{i=0}^K \beta_i r^i$ satisfies that $\beta_i < \mathsf{n}(r)$ for $i \in \llb 0, K \rrb$ and $|z_j| = |z| - j(\mathsf{n}(r) - \mathsf{d}(r))$ for $j \in \llb 1, n \rrb$. Lemma~\ref{lem:factorization of extremal length I}(1) ensures that $|z_n| = \min \mathsf{L}(x)$. Then \begin{align} \label{eq:sets of lengths are arithmetic sequences 2} \bigg\{ \min \mathsf{L}(x) + j( \mathsf{n}(r) - \mathsf{d}(r) ) \ \bigg{|} \ 0 \le j \le \frac{ \max \mathsf{L}(x) - \min \mathsf{L}(x) }{ \mathsf{n}(r) - \mathsf{d}(r) } \bigg\} \subseteq \mathsf{L}(x). \end{align} On the other hand, we can connect any factorization $w \in \mathsf{Z}(x)$ to the minimum-length factorization $w' \in \mathsf{Z}(x)$ by a chain $w = w_1, \dots, w_t = w'$ of factorizations in $\mathsf{Z}(x)$ so that $|w_i|- |w_{i+1}| = \mathsf{n}(r) - \mathsf{d}(r)$. As a result, both sets involved in the inclusion~(\ref{eq:sets of lengths are arithmetic sequences 2}) are indeed equal. \end{proof} We conclude this section collecting some immediate consequences of Theorem~\ref{thm:sets of lengths}. \begin{cor} \label{bigcor} Take $r \in \mathbb{Q}_{>0}$ such that $S_r$ is atomic. 
\begin{enumerate} \item $S_r$ is a BF-monoid if and only if $r \ge 1$. \vspace{3pt} \item If $r \in \mathbb{N}$, then $S_r \cong \mathbb{N}_0$ and, as a result, $\Delta(x) = \emptyset$ and $\mathsf{c}(x) = 0$ for all $x \in S_r^\bullet$. \vspace{3pt} \item If $r \notin \mathbb{N}$, then $\Delta(x) = \{ |\mathsf{n}(r) - \mathsf{d}(r)| \}$ for all $x \in S_r$ such that $|\mathsf{Z}(x)| > 1$. Therefore $\Delta(S_r) = \{ |\mathsf{n}(r) - \mathsf{d}(r)| \}$. \vspace{3pt} \item If $r \notin \mathbb{N}$, then $\mathsf{Ca}(S_r) = \max \{\mathsf{n}(r), \mathsf{d}(r) \}$. Therefore $\mathsf{c}(S_r) = \max\{\mathsf{n}(r), \mathsf{d}(r)\}$. \end{enumerate} \end{cor} \begin{remark} Note that Corollary~\ref{bigcor}(4) contrasts with \cite[Theorem~4.2]{OP18} and \cite[Proposition~4.3.1]{GZ19}, where it is proved that most subsets of $\mathbb{N}_0$ can be realized as the set of catenary degrees of a numerical monoid and of a Krull monoid (finitely generated with finite class group), respectively. \end{remark} \section{The Elasticity} \label{sec:elasticity} \subsection{The Elasticity} An important factorization invariant related to the sets of lengths of an atomic monoid is the elasticity. Let $M$ be a reduced atomic monoid. The \emph{elasticity} of an element $x \in M^\bullet$, denoted by $\rho(x)$, is defined as \[ \rho(x) = \frac{\sup \mathsf{L}(x)}{\inf \mathsf{L}(x)}. \] By definition, $\rho(0) = 1$. Note that $\rho(x) \in \mathbb{Q}_{\ge 1} \cup \{\infty\}$ for all $x \in M^\bullet$. On the other hand, the \emph{elasticity} of the whole monoid $M$ is defined to be \[ \rho(M) := \sup \{\rho(x) \mid x \in M^\bullet\}. \] The elasticity was introduced by R.~Valenza~\cite{V90} as a tool to measure the phenomenon of non-unique factorizations in the context of algebraic number theory. The elasticity of numerical monoids has been successfully studied in~\cite{CHM}.
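The length sets and elasticities discussed here can be checked by brute force on a small example. The following Python snippet (an editorial sanity check, not part of the formal development; the choices $r = 5/2$ and $x = \mathsf{d}(r)\, r^3$ are purely illustrative) enumerates all factorizations of $x$ in $S_{5/2}$ by closing a seed factorization under the two trading identities $\mathsf{n}(r)\, r^i = \mathsf{d}(r)\, r^{i+1}$ and, for $i \ge 1$, $\mathsf{d}(r)\, r^i = \mathsf{n}(r)\, r^{i-1}$:

```python
def trim(w):
    """Drop trailing zero coefficients so equal factorizations compare equal."""
    while w and w[-1] == 0:
        w.pop()
    return tuple(w)

def factorizations(seed, n, d):
    """All factorizations of an element of S_r (r = n/d > 1) reachable from a
    seed coefficient tuple (seed[i] copies of the atom r^i) by trading
    n copies of r^i for d copies of r^(i+1), or d copies of r^i (i >= 1)
    for n copies of r^(i-1); both moves preserve the factored element."""
    start = trim(list(seed))
    seen, stack = {start}, [start]
    while stack:
        z = stack.pop()
        for i, a in enumerate(z):
            moves = []
            if a >= n:                      # shortens the factorization by n - d
                w = list(z) + [0] * (i + 2 - len(z))
                w[i], w[i + 1] = w[i] - n, w[i + 1] + d
                moves.append(w)
            if i >= 1 and a >= d:           # lengthens the factorization by n - d
                w = list(z)
                w[i], w[i - 1] = w[i] - d, w[i - 1] + n
                moves.append(w)
            for w in moves:
                t = trim(w)
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
    return seen

n, d = 5, 2                                 # r = 5/2, so n(r) = 5 and d(r) = 2
facts = factorizations((0, 0, 0, d), n, d)  # seed: d(r) * r^3, i.e. x = 125/4
lengths = sorted({sum(z) for z in facts})
print(lengths)
```

The run reports $\mathsf{L}(x) = \{2, 5, 8, \dots, 26\}$, an arithmetic progression with difference $\mathsf{n}(r) - \mathsf{d}(r) = 3$, so that $\rho(x) = 26/2 = 13$, in agreement with the description of the sets of lengths above.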
In addition, the elasticity of atomic monoids naturally generalizing numerical monoids has received substantial attention in the literature in recent years (see, for instance,~\cite{fG19c,GO17,mG19,qZ18}). In this section we focus on aspects of the elasticity of cyclic rational semirings, sharpening for them some of the results established in~\cite{GO17} and~\cite{mG19}. The following formula for the elasticity of an atomic Puiseux monoid in terms of the infimum and supremum of its set of atoms was established in~\cite{GO17}. \begin{theorem} \cite[Theorem~3.2]{GO17} \label{thm:elasticity of PM} Let $M$ be an atomic Puiseux monoid. If $0$ is a limit point of $M^\bullet$, then $\rho(M) = \infty$. Otherwise, \[ \rho(M) = \frac{\sup \mathcal{A}(M)}{\inf \mathcal{A}(M)}. \] \end{theorem} The next result is an immediate consequence of Theorem~\ref{thm:elasticity of PM}. \begin{cor} \label{cor:elasticity of rational semirings} Take $r \in \mathbb{Q}_{>0}$ such that $S_r$ is atomic. Then the following statements are equivalent: \begin{enumerate} \item[(1)] $r \in \mathbb{N}$; \vspace{3pt} \item[(2)] $\rho(S_r) = 1$; \vspace{3pt} \item[(3)] $\rho(S_r)<\infty$. \end{enumerate} Hence, if $S_r$ is atomic, then either $\rho(S_r)=1$ or $\rho(S_r)=\infty$. \end{cor} \begin{proof} To prove that (1) implies (2), suppose that $r \in \mathbb{N}$. In this case, $S_r \cong \mathbb{N}_0$. Since~$\mathbb{N}_0$ is a factorial monoid, $\rho(S_r) = \rho(\mathbb{N}_0) = 1$. Clearly, (2) implies (3). Now assume (3) and that $r \notin \mathbb{N}$. If $r < 1$, then $0$ is a limit point of $S_r^\bullet$ as $\lim_{n \to \infty} r^n = 0$. Therefore it follows by Theorem~\ref{thm:elasticity of PM} that $\rho(S_r) = \infty$. If $r > 1$, then $\lim_{n \to \infty} r^n = \infty$ and, as a result, $\sup \mathcal{A}(S_r) = \infty$. Then Theorem~\ref{thm:elasticity of PM} ensures that $\rho(S_r) = \infty$. Thus, (3) implies~(1). The final statement now easily follows. 
\end{proof} The elasticity of an atomic monoid $M$ is said to be \emph{accepted} if there exists $x \in M$ such that $\rho(M) = \rho(x)$. \begin{prop}\label{accepted} Take $r \in \mathbb{Q}_{> 0}$ such that $S_r$ is atomic. Then the elasticity of $S_r$ is accepted if and only if $r \in \mathbb{N}$ or $r < 1$. \end{prop} \begin{proof} For the direct implication, suppose that $r \in \mathbb{Q}_{> 1} \setminus \mathbb{N}$. Corollary~\ref{cor:elasticity of rational semirings} ensures that $\rho(S_r) = \infty$. However, as $0$ is not a limit point of $S_r^\bullet$, it follows by Proposition~\ref{prop:BF sufficient condition} that~$S_r$ is a BF-monoid, and, therefore, $\rho(x) < \infty$ for all $x \in S_r$. As a result, $S_r$ cannot have accepted elasticity. For the reverse implication, assume first that $r \in \mathbb{N}$ and, therefore, that $S_r = \mathbb{N}_0$. In this case, $S_r$ is a factorial monoid and, as a result, $\rho(S_r) = \rho(1) = 1$. Now suppose that $r < 1$. Then it follows by Corollary~\ref{cor:elasticity of rational semirings} that $\rho(S_r) = \infty$. In addition, for $x = \mathsf{n}(r) \in S_r$, Lemma~\ref{lem:factorization of extremal length II}(1) and Theorem~\ref{thm:sets of lengths}(1) guarantee that \[ \mathsf{L}(x) = \big\{ \mathsf{n}(r) + k \big( \mathsf{d}(r) - \mathsf{n}(r) \big) \mid k \in \mathbb{N}_0 \big\}. \] Because $\mathsf{L}(x)$ is an infinite set, we have that $\rho(S_r) = \infty = \rho(x)$. Hence $S_r$ has accepted elasticity, which completes the proof. \end{proof} \subsection{The Set of Elasticities} For an atomic monoid $M$ the set \[ R(M) = \{ \rho(x) \mid x \in M\} \] is called the \emph{set of elasticities} of $M$, and $M$ is called \emph{fully elastic} if $R(M) = \mathbb{Q} \cap [1, \rho(M)]$ when $\infty \notin R(M)$ and $R(M) \setminus \{\infty\} = \mathbb{Q} \cap [1, \infty)$ when $\infty \in R(M)$.
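Before describing $R(S_r)$ in general, the construction that realizes a prescribed elasticity when $\mathsf{n}(r) = \mathsf{d}(r) + 1$ (used in the proof of the next proposition) can be tested numerically. The snippet below (an informal editorial check; the choices $r = 3/2$ and $q = 7/5$ are illustrative) builds two factorizations with the coefficients used in that argument and verifies that they factor the same element of $S_r$ while their lengths have ratio exactly $q$:

```python
from fractions import Fraction

r = Fraction(3, 2)                     # n(r) = d(r) + 1 with d(r) = 2
q = Fraction(7, 5)                     # the elasticity to be realized
m = 1                                  # any m with m * d(q) > d(r)
k = m * (q.numerator - q.denominator)  # k = m(n(q) - d(q)) = 2
t = m * q.denominator - r.denominator  # t = m d(q) - d(r) = 3

# z  = d(r) r^k + r^(k+1) + ... + r^(k+t)                  (length d(r) + t)
# z' = d(r) * 1 + (1 + r + ... + r^(k-1)) + r^(k+1) + ... + r^(k+t)
#                                                          (length d(r) + k + t)
tail = sum(r ** (k + i) for i in range(1, t + 1))
val_z = r.denominator * r ** k + tail
val_zp = r.denominator + sum(r ** i for i in range(k)) + tail
len_z, len_zp = r.denominator + t, r.denominator + k + t

print(val_z == val_zp, Fraction(len_zp, len_z) == q)
```

Both checks succeed: the two factorizations represent the same rational number, and their length ratio is $q$; that these lengths are in fact the extremal lengths is what the lemma on extremal-length factorizations guarantees.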
Let us proceed to describe the sets of elasticities of atomic cyclic rational semirings. \begin{prop} \label{prop:set of elasticities} Take $r \in \mathbb{Q}_{> 0}$ such that $S_r$ is atomic. \begin{enumerate} \item If $r < 1$, then $R(S_r) = \{1, \infty\}$ and, therefore, $S_r$ is not fully elastic. \vspace{3pt} \item If $r \in \mathbb{N}$, then $R(S_r) = \{1\}$ and, therefore, $S_r$ is fully elastic. \vspace{3pt} \item If $r \in \mathbb{Q}_{> 1} \setminus \mathbb{N}$ and $\mathsf{n}(r) = \mathsf{d}(r) + 1$, then $S_r$ is fully elastic, in which case $R(S_r) = \mathbb{Q}_{\ge 1}$. \end{enumerate} \end{prop} \begin{proof} First, suppose that $r < 1$. Take $x \in S_r$ such that $|\mathsf{Z}(x)| > 1$. It follows by Theorem~\ref{thm:sets of lengths}(1) that $\mathsf{L}(x)$ is an infinite set, which implies that $\rho(x) = \infty$. On the other hand, $\rho(x) = 1$ for every $x \in S_r$ with $|\mathsf{Z}(x)| = 1$. As a result, $R(S_r) = \{1,\infty\}$, and then $S_r$ is not fully elastic. To argue~(2), it suffices to observe that $r \in \mathbb{N}$ implies that $S_r = (\mathbb{N}_0,+)$ is a factorial monoid and, therefore, $R(S_r) = \{1\}$. Finally, let us argue that $S_r$ is fully elastic when $\mathsf{n}(r) = \mathsf{d}(r) + 1$. To do so, fix $q \in \mathbb{Q}_{>1}$. Take $m \in \mathbb{N}$ such that $m \mathsf{d}(q) > \mathsf{d}(r)$, and set $k = m \big( \mathsf{n}(q) - \mathsf{d}(q) \big)$. Let $t = m \mathsf{d}(q) - \mathsf{d}(r)$, and consider the factorizations $z = \mathsf{d}(r) r^k + \sum_{i=1}^{t} r^{k+i} \in \mathsf{Z}(S_r)$ and $z' = \mathsf{d}(r) \cdot 1 + \sum_{i=0}^{k-1} r^i + \sum_{i=1}^{t} r^{k+i} \in \mathsf{Z}(S_r)$. Since $\mathsf{n}(r) = \mathsf{d}(r) + 1$, it can be easily checked that $\frac{1}{r-1} = \mathsf{d}(r)$. As \[ \mathsf{d}(r) + \sum_{i=0}^{k-1} r^i + \sum_{i=1}^{t} r^{k+i}= \mathsf{d}(r) + \frac{r^k - 1}{r-1} + \sum_{i=1}^{t} r^{k+i} = \mathsf{d}(r)r^k + \sum_{i=1}^{t} r^{k+i}, \] there exists $x \in S_r$ such that $z,z' \in \mathsf{Z}(x)$.
By Lemma~\ref{lem:factorization of extremal length I} it follows that $z$ is a factorization of $x$ of minimum length and $z'$ is a factorization of $x$ of maximum length. Thus, \[ \rho(x) = \frac{|z'|}{|z|} = \frac{\mathsf{d}(r) + k + t}{\mathsf{d}(r) + t} = \frac{m \, \mathsf{n}(q)}{m \, \mathsf{d}(q)} = q. \] As $q$ was arbitrarily taken in $\mathbb{Q}_{>1}$, it follows that $R(S_r) = \mathbb{Q}_{\ge 1}$. Hence $S_r$ is fully elastic when $\mathsf{n}(r) = \mathsf{d}(r) + 1$. \end{proof} \medskip We were unable to determine in Proposition~\ref{prop:set of elasticities} whether $S_r$ is fully elastic when $r \in \mathbb{Q}_{> 1} \setminus \mathbb{N}$ with $\mathsf{n}(r) \neq \mathsf{d}(r) + 1$. However, we prove in Proposition \ref{prop:the set of elasticity of S_r is dense when r > 1} that the set of elasticities of $S_r$ is dense in $\mathbb{R}_{\ge 1}$. \begin{prop} \label{prop:the set of elasticity of S_r is dense when r > 1} If $r \in \mathbb{Q}_{>1} \setminus \mathbb{N}$, then the set $R(S_r)$ is dense in $\mathbb{R}_{\ge 1}$. \end{prop} \begin{proof} Since $\sup \mathcal{A}(S_r) = \infty$, it follows by Theorem~\ref{thm:elasticity of PM} that $\rho(S_r) = \infty$. This, along with the fact that $S_r$ is a BF-monoid (because of Proposition~\ref{prop:BF sufficient condition}), ensures the existence of a sequence $\{x_n\}$ of elements of $S_r$ such that $\lim_{n \to \infty} \rho(x_n) = \infty$. Then it follows by~\cite[Lemma~5.6]{GO17} that the set \[ S := \bigg\{ \frac{\mathsf{n}(\rho(x_n)) + k}{\mathsf{d}(\rho(x_n)) + k} \ \bigg{|} \ n,k \in \mathbb{N} \bigg\} \] is dense in $\mathbb{R}_{\ge 1}$. Fix $n,k \in \mathbb{N}$. Take $m \in \mathbb{N}$ such that $r^m$ is the largest atom dividing $x_n$ in $S_r$. Now take $K := k \gcd(\min \mathsf{L}(x_n), \max \mathsf{L}(x_n))$. Consider the element $y_{n,k} := x_n + \sum_{i=1}^K r^{m + i} \in S_r$. 
It follows by Lemma~\ref{lem:factorization of extremal length I} that $x_n$ has a unique minimum-length factorization and a unique maximum-length factorization; let them be $z_0$ and $z_1$, respectively. Now consider the factorizations $w_0 := z_0 + \sum_{i=1}^K r^{m + i} \in \mathsf{Z}(y_{n,k})$ and $w_1 := z_1 + \sum_{i=1}^K r^{m + i} \in \mathsf{Z}(y_{n,k})$. Once again, we can appeal to Lemma~\ref{lem:factorization of extremal length I} to ensure that $w_0$ and $w_1$ are the minimum-length and maximum-length factorizations of $y_{n,k}$, respectively. Therefore $\min \mathsf{L}(y_{n,k}) = \min \mathsf{L}(x_n) + K$ and $\max \mathsf{L}(y_{n,k}) = \max \mathsf{L}(x_n) + K$. Then we have \[ \rho(y_{n,k}) = \frac{\max \mathsf{L}(y_{n,k})}{\min \mathsf{L}(y_{n,k})} = \frac{\max \mathsf{L}(x_n) + K}{\min \mathsf{L}(x_n) + K} = \frac{\mathsf{n}(\rho(x_n)) + k}{\mathsf{d}(\rho(x_n)) + k}. \] Since $n$ and $k$ were arbitrarily taken, it follows that $S$ is contained in $R(S_r)$. As $S$ is dense in $\mathbb{R}_{\ge 1}$, so is $R(S_r)$, which concludes our proof. \end{proof} \begin{cor} Let $r \in \mathbb{Q}_{>0}$ be such that $S_r$ is atomic. Then the set of elasticities of $S_r$ is dense in $\mathbb{R}_{\ge 1}$ if and only if $r \in \mathbb{Q}_{> 1} \setminus \mathbb{N}$. \end{cor} \begin{remark} Proposition~\ref{prop:the set of elasticity of S_r is dense when r > 1} contrasts with the fact that the set of elasticities of a numerical monoid is always nowhere dense in $\mathbb{R}$~\cite[Corollary~2.3]{CHM}. \end{remark} Wishing to have a full picture of the sets of elasticities of cyclic rational semirings, we propose the following conjecture. \begin{conj} For $r \in \mathbb{Q}_{>1} \setminus \mathbb{N}$ such that $\mathsf{n}(r) > \mathsf{d}(r) + 1$, the monoid $S_r$ is fully elastic.
\end{conj} \subsection{Local Elasticities and Unions of Sets of Lengths} For a nontrivial reduced monoid $M$ and $k \in \mathbb{N}$, we let $\mathcal{U}_k(M)$ denote the \emph{union of sets of lengths} of $M$ containing~$k$, that is, $\mathcal{U}_k(M)$ is the set of $\ell \in \mathbb{N}$ for which there exist atoms $a_1, \dots, a_k, b_1, \dots, b_\ell$ such that $a_1 + \dots + a_k = b_1 + \dots + b_\ell$. In addition, we set \[ \lambda_k(M) := \min \, \mathcal{U}_k(M) \quad \text{and} \quad \rho_k(M) := \sup \, \mathcal{U}_k(M), \] and we call $\rho_k(M)$ the \emph{$k$-th local elasticity} of $M$. Unions of sets of lengths have received a great deal of attention in recent literature; see, for example, \cite{BS18,BGG11,FGKT17,sT19}. In particular, the unions of sets of lengths and the local elasticities of Puiseux monoids have been considered in~\cite{mG19}. By~\cite[Section~1.4]{GH06}, the elasticity of an atomic monoid can be expressed in terms of its local elasticities as follows: \[ \rho(M) = \sup \bigg\{ \frac{\rho_k(M)}{k} \ \bigg{|} \ k \in \mathbb{N} \bigg\} = \lim_{k \to \infty} \frac{\rho_k (M)}{k}. \] Let us conclude this section by studying the unions of sets of lengths and the local elasticities of atomic cyclic rational semirings. \begin{prop}\label{local} Take $r \in \mathbb{Q}_{> 0}$ such that $S_r$ is atomic. Then $\mathcal{U}_k(S_r)$ is an arithmetic progression containing $k$ with difference $|\mathsf{n}(r) - \mathsf{d}(r)|$ for every $k \in \mathbb{N}$. More specifically, the following statements hold.
\begin{enumerate} \item If $r < 1$, then \begin{itemize} \item $\mathcal{U}_k(S_r) = \{k\}$ if $k < \mathsf{n}(r)$, \vspace{2pt} \item $\mathcal{U}_k(S_r) = \{ k + j (\mathsf{d}(r) - \mathsf{n}(r)) \mid j \in \mathbb{N}_0 \}$ if $\mathsf{n}(r) \le k < \mathsf{d}(r)$, and \vspace{2pt} \item $\mathcal{U}_k(S_r) = \{ k + j (\mathsf{d}(r) - \mathsf{n}(r)) \mid j \in \mathbb{Z}_{\ge \ell} \}$ for some $\ell \in \mathbb{Z}_{< 0}$ if $k \ge \mathsf{d}(r)$. \end{itemize} \vspace{3pt} \item If $r \in \mathbb{Q}_{> 1} \setminus \mathbb{N}$, then \begin{itemize} \item $\mathcal{U}_k(S_r) = \{k\}$ if $k < \mathsf{d}(r)$, \vspace{2pt} \item $\mathcal{U}_k(S_r) = \{ k + j (\mathsf{n}(r) - \mathsf{d}(r)) \mid j \in \mathbb{N}_0 \}$ if $\mathsf{d}(r) \le k < \mathsf{n}(r)$, and \vspace{2pt} \item $\mathcal{U}_k(S_r) = \{ k + j (\mathsf{n}(r) - \mathsf{d}(r)) \mid j \in \mathbb{Z}_{\ge \ell} \}$ for some $\ell \in \mathbb{Z}_{< 0}$ if $k \ge \mathsf{n}(r)$. \end{itemize} \vspace{3pt} \item If $r \in \mathbb{N}$, then $\mathcal{U}_k(S_r) = \{k\}$ for every $k \in \mathbb{N}$. \end{enumerate} \end{prop} \begin{proof} That $\mathcal{U}_k(S_r)$ is an arithmetic progression containing $k$ with difference $| \mathsf{n}(r) - \mathsf{d}(r)|$ is an immediate consequence of Theorem~\ref{thm:sets of lengths}. To show~(1), assume that $r < 1$. Suppose first that $k < \mathsf{n}(r)$. Take $L \in \mathcal{L}(S_r)$ with $k \in L$, and take $x \in S_r$ such that $L = \mathsf{L}(x)$. Choose $z = \sum_{i=0}^N \alpha_i r^i \in \mathsf{Z}(x)$ with $\sum_{i=0}^N \alpha_i = k$. Since $\alpha_i \le k < \mathsf{n}(r)$ for $i \in \llb 0, N \rrb$, Lemma~\ref{lem:factorization of extremal length II} ensures that $|\mathsf{Z}(x)| = 1$, which yields $L = \mathsf{L}(x) = \{k\}$. Thus, $\mathcal{U}_k(S_r) = \{k\}$. Now suppose that $\mathsf{n}(r) \le k < \mathsf{d}(r)$. Notice that the element $k \in S_r$ has a factorization of length $k$, namely, $k \cdot 1 \in \mathsf{Z}(k)$.
Now we can use Lemma~\ref{lem:factorization of extremal length II}(3) to conclude that $\sup \mathsf{L}(k) = \infty$. Hence $\rho_k(S_r) = \infty$. On the other hand, let $x$ be an element of $S_r$ having a factorization of length $k$. Since $k < \mathsf{d}(r)$, it follows by Lemma~\ref{lem:factorization of extremal length II}(1) that any length-$k$ factorization in $\mathsf{Z}(x)$ is a factorization of~$x$ of minimum length. Hence $\lambda_k(S_r) = k$ and, therefore, \[ \mathcal{U}_k(S_r) = \{ k + j (\mathsf{d}(r) - \mathsf{n}(r)) \mid j \in \mathbb{N}_0 \}. \] Now assume that $k \ge \mathsf{d}(r)$. As $k \ge \mathsf{n}(r)$, we have once again that $\rho_k(S_r) = \infty$. Also, because $k \ge \mathsf{d}(r)$ one finds that $(k - \mathsf{d}(r))r + \mathsf{n}(r) \cdot 1$ is a factorization in $\mathsf{Z}(kr)$ of length $k - (\mathsf{d}(r) - \mathsf{n}(r))$. Then there exists $\ell \in \mathbb{Z}_{< 0}$ such that \[ \mathcal{U}_k(S_r) = \{ k + j (\mathsf{d}(r) - \mathsf{n}(r)) \mid j \in \mathbb{Z}_{\ge \ell} \}. \] Suppose now that $r \in \mathbb{Q}_{> 1} \setminus \mathbb{N}$. Assume first that $k < \mathsf{d}(r)$. Take $L \in \mathcal{L}(S_r)$ containing $k$ and $x \in S_r$ such that $L = \mathsf{L}(x)$. If $z = \sum_{i=0}^N \alpha_i r^i \in \mathsf{Z}(x)$ satisfies $|z| = k$, then $\alpha_i \le k < \mathsf{d}(r)$ for $i \in \llb 0,N \rrb$, and Lemma~\ref{lem:factorization of extremal length I} implies that $L = \mathsf{L}(x) = \{k\}$. As a result, $\mathcal{U}_k(S_r) = \{k\}$. Suppose now that $\mathsf{d}(r) \le k < \mathsf{n}(r)$. In this case, for each $n > k$, we can consider the element $x_n = k r^n \in S_r$ and set $L_n := \mathsf{L}(x_n)$. It is not hard to check that \[ z_n := \mathsf{n}(r) \cdot 1 + \bigg( \sum_{i=1}^{n-1} \big( \mathsf{n}(r) - \mathsf{d}(r)\big) r^i \bigg) + \big( k - \mathsf{d}(r) \big) r^n \] is a factorization of $x_n$. Therefore $|z_n| = k + n( \mathsf{n}(r) - \mathsf{d}(r)) \in L_n$. 
Since $k \in L_n$ for every $n > k$, it follows that $\rho_k(S_r) = \infty$. On the other hand, it follows by Lemma~\ref{lem:factorization of extremal length I}(1) that any factorization of length $k$ of an element $x \in S_r$ must be a factorization of minimum length in $\mathsf{Z}(x)$. Hence $\lambda_k(S_r) = k$, which implies that \[ \mathcal{U}_k(S_r) = \{ k + j (\mathsf{n}(r) - \mathsf{d}(r)) \mid j \in \mathbb{N}_0 \}. \] Assume now that $k \ge \mathsf{n}(r)$. As $k \ge \mathsf{d}(r)$ we still obtain $\rho_k(S_r) = \infty$. In addition, because $k \ge \mathsf{n}(r)$, we have that $(k - \mathsf{n}(r)) \cdot 1 + \mathsf{d}(r)r$ is a factorization in $\mathsf{Z}(k)$ having length $k - (\mathsf{n}(r) - \mathsf{d}(r))$. Thus, there exists $\ell \in \mathbb{Z}_{< 0}$ such that \[ \mathcal{U}_k(S_r) = \{ k + j (\mathsf{n}(r) - \mathsf{d}(r)) \mid j \in \mathbb{Z}_{\ge \ell} \}. \] Finally, condition~(3) follows directly from the fact that $S_r = (\mathbb{N}_0,+)$ when $r \in \mathbb{N}$ and, therefore, for every $k \in \mathbb{N}$ there exists exactly one element in $S_r$ having a length-$k$ factorization, namely, $k$. \end{proof} \begin{cor} \label{cor:local elastiicity} Take $r \in \mathbb{Q}_{>0}$ such that $S_r$ is atomic. Then $\rho(S_r) < \infty$ if and only if $\rho_k(S_r) < \infty$ for every $k \in \mathbb{N}$. \end{cor} \begin{proof} It follows from \cite[Proposition~1.4.2(1)]{GH06} that $\rho_k(S_r) \le k \rho(S_r)$, which yields the direct implication. For the reverse implication, we first notice that, by Proposition~\ref{local}, if $r \notin \mathbb{N}$ and $k > \max \{\mathsf{n}(r), \mathsf{d}(r)\}$, then $\rho_k(S_r) = \infty$. Hence the fact that $\rho_k(S_r) < \infty$ for every $k \in \mathbb{N}$ implies that $r \in \mathbb{N}$. In this case $\rho(S_r) = \rho(\mathbb{N}_0) = 1$, and so $\rho(S_r) < \infty$.
\end{proof} As~\cite[Proposition~1.4.2(1)]{GH06} holds for every atomic monoid, the direct implication of Corollary~\ref{cor:local elastiicity} also holds for any atomic monoid. However, the reverse implication of the same corollary is not true even in the context of Puiseux monoids. \begin{example} Let $\{p_n\}$ be a strictly increasing sequence of primes, and consider the Puiseux monoid \[ M := \bigg\langle \frac{p_n^2 + 1}{p_n} \ \bigg{|} \ n \in \mathbb{N} \bigg \rangle. \] It is not hard to verify that the monoid $M$ is atomic with set of atoms given by the displayed generating set. Then it follows from \cite[Theorem~3.2]{GO17} that $\rho(M) = \infty$. However, \cite[Theorem~4.1(1)]{mG19} guarantees that $\rho_k(M) < \infty$ for every $k \in \mathbb{N}$. \end{example} \section{The Tame Degree} \label{sec:tame degree} \subsection{Omega Primality} Let $M$ be a reduced atomic monoid. The \emph{omega function} $\omega \colon M \to \mathbb{N}_0 \cup \{\infty\}$ is defined as follows: for each $x \in M^\bullet$ we take $\omega(x)$ to be the smallest $n \in \mathbb{N}$ satisfying that whenever $x \mid_M \sum_{i=1}^t a_i$ for some $a_1, \dots, a_t \in \mathcal{A}(M)$, there exists $T \subseteq \llb 1, t \rrb$ with $|T| \le n$ such that $x \mid_M \sum_{i \in T} a_i$. If no such $n$ exists, then $\omega(x) = \infty$. In addition, we define $\omega(0) = 0$. Then we define \[ \omega(M) := \sup\{\omega(a) \mid a \in \mathcal{A}(M)\}. \] Notice that $\omega(x) = 1$ if and only if $x$ is prime in $M$. The omega function was introduced by Geroldinger and Hassler in~\cite{GH08} to measure how far an element of an atomic monoid is from being prime. Before proving the main results of this section, let us collect two technical lemmas. \begin{lemma} \label{lem:element divisible by 1} If $r \in \mathbb{Q}_{> 1}$, then $1 \mid_{S_r} \mathsf{d}(r) r^k$ for every $k \in \mathbb{N}_0$.
\end{lemma} \begin{proof} If $r \in \mathbb{N}$, then $S_r = (\mathbb{N}_0,+)$ and the statement of the lemma follows straightforwardly. Then we assume that $r \in \mathbb{Q}_{> 1} \setminus \mathbb{N}$. For $k=0$, the statement of the lemma holds trivially. For $k \in \mathbb{N}$, consider the factorization $z_k := \mathsf{d}(r) \, r^k \in \mathsf{Z}(S_r)$. The factorization \[ z := \mathsf{n}(r) + \sum_{i=1}^{k-1} (\mathsf{n}(r) - \mathsf{d}(r)) r^i \] belongs to $\mathsf{Z}(\phi(z_k))$ (recall that $\phi \colon \mathsf{Z}(S_r) \to S_r$ is the factorization homomorphism of~$S_r$). This is because \begin{align*} \mathsf{n}(r) + \sum_{i=1}^{k-1} (\mathsf{n}(r) - \mathsf{d}(r)) r^i &= \mathsf{n}(r) + \sum_{i=1}^{k-1} \mathsf{n}(r) r^i - \sum_{i=1}^{k-1} \mathsf{d}(r) r^i \\ &= \mathsf{n}(r) + \sum_{i=1}^{k-1} \mathsf{n}(r) r^i - \sum_{i=1}^{k-1} \mathsf{n}(r) r^{i-1} = \mathsf{n}(r) r^{k-1} = \mathsf{d}(r) r^k. \end{align*} Hence $1 \mid_{S_r} \mathsf{d}(r) r^k$. \end{proof} \begin{lemma} \label{lem:1 dividies constant coefficient of min-length factorization} Take $r \in \mathbb{Q} \cap (0,1)$ such that $S_r$ is atomic, and for $x \in S_r$ let $z := \sum_{i=0}^N \alpha_i r^i$ be the factorization in $\mathsf{Z}(x)$ of minimum length. Then $\alpha_0 \ge 1$ if and only if $1 \mid_{S_r} x$. \end{lemma} \begin{proof} The direct implication is straightforward. For the reverse implication, suppose that $1 \mid_{S_r} x$. Then there exists a factorization $z' := \sum_{i=0}^K \beta_i r^i \in \mathsf{Z}(x)$ such that $\beta_0 \ge 1$. If $\beta_i \ge \mathsf{d}(r)$ for some $i \in \llb 1,K \rrb$, then we can use the identity $\mathsf{d}(r) r^i = \mathsf{n}(r) r^{i-1}$ to find another factorization $z'' \in \mathsf{Z}(x)$ such that $|z''| < |z'|$. Notice that the atom $1$ appears in~$z''$. Then we can replace $z'$ by $z''$. After carrying out such a replacement as many times as possible, we can guarantee that $\beta_i < \mathsf{d}(r)$ for $i \in \llb 1,K \rrb$.
Then Lemma~\ref{lem:factorization of extremal length II}(1) ensures that $z'$ is a minimum-length factorization of $x$. Now Lemma~\ref{lem:factorization of extremal length II}(2) implies that $z' = z$. Finally, $\alpha_0 = \beta_0 \ge 1$ follows from the fact that the atom $1$ appears in~$z'$. \end{proof} \begin{prop} \label{prop:omega primality} Take $r \in \mathbb{Q}_{> 0}$ such that $S_r$ is atomic. \begin{enumerate} \item If $r<1$, then $\omega(1) = \infty$. \vspace{3pt} \item If $r \in \mathbb{N}$, then $\omega(1) = 1$. \vspace{3pt} \item If $r \in \mathbb{Q}_{> 1} \setminus \mathbb{N}$, then $\omega(1) = \mathsf{d}(r)$. \end{enumerate} \end{prop} \begin{proof} To verify~(1), suppose that $r < 1$. Then set $x = \mathsf{n}(r) \in S_r$ and note that $1 \mid_{S_r} x$. Fix an arbitrary $N \in \mathbb{N}$. Take now $n \in \mathbb{N}$ such that $\mathsf{d}(r) + n( \mathsf{d}(r) - \mathsf{n}(r)) \ge N$. It is not hard to check that \[ z := \mathsf{d}(r) r^{n+1} + \sum_{i=1}^n (\mathsf{d}(r) - \mathsf{n}(r) ) r^i \] is a factorization in $\mathsf{Z}(x)$. Suppose that $z' = \sum_{i=1}^K \alpha_i r^i$ is a sub-factorization of $z$ such that $1 \mid_{S_r} x' := \phi(z')$. Now we can move from $z'$ to a factorization $z''$ of $x'$ of minimum length by using the identity $\mathsf{d}(r)r^{i+1} = \mathsf{n}(r)r^i$ finitely many times. As $1 \mid_{S_r} x'$, it follows by Lemma~\ref{lem:1 dividies constant coefficient of min-length factorization} that the atom $1$ appears in $z''$. Therefore, when we obtained $z''$ from~$z'$ (which does not contain $1$ as a formal atom), we must have applied the identity $\mathsf{d}(r)r = \mathsf{n}(r) \cdot 1$ at least once. As a result $z''$ contains at least $\mathsf{n}(r)$ copies of the atom~$1$. This implies that $x' = \phi(z'') \ge \mathsf{n}(r) = x$. Thus, $x' = x$, which implies that $z'$ is the whole factorization~$z$. As a result, $\omega(1) \ge |z| \ge N$. 
Since $N$ was arbitrarily taken, we can conclude that $\omega(1) = \infty$, as desired. Notice that~(2) is a direct consequence of the fact that $1$ is a prime element in $S_r = (\mathbb{N}_0,+)$. Finally, we prove~(3). Take $z = \sum_{i=0}^N \alpha_i r^i \in \mathsf{Z}(x)$ for some $x \in S_r$ such that $1 \mid_{S_r} x$. We claim that there exists a sub-factorization $z'$ of $z$ such that $|z'| \le \mathsf{d}(r)$ and $1 \mid_{S_r} \phi(z')$, where $\phi$ is the factorization homomorphism of $S_r$. If $\alpha_0 > 0$, then $1$ is one of the atoms appearing in $z$ and our claim follows trivially. Therefore assume that $\alpha_0 = 0$. Since $1 \mid_{S_r} x$ and $1$ does not appear in $z$, we have that $|\mathsf{Z}(x)| > 1$. Then conditions~(1) and~(3) in Lemma~\ref{lem:factorization of extremal length I} cannot be simultaneously true, which implies that $\alpha_i \ge \mathsf{d}(r)$ for some $i \in \llb 1, N \rrb$. Lemma~\ref{lem:element divisible by 1} ensures now that $1 \mid_{S_r} \phi(z')$ for the sub-factorization $z' := \mathsf{d}(r)r^i$ of $z$. This proves our claim and implies that $\omega(1) \le \mathsf{d}(r)$. On the other hand, take $w$ to be a strict sub-factorization of $\mathsf{d}(r) \, r$. Note that the atom $1$ does not appear in $w$. In addition, it follows by Lemma~\ref{lem:factorization of extremal length I} that $|\mathsf{Z}(\phi(w))| = 1$. Hence $1 \nmid_{S_r} \phi(w)$. As a result, we have that $\omega(1) \ge \mathsf{d}(r)$, and~(3) follows. \end{proof} \subsection{Tameness} For an atom $a \in \mathcal{A}(M)$, the {\it local tame degree} $\mathsf{t}(a) \in \mathbb{N}_0 \cup \{\infty\}$ is the smallest $n \in \mathbb{N}_0 \cup \{\infty\}$ such that in any given factorization of $x \in a + M$ at most $n$ atoms have to be replaced by at most $n$ new atoms to obtain a new factorization of $x$ that contains $a$.
More specifically, it means that $\mathsf{t}(a)$ is the smallest $n \in \mathbb{N}_0 \cup \{\infty\}$ with the following property: if $\mathsf{Z}(x) \cap (a + \mathsf{Z}(M)) \ne \emptyset$ and $z \in \mathsf{Z}(x)$, then there exists a $z' \in \mathsf{Z}(x) \cap (a + \mathsf{Z}(M))$ such that $\mathsf{d}(z,z') \le n$. \begin{definition} An atomic monoid $M$ is said to be {\it locally tame} provided that $\mathsf{t}(a) < \infty$ for all $a \in \mathcal{A}(M)$. \end{definition} \noindent Every factorial monoid is locally tame (see \cite[Theorem~1.6.6 and Theorem~1.6.7]{GH06}). In particular, $(\mathbb{N}_0,+)$ is locally tame. The tame degree of numerical monoids was first considered in~\cite{CGL09}. The factorization invariant $\tau \colon M \to \mathbb{N}_0 \cup \{\infty\}$, which was introduced in~\cite{GH08}, is defined as follows: for $k \in \mathbb{N}$ and $b \in M$, we take \[ \mathsf{Z}_{\text{min}}(k,b) := \bigg\{ \sum_{i=1}^j a_i \in \mathsf{Z}(M) \ \bigg{|} \ j \le k, \ b \mid_M \sum_{i=1}^j a_i, \, \text{ and } \, b \nmid_M \sum_{i \in I} a_i \ \text{ for any } \ I \subsetneq \llb 1,j \rrb \bigg\} \] and then we set \[ \tau(b) = \sup_k \sup_z \big\{ \min \mathsf{L}\big(\phi(z) - b \big) \mid z \in \mathsf{Z}_{\text{min}}(k,b)\big\}. \] The monoid $M$ is called {\it (globally) tame} provided that the \emph{tame degree} \[ \mathsf{t}(M) = \sup \{\mathsf{t}(a) \mid a \in \mathcal{A}(M)\} < \infty. \] The following result will be used in the proof of Theorem~\ref{thm:cyclic semirings are no locally tame}. \begin{theorem} \cite[Theorem~3.6]{GH08} \label{thm:characterization of locally tame monoids} Let $M$ be a reduced atomic monoid. Then $M$ is locally tame if and only if $\omega(a) < \infty$ and $\tau(a) < \infty$ for all $a \in \mathcal{A}(M)$. \end{theorem} We conclude this section by characterizing the cyclic rational semirings that are locally tame. 
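Before stating the theorem, we note that the rewriting identity $\mathsf{d}(r)\,r^{i+1} = \mathsf{n}(r)\,r^i$ used repeatedly in the proofs of this section is easy to experiment with computationally. The following Python sketch (our illustration only; the function names are not from the literature) shortens a factorization in $S_{3/2}$ by repeatedly rewriting $\mathsf{n}(r)$ copies of the atom $r^i$ as $\mathsf{d}(r)$ copies of $r^{i+1}$, which preserves the evaluated element and, for $r > 1$, strictly decreases length:

```python
from fractions import Fraction

def phi(z, r):
    """Evaluate a factorization z = {i: alpha_i} to the element sum_i alpha_i * r**i."""
    return sum(a * r**i for i, a in z.items())

def shorten(z, n, d):
    """Rewrite n copies of the atom r^i as d copies of r^(i+1) (valid because
    d * r^(i+1) = n * r^i) until every coefficient is < n; for r > 1 each
    rewriting step strictly decreases the length of the factorization."""
    z = dict(z)
    changed = True
    while changed:
        changed = False
        for i in sorted(z):
            if z[i] >= n:
                k = z[i] // n
                z[i] -= k * n
                z[i + 1] = z.get(i + 1, 0) + k * d
                changed = True
    return {i: a for i, a in z.items() if a}

r = Fraction(3, 2)          # n(r) = 3, d(r) = 2
z = {0: 7, 1: 4}            # the factorization 7 * 1 + 4 * r, of length 11
w = shorten(z, r.numerator, r.denominator)
print(w, sum(w.values()))   # a length-6 factorization of the same element
assert phi(w, r) == phi(z, r)
```

On this example the sketch returns the length-$6$ factorization $1 + 2r + r^2 + 2r^3$ of the same element; its coefficients below the top atom are all strictly less than $\mathsf{n}(r) = 3$, the shape of the minimum-length factorizations invoked throughout the proofs above.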
\begin{theorem} \label{thm:cyclic semirings are no locally tame} Take $r \in \mathbb{Q}_{>0}$ such that $S_r$ is atomic. Then the following conditions are equivalent: \begin{enumerate} \item $r \in \mathbb{N}$; \vspace{3pt} \item $\omega(S_r) < \infty$; \vspace{3pt} \item $S_r$ is globally tame; \vspace{3pt} \item $S_r$ is locally tame. \end{enumerate \end{theorem} \begin{proof} That (1) implies (2) follows from Proposition~\ref{prop:omega primality}(2). Now suppose that (2) holds. Then~\cite[Proposition~3.5]{GK10} ensures that $\mathsf{t}(S_r) \le \omega(S_r)^2 < \infty$, which implies~(3). In addition, (3) implies (4) trivially. To prove that (4) implies (1) suppose, by way of contradiction, that $r \in \mathbb{Q}_{> 0} \setminus \mathbb{N}$. Let us assume first that $r < 1$. In this case, $\omega(1) = \infty$ by Proposition~\ref{prop:omega primality}(3). Then it follows by Theorem~\ref{thm:characterization of locally tame monoids} that $S_r$ is not locally tame, which is a contradiction. For the rest of the proof, we assume that $r \in \mathbb{Q}_{> 1} \setminus \mathbb{N}$. We proceed to show that $\tau(1) = \infty$. For $k \in \mathbb{N}$ such that $k \ge \mathsf{d}(r)$, consider the factorization $z_k = \mathsf{d}(r) r^k \in \mathsf{Z}(S_r)$. Since any strict sub-factorization $z'_k$ of $z_k$ is of the form $\beta r^k$ for some $\beta < \mathsf{d}(r)$, it follows by Lemma~\ref{lem:factorization of extremal length I} that $|\mathsf{Z}(z'_k)| = 1$. On the other hand, $1 \mid_{S_r} \mathsf{d}(r) r^k$ by Lemma~\ref{lem:element divisible by 1}. Therefore $z_k \in \mathsf{Z}_{\text{min}}(k, 1)$. Now consider the factorization \[ z'_k := (\mathsf{n}(r) - 1) \cdot 1 + \sum_{i=1}^{k-1} (\mathsf{n}(r) - \mathsf{d}(r)) r^i. \] Proceeding as in the proof of Lemma~\ref{lem:element divisible by 1}, one can verify that $\phi(z'_k) = \mathsf{d}(r)r^k - 1$. In addition, the coefficients of the atoms $1, \dots, r^{k-1}$ in $z'_k$ are all strictly less than~$\mathsf{n}(r)$. 
Then it follows from Lemma~\ref{lem:factorization of extremal length I}(1) that $z'_k$ is a factorization of $\mathsf{d}(r)r^k - 1$ of minimum length. Because $|z'_k| = k(\mathsf{n}(r) - \mathsf{d}(r)) + \mathsf{d}(r) - 1$, one has that \begin{align*} \tau(1) &= \sup_k \sup_z \big\{ \min \mathsf{L}\big(\phi(z) - 1 \big) \mid z \in \mathsf{Z}_{\text{min}}(k,1)\big\} \\ &\ge \sup_k \min \mathsf{L}\big( \phi(z_k) - 1 \big) = \sup_k |z'_k| \\ &= \lim_{k \to \infty} \big( k(\mathsf{n}(r) - \mathsf{d}(r)) + \mathsf{d}(r) - 1 \big) \\ &= \infty. \end{align*} Hence $\tau(1) = \infty$. Then it follows by Theorem~\ref{thm:characterization of locally tame monoids} that $S_r$ is not locally tame, which contradicts condition~(4). Thus, (4) implies (1), as desired. \end{proof} \medskip \section{Summary} We close in Table~\ref{Table 3} with a comparison between the various factorization invariants we have studied for a Puiseux monoid $S_r := \langle r^n \mid n \in \mathbb{N}_0 \rangle$ generated by a geometric sequence and those for a numerical monoid generated by an arithmetic sequence, namely, \[ N := \langle n, n+d, \dots, n+kd \rangle, \] where $n$, $d$, and $k$ are positive integers with $k \le n-1$. Note that the results we obtain for the monoid $S_r$ were established for the monoid $N$ in the series of five papers \cite{ACHP07,ACKT11,BCKR,CGL09,CHM}, which appeared over a five-year period (2006--2011). {\footnotesize \begin{table}[t] \caption{Monoidal Factorization Invariant Comparison}\label{Table 3} \begin{tabular}{ | p{7cm} | p{7cm} |} \hline \rowcolor{bleudefrance} \textbf{Numerical monoids of the form} $\mathbf{N=\langle n, n+d, \dots, n+kd\rangle}$ & \textbf{Puiseux monoids of the form} $\mathbf{S_r= \big\langle r^n \mid n \in \mathbb{N}_0 \big\rangle}$ \\ \hline \rowcolor{blizzardblue}\multicolumn{2}{|c|}{System of sets of lengths}\\ \hline Sets of lengths in $N$ are arithmetic progressions \cite[Thm.~3.9]{BCKR} \cite[Thm.~2.2]{ACHP07}.
By these results, $\Delta(N)=\{d\}$. & Sets of lengths in $S_r$ are arithmetic progressions (Theorem~\ref{thm:sets of lengths}). As a consequence, $\Delta(S_r)= \{| \mathsf{n}(r)-\mathsf{d}(r)| \}$. \\ \hline \rowcolor{blizzardblue} \multicolumn{2}{|c|}{Elasticity}\\ \hline $\rho(N) = \frac{n+dk}{n}$ is accepted~\cite[Thm.~2.1]{CHM}, and $N$ is fully elastic only when $N = \mathbb{N}_0$~\cite[Thm.~2.2]{CHM}. & If $S_r$ is atomic, then $\rho(S_r) \in \{1,\infty\}$ (Corollary \ref{cor:elasticity of rational semirings}). Moreover, $\rho(S_r)$ is accepted if and only if $r < 1$ or $r \in \mathbb{N}$ (Proposition \ref{accepted}). $S_r$ is fully elastic when $\mathsf{n}(r) = \mathsf{d}(r) + 1$ (Proposition~\ref{prop:set of elasticities}). \\ \hline \rowcolor{blizzardblue} \multicolumn{2}{|c|}{Catenary degree}\\ \hline $\mathsf{c}(N) = \left\lceil \frac{n}{k}\right\rceil +d$ \cite[Thm.~14]{CGL09}. & If $S_r$ is atomic, then $\mathsf{c}(S_r) = \max\{\mathsf{n}(r), \mathsf{d}(r)\}$ (Corollary \ref{bigcor}). \\ \hline \rowcolor{blizzardblue} \multicolumn{2}{|c|}{Tame degree}\\ \hline $N$ is always globally tame (and, consequently, locally tame) \cite[Thm.~3.1.4]{GH06}. & $S_r$ is globally tame if and only if $S_r$ is locally tame if and only if $r \in \mathbb{N}$ (Theorem \ref{thm:cyclic semirings are no locally tame}). \\ \hline \rowcolor{blizzardblue} \multicolumn{2}{|c|}{Omega primality}\\ \hline $\omega(N)=\infty$ \cite[Prop.~2.1]{ACKT11}. & If $S_r$ is atomic and $r<1$, then $\omega(S_r) = \infty$ (Theorem~\ref{thm:cyclic semirings are no locally tame}).\\ \hline \end{tabular} \end{table} } \bigskip \section*{Acknowledgements} While working on this paper, the second author was supported by the UC Year Dissertation Fellowship. The authors are grateful to an anonymous referee for helpful suggestions. \bigskip
\section{Historical context} After the 1928 publication of Dirac's work on his relativistic theory of the electron \cite{dirac1}, Heisenberg immediately appreciated the significance of the new ``hole theory'' picture of the quantum vacuum of quantum electrodynamics (QED). Following some confusion, in 1931 Dirac associated the holes with positively charged electrons \cite{dirac2}: \begin{quote} {\it A hole, if there were one, would be a new kind of particle, unknown to experimental physics, having the same mass and opposite charge to an electron.} \end{quote} \noindent With the discovery of the positron in 1932, soon thereafter [but, interestingly, not immediately \cite{farmelo}], Dirac proposed at the 1933 Solvay Conference that the negative energy solutions [holes] should be identified with the positron \cite{dirac3}: \begin{quote} {\it Any state of negative energy which is not occupied represents a lack of uniformity and this must be shown by observation as a kind of hole. It is possible to assume that the positrons are these holes.} \end{quote} \noindent Positron theory and QED were born, and Heisenberg began investigating positron theory in earnest, publishing two fundamental papers in 1934, formalizing the treatment of the quantum fluctuations inherent in this Dirac sea picture of the QED vacuum \cite{heisenberg1,heisenberg2}. It was soon realized that these quantum fluctuations would lead to quantum nonlinearities \cite{heisenberg2}: \begin{quote} {\it Halpern and Debye have already independently drawn attention to the fact that the Dirac theory of the positron leads to the scattering of light by light, even when the energy of the photons is not sufficient to create pairs.} \end{quote} \noindent Halpern had published a brief note, without any details, stating that light-light scattering could occur, and Heisenberg credits Debye with suggesting, in private discussions, the physical possibility of light-light scattering.
Heisenberg set his student, Hans Euler, the task of studying light-light scattering using the density matrix formalism he had developed in \cite{heisenberg1,heisenberg2}. This work became Euler's PhD thesis \cite{eulerthesis} at Leipzig in 1936. A short paper in 1935, published with another of Heisenberg's students, Bernhard Kockel, gave the results for the light-light scattering amplitude in the low frequency limit \cite{eulerkockel}. Soon after the Euler-Kockel paper, Akhieser published [with Landau and Pomeranchuk] a brief note on the light-light scattering amplitude in the high frequency limit, work that became Akhieser's thesis under Landau's direction \cite{akhieser}. The Euler-Kockel paper made it clear that the quantum vacuum could be viewed as a medium: \begin{quote} {\it The connection between the quantities $\vec{B}$ and $\vec{D}$, on the one hand, and $\vec{E}$ and $\vec{H}$, on the other, is therefore nonlinear in this theory, since the scattering of light implies a deviation from the superposition principle.} \end{quote} \noindent They computed the leading quantum correction to the Maxwell Lagrangian: \begin{eqnarray} L=\frac{\vec{E}^2-\vec{B}^2}{2}+\frac{1}{90\pi}\frac{\hbar c}{e^2} \frac{1}{E_0^2}\left[ \left(\vec{E}^2-\vec{B}^2\right)^2+7\left(\vec{E}\cdot\vec{B}\right)^2\right]\quad , \label{quartic} \end{eqnarray} where they identified $E_0=e/(e^2/mc^2)^2$ as the (classical) ``field strength at the electron radius''. They interpreted this as vacuum polarization: \begin{eqnarray} \vec{D}&=&\vec{E}+\frac{1}{90\pi}\frac{\hbar c}{e^2} \frac{1}{E_0^2}\left[4 \left(\vec{E}^2-\vec{B}^2\right)\vec{E}-14\left(\vec{E}\cdot\vec{B}\right)\vec{B}\right]\nonumber\\ \vec{H}&=&\vec{B}+\frac{1}{90\pi}\frac{\hbar c}{e^2} \frac{1}{E_0^2}\left[4 \left(\vec{E}^2-\vec{B}^2\right)\vec{B}-14\left(\vec{E}\cdot\vec{B}\right)\vec{E}\right] \quad .
\label{polarization} \end{eqnarray} Euler and Kockel also computed the light-light scattering cross-section, for mean wavelength $\lambda$: \begin{eqnarray} Q\sim \left(\frac{e^2}{\hbar c}\right)^4\left(\frac{\hbar}{mc}\right)^4\frac{1}{\lambda^2} \quad , \label{cross} \end{eqnarray} and concluded that the effect was tiny: \begin{quote} {\it The experimental test of the deviation from the Maxwell theory is difficult since the noteworthy effects are extraordinarily small.} \end{quote} In modern language, Euler and Kockel studied QED vacuum polarization in the constant background field limit, obtaining the leading nonlinear corrections in powers of the field strengths. This was complementary to the work of Serber and Uehling \cite{serber}, at about the same time, who computed instead the corrections linear in the fields, but nonlinear in the space-time dependence of the background fields. Euler fully appreciated this distinction, in his thesis giving the general ``effective field theory'' form of the effective action as an expansion both in powers of the field strengths and in their derivatives. Euler and Kockel also commented on the formal similarity of the result (\ref{quartic}) to the work of Born and Infeld \cite{born}, who obtained similar nonlinear corrections to Maxwell theory, but from a classical perspective. In the Heisenberg-Euler paper \cite{he}, published in 1936, they extended the Euler-Kockel results in several significant ways.
First, they obtained a closed-form expression for the full nonlinear correction to the Maxwell Lagrangian, a non-perturbative expression incorporating all orders in the (uniform) background electromagnetic field, presented in the abstract of their paper: \begin{eqnarray} {\mathcal L}&=&\frac{e^2}{h c} \int_0^{\infty}\hskip -5pt \frac{d\eta}{\eta^3} e^{-\eta} \left\{i \eta^2 (\vec{E}.\vec{B})\frac{\left[\cos\left(\frac{\eta}{ {\mathcal E}_c} \sqrt{\vec{E}^2-\vec{B}^2+2 i (\vec{E}.\vec{B})}\right)+{\rm c.c.}\right]}{\left[\cos\left(\frac{\eta}{ {\mathcal E}_c} \sqrt{\vec{E}^2-\vec{B}^2+2 i (\vec{E}.\vec{B})}\right)-{\rm c.c.} \right]} + {\mathcal E}_c^2+ \frac{\eta^2}{3} (\vec{B}^2-\vec{E}^2) \right\} \quad . \label{full} \end{eqnarray} Expanding this result to quartic order in a perturbative weak field expansion, one regains (\ref{quartic}), but (\ref{full}) is the full non-perturbative expression for the effective action. Significantly, they expressed the result not in terms of the classical quantity $E_0$, as in (\ref{quartic}), but in terms of the critical field strength ${\mathcal E}_c=\alpha\, E_0$: \begin{eqnarray} {\mathcal E}_c=\frac{m^2 c^3}{e\hbar}\approx 10^{16} {\bf V}/{\rm cm}\quad , \label{critical} \end{eqnarray} which had already been identified by Sauter \cite{sauter} [who attributes the idea to Bohr] as the field strength scale at which one would expect Dirac sea electrons to tunnel into the continuum, producing electron-positron pairs from vacuum. Having the full non-perturbative result, Heisenberg and Euler were able to identify this nontrivial prediction of positron theory: the instability of the QED vacuum when subjected to a classical electric field, leading to the production of electron-positron pairs. 
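The critical field value in (\ref{critical}) is straightforward to reproduce numerically. A minimal Python check (our own, with CODATA constants hard-coded in SI units; not from the original papers) evaluates ${\mathcal E}_c = m^2c^3/(e\hbar)$ and the exponential suppression factor $\exp[-\pi {\mathcal E}_c/{\mathcal E}]$ for a field an order of magnitude below critical:

```python
import math

# Critical field E_c = m^2 c^3 / (e * hbar), evaluated in SI units.
m    = 9.1093837e-31      # electron mass [kg]
c    = 2.99792458e8       # speed of light [m/s]
e    = 1.602176634e-19    # elementary charge [C]
hbar = 1.054571817e-34    # reduced Planck constant [J*s]

E_c = m**2 * c**3 / (e * hbar)                 # [V/m]
print(f"E_c = {E_c:.2e} V/m = {E_c / 100:.2e} V/cm")

# Exponential suppression of vacuum pair production at 10% of the critical field:
suppression = math.exp(-math.pi * E_c / (0.1 * E_c))
print(f"exp(-pi E_c / E) at E = 0.1 E_c : {suppression:.1e}")
```

The first line gives ${\mathcal E}_c \approx 1.3\times 10^{16}$ V/cm, consistent with (\ref{critical}); the suppression factor at one tenth of the critical field is already of order $10^{-14}$, which makes concrete why the effect long appeared experimentally inaccessible.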
Heisenberg and Euler understood that background magnetic and electric fields lead to different physical effects \cite{he}: \begin{quote} {\it In the presence of only a magnetic field, the stationary states can be divided into those of negative and positive energy. ... The situation is different in an electric field. ... This difficulty is physically related to the fact that in an electric field, pairs of positrons and electrons are created. The exact analysis of this problem was performed by Sauter.} \end{quote} \noindent Indeed, picturing this electron-positron pair production process as a tunneling process from the Dirac sea, they used Sauter's exact solution of the Dirac equation in a constant electric field \cite{sauter} to estimate the rate of such a process as $\exp\left[-m^2 c^3 \pi/(\hbar e |E|)\right]$. Sauter had been a student in Munich, and after finding this exponential factor, stated \cite{sauter}: \begin{quote} {\it This agrees with the conjecture of N. Bohr that was given in the introduction, that one first obtains the finite probability for the transition of an electron into the region of negative impulse when the potential ramp ${\mathcal E} h/mc$ over a distance of the Compton wavelength $h/mc$ has the order of magnitude of the rest energy ... this case would correspond to around $10^{16} {\bf V}/{\rm cm}$.} \end{quote} \noindent This critical electric field value ${\mathcal E}_c$ in (\ref{critical}), is nowadays usually called the ``Schwinger critical field'' and serves as an estimate of the onset of the nonlinear QED region. Heisenberg and Euler's computation was a {\it tour de force}, working with exact solutions of the Dirac equation in a constant electric and magnetic field background, combining Euler-Maclaurin summations over the Landau levels with integral representations of the parabolic cylinder functions, in order to derive the closed-form integral representation (\ref{full}) of the effective action. 
Their starting point was simple \cite{he}: \begin{quote} {\it Due to relativistic invariance, the Lagrangian can only depend on the two invariants, ($\vec{E}^2-\vec{B}^2$) and $(\vec{E}\cdot\vec{B})$. The calculation of $U(\vec{E}, \vec{B})$ can be reduced to the question of how much energy density is associated with the matter fields in a background of constant fields $\vec{E}$ and $\vec{B}$.} \end{quote} \noindent Rewriting these invariants in terms of the eigenvalues of the field strength tensor, $a$ and $b$, where $a^2-b^2=(\vec{E}^2-\vec{B}^2)/{\mathcal E}_c^2$, and $a\, b=(\vec{E}\cdot\vec{B})/{\mathcal E}_c^2$, Heisenberg and Euler also wrote the effective action (\ref{full}) as \cite{he} \begin{eqnarray} {\mathcal L}&=& 4\pi^2 mc^2 \left(\frac{mc}{h}\right)^3 \int_0^{\infty}\frac{d\eta}{\eta^3} \,e^{-\eta} \left\{-a\, \eta\, {\rm ctg}(a \, \eta)\, b\, \eta\, {\rm cotanh}(b \, \eta) +1 +\frac{\eta^2}{3} (b^2-a^2) \right\}\, . \label{full2} \end{eqnarray} They noted \cite{he}: \begin{quote} {\it The integral (for $b=0$) around the pole $\eta=\pi/a$ has the value: $(-2i/\pi)4a^2mc^2(mc/h)^3\, e^{-\pi/a}$. This is the order of the terms which are associated with the pair creation in an electric field.} \end{quote} A third remarkable feature of the Heisenberg-Euler analysis is that they identified the physical significance of the subtraction terms in (\ref{full}, \ref{full2}). They noted that the first subtraction term corresponds to the subtraction of the (infinite) free-field effective action. They further realized that the other (logarithmically divergent) subtraction term had the functional form of the classical Maxwell action. Heisenberg was particularly interested in the physical significance of such logarithmically divergent terms, which now can be seen as an embryonic recognition of charge renormalization. 
Finally, the integral representation in (\ref{full}, \ref{full2}) has the form that we nowadays refer to as the ``proper-time form'', as discussed in the next section. Soon after the 1936 Heisenberg-Euler paper, Weisskopf presented a considerably simplified computation of the Heisenberg-Euler effective action, for both spinor and scalar QED, working directly from the spectrum of the Dirac [and Klein-Gordon] equation rather than from the eigenfunctions and density matrix. He also stated very clearly the new physical perspective \cite{weisskopf}: \begin{quote} {\it The electromagnetic properties of the vacuum can be described by a field-dependent electric and magnetic polarizability of empty space, which leads, for example, to refraction of light in electric fields or to a scattering of light by light.} \end{quote} \section{Proper-time formulation: Feynman and Schwinger} The next major developments in the subject came with work of Fock \cite{fock} and St\"uckelberg \cite{stueckelberg}, but this was not widely appreciated until the work of Feynman and Schwinger, who developed two different but complementary perspectives of the vacuum polarization problem in terms of proper-time evolution. Feynman was trying to extend his path integral representation of non-relativistic quantum mechanics to the relativistic positron theory of Dirac. Fock had formulated the Dirac equation in terms of proper-time evolution \cite{fock}, and St\"uckelberg \cite{stueckelberg} had proposed a physical picture in which positrons propagate backwards in real time, while proper-time evolves monotonically.
Feynman \cite{feynman1,feynmanappendix,feynman3} proposed to represent a quantum transition matrix element as a path integral over all paths in four-dimensional space-time, with the paths parameterized by a fifth parameter \cite{feynmanappendix}: \begin{quote} {\it We try to represent the amplitude for a particle to get from one point to another as a sum over all trajectories of an amplitude $\exp[i \,S]$ where $S$ is the classical action for given trajectory. To maintain the relativistic invariance in evidence the idea suggests itself of describing a trajectory in space-time by giving the four variables $x_\mu(u)$ as functions of some fifth parameter $u$ ... (somewhat analogous to proper time).} \end{quote} \noindent Feynman noted that in such a construction there would be trajectories that appear to go backwards in time, but, motivated by the work of St\"uckelberg, he had the brilliant idea to identify the forward-in-time paths with electrons and the backward-in-time paths with positrons. He showed that this led to a consistent path integral formulation of positron theory, incorporating all the pair-creation and annihilation processes of QED as twists and turns of space-time paths.
Nambu put it most eloquently \cite{nambu}: \begin{quote} {\it The time itself loses sense as the indicator of the development of phenomena; there are particles which flow down as well as up the stream of time; the eventual creation and annihilation of pairs that may occur now and then, is no creation and annihilation, but only a change of directions of moving particles, from past to future, or from future to past; a virtual pair, which, according to the ordinary view, is foredoomed to exist only for a limited interval of time, may also be regarded as a single particle that is circulating round a closed orbit in the four-dimensional theatre; a real particle is then a particle whose orbit is not closed but reaches to infinity.} \end{quote} Feynman noted the important contributions of Fock, St\"uckelberg and Nambu. Nambu had extended Fock's proper-time approach to the Klein-Gordon equation for scalar QED, and computed the path integral propagation amplitude for a constant background field. In this approach, the Klein-Gordon equation [for scalar QED] appears as a Schr\"odinger-like equation: \begin{eqnarray} i\frac{\partial}{\partial u}\phi=-\frac{1}{2}\left(i\frac{\partial}{\partial x_\mu}-A_\mu\right)^2\phi \equiv {\mathcal H}\,\phi \quad . \label{kg} \end{eqnarray} Studying a path integral representation of the corresponding amplitude for evolution in $u$, the amplitude $\langle x|e^{-i\,{\mathcal H}\, u}|y\rangle$, Feynman arrived at what is now known as the world line representation of the [scalar] QED effective action: \begin{eqnarray} \Gamma[A]=-\int_0^\infty \frac{du}{u}e^{-m^2 u}\int d^4 x\int_{x(u)=x(0)=x} {\mathcal D}x\, e^{-S[x]} \quad , \label{wl} \end{eqnarray} where $S[x]$ is the classical action for a charged scalar particle to propagate on a space-time trajectory $x^\mu(\tau)$ for total proper time $u$: \begin{eqnarray} S[x]=\int_0^u d\tau\left(\frac{1}{2}\left(\frac{dx^\mu}{d\tau}\right)^2+e\, A_\mu \frac{dx^\mu}{d\tau}\right) \quad . 
\label{action} \end{eqnarray} This worldline path integral representation (\ref{wl}) appeared in the appendix of one of Feynman's QED papers \cite{feynmanappendix}. Morette studied its mathematical basis \cite{morette}. However, Feynman's work on this first-quantized form of QED was largely unappreciated for many years. Interest was revived by the string theory work of Polyakov \cite{polyakov}, and the subsequent study of the field theory limit of string theory in the work of Bern and Kosower \cite{bernkosower}. Bern and Kosower showed that perturbative computations of scattering amplitudes in quantum field theory could be expressed in this first-quantized world line language, and in fact this leads to more efficient methods for certain multi-loop amplitudes \cite{bernkosower}. This idea has become a powerful modern method of computation in quantum field theory, from QCD to super-Yang-Mills, to supergravity. In fact, it also has a non-perturbative side, and this leads to a promising way to compute the non-perturbative vacuum pair-production probability, extending the result of Sauter and Heisenberg-Euler to inhomogeneous background electric fields \cite{dunne-eli}. Soon after Feynman's work, Schwinger published a seminal paper \cite{schwinger1} re-formulating the results of Heisenberg and Euler in the new language of renormalized QED. Schwinger also viewed the QED processes as evolution in proper-time, but instead of a path integral method, he used first an operator solution of the proper-time evolution, and later developed a formalism based on Fredholm determinants \cite{schwinger2}. 
Schwinger's ``On Gauge Invariance and Vacuum Polarization'' paper presents a careful treatment of the Green's functions, studying renormalization and gauge invariance \cite{schwinger1}: \begin{quote} {\it A renormalization of the field strength and charge, applied to the modified lagrange function for constant fields, yields a finite, gauge invariant result which implies nonlinear properties for the electromagnetic field in the vacuum.} \end{quote} \noindent He interprets the use of proper time as providing a gauge invariant regulator. This paper presents the exact result for the effective action for two special cases: first, the uniform background field treated by Heisenberg and Euler, and second the plane-wave background field, for which the Dirac equation had been solved by Volkov \cite{volkov}, and for which the effective action vanishes. Schwinger writes the effective Lagrangian in terms of the proper time evolution operator $U(s)$: \begin{eqnarray} {\mathcal L}(x)&=&\frac{1}{2}\, i\, \int_0^\infty \frac{ds}{s}\,\exp (-im^2 s)\, {\rm tr}(x|U(s)|x)\nonumber\\ U(s)&=&\exp(-i\, {\mathcal H}\, s)\qquad, \qquad {\mathcal H}=\Pi_\mu^2-\frac{1}{2}e\sigma_{\mu\nu}F_{\mu\nu}\quad , \label{schwinger} \end{eqnarray} noting that \cite{schwinger1} \begin{quote} {\it $U(s)$ may be regarded as the operator describing the development of a system governed by the `hamiltonian', ${\mathcal H}$, in the `time' $s$, the matrix element [$(x'|U(s)|x'')$] of $U(s)$ being the transformation function from a state in which $x_\mu(s=0)$ has the value $x_\mu^{\prime\prime}$ to a state in which $x_\mu(s)$ has the value $x_\mu^\prime$.} \end{quote} \noindent Using results of Fock \cite{fock} and Nambu \cite{nambu} for $U(s)$ for a constant background field strength, Schwinger presents expressions for the effective Lagrangian in complete agreement with the expressions of Heisenberg-Euler \cite{he} and Weisskopf \cite{weisskopf}. 
Furthermore, from the expression for a constant electric field ${\mathcal E}$, \begin{eqnarray} {\mathcal L}=\frac{1}{2}{\mathcal E}^2-\frac{1}{8\pi^2}\int_0^\infty \frac{ds}{s^3}\left[ e{\mathcal E}s\, {\rm cot}(e{\mathcal E}s)-1+\frac{1}{3}(e{\mathcal E}s)^2\right] \quad , \label{sch} \end{eqnarray} Schwinger extracted the full imaginary part, extending Heisenberg and Euler's derivation of the leading imaginary part, obtaining the instanton sum: \begin{eqnarray} 2\, {\rm Im}\, {\mathcal L}=\frac{\alpha^2}{\pi^2}\, {\mathcal E}^2\sum_{n=1}^\infty n^{-2}\, \exp\left(\frac{-n\, \pi\, m^2}{e\, {\mathcal E}}\right) \quad . \label{imag} \end{eqnarray} Soon afterwards, in number V of his series of six papers on QED, ``The Theory of Quantized Fields I-VI'', Schwinger formally defines the effective action in terms of the determinant of the Dirac operator \cite{schwinger2}: \begin{eqnarray} \Gamma =-i\, \ln\, \det\left(1-e\gamma A\, G_+^{(0)}\right)\quad , \label{det} \end{eqnarray} where $G_+^{(0)}=1/(-i\partial\hskip -5pt / +m +i\epsilon)$ is the free Feynman propagator. Developing the theory of such Fredholm determinants, Schwinger shows that the proper-time representation leads naturally to an expression of the integral representation form (\ref{full}) found by Heisenberg and Euler in the constant background field case. This work puts the derivation of Heisenberg and Euler on a firmer theoretical and computational basis, and has been the inspiration for much of the subsequent development of quantum field theory in background fields. \section{Scientific legacy of Heisenberg and Euler} The work of Heisenberg and Euler continues to have a profound impact even today, 75 years later. It is clear that they were well ahead of their time. Here I give a very abbreviated list of some developments that have come directly from their work. Of course it is not possible to be comprehensive in such a short space. Further details can be found in \cite{greiner,dr-qed,dittrichgies,dk}. 
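Before surveying these developments, it is worth noting that the structure of (\ref{sch}) and (\ref{imag}) is easy to probe numerically. The short Python sketch below (ours, not from the original papers) checks that the weak-field limit of Schwinger's integrand behaves as $-x^4/45$ with $x = e{\mathcal E}s$, the term that generates the quartic Euler-Kockel correction (\ref{quartic}), and that the $n=1$ term dominates the instanton sum (\ref{imag}) for ${\mathcal E} \ll {\mathcal E}_c$:

```python
import math

def bracket(x):
    """The factor [x*cot(x) - 1 + x**2/3] in Schwinger's proper-time
    representation of the effective Lagrangian, with x = e*E*s."""
    return x / math.tan(x) - 1.0 + x**2 / 3.0

# Weak-field limit: x*cot(x) = 1 - x^2/3 - x^4/45 - O(x^6), so bracket -> -x^4/45.
for x in (0.1, 0.01):
    print(f"x = {x}: bracket/(-x^4/45) = {bracket(x) / (-x**4 / 45.0):.6f}")

# Instanton sum at E = 0.1 E_c: fraction of the sum carried by the n >= 2 terms.
ratio = 10.0                                            # E_c / E
total = sum(math.exp(-n * math.pi * ratio) / n**2 for n in range(1, 50))
rest  = (total - math.exp(-math.pi * ratio)) / total
print(f"relative weight of n >= 2 terms: {rest:.1e}")
```

At sub-critical fields the $n=1$ term carries essentially all of the pair-production rate, which is why the single Sauter exponential serves as the standard benchmark for the effect.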
\subsection{Light-light scattering} The full light-light scattering amplitude was computed some years later by Karplus and Neuman \cite{karplus}. The effect is perturbatively very small and has not yet been directly observed. On the other hand, the related vacuum polarization effect of Delbr\"uck scattering has been seen \cite{delbruck}. \subsection{Beta functions} Weisskopf studied further \cite{weisskopf2} the logarithmic divergences that had been identified by Heisenberg and Euler, and which we now associate with charge renormalization. In fact, nowadays this idea provides a direct approach to compute the $\beta$ function for the running charge, using the external field scale rather than an external momentum scale. This provides an interesting perspective on vacuum polarization \cite{pagels,fujikawa,hansson} and becomes a computationally powerful method at higher loops \cite{shifman}. \subsection{Vacuum pair production} While vacuum pair production was a definite, and quantitative, prediction in the Heisenberg-Euler \cite{he} paper [following the work of Sauter \cite{sauter}], the necessary electric field strength is so astronomical that it appeared out of experimental reach. After the discovery of lasers, the question was revisited by Br\'ezin \& Itzykson \cite{brezin} and Popov \& Marinov \cite{popov}, adapting to QED the seminal work of Keldysh \cite{keldysh} and collaborators in the theory of atomic ionization. Keldysh considered ionization not in a constant electric field but in a monochromatic time-dependent field $E(t)={\mathcal E}\,\cos(\omega\, t)$, defining a dimensionless ``adiabaticity parameter'' $\gamma$, the ratio of the frequency $\omega$ to an inverse tunneling time. Remarkably, a WKB analysis of this ionization problem interpolates smoothly between the tunneling ionization regime where $\gamma\ll 1$, and the multi-photon ionization regime where $\gamma\gg 1$.
Br\'ezin \& Itzykson \cite{brezin}, and Popov \& Marinov \cite{popov} applied a similar approach to the QED vacuum pair production problem, showing that the leading Sauter-Heisenberg-Euler exponential factor becomes [here $g(\gamma)$ is a simple known function] \begin{eqnarray} \exp\left[-\frac{\pi m^2c^3}{e\hbar {\mathcal E}}g(\gamma)\right]\sim \begin{cases} \exp\left[-\frac{\pi m^2c^3}{e\hbar {\mathcal E}}\right]\quad , & \gamma\ll 1 \\[6pt] \left(\frac{e {\mathcal E}}{m\omega}\right)^{4mc^2/\hbar\omega}\quad , & \gamma\gg 1 \end{cases} \end{eqnarray} whose tunneling and multi-photon interpretations are self-evident. Since these fundamental works, there has been much theoretical work understanding this time-dependent pair production computation, using Bogoliubov transformations, quantum Vlasov equations, quantum kinetic theory, and semiclassical methods: for a recent review see \cite{dunne-eli}. The main outstanding question is how to increase the pair production rate by shaping the laser pulse appropriately. Unsolved problems concern understanding in a precise quantitative manner how the resulting predictions for the pair production rate would change when spatial focussing, back-reaction and cascading effects are included. These considerations have become more urgent, as recent work suggests that the peak electric field strength needed to observe some vacuum pair production may be lowered by one or two orders of magnitude \cite{dynamical,dipiazza,bulanov}, and this may be accessible in the not-too-distant future in several large-scale laser facilities \cite{ringwald,tajima,dunne-eli}. This, and related aspects of QED in ultra-strong laser fields, are discussed further in Tom Heinzl's talk at this conference.
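To make the two regimes concrete, one can evaluate the adiabaticity parameter for sample laser parameters. For vacuum pair production the standard combination is $\gamma = mc\omega/(e{\mathcal E})$, the ratio of the laser frequency to an inverse tunneling time; the sketch and the numerical choices below are illustrative examples of ours, not values from the text:

```python
# Adiabaticity (Keldysh) parameter gamma = m*c*omega / (e*E) for pair
# production in an oscillating field E(t) = E*cos(omega*t). SI units.
m, c, e = 9.1093837e-31, 2.99792458e8, 1.602176634e-19

def gamma(E_field, omega):
    return m * c * omega / (e * E_field)

omega_optical = 2.4e15     # ~1.6 eV optical photon [rad/s]
E_strong      = 1.3e17     # about 10% of the critical field [V/m]
E_weak        = 1.3e10     # far below the critical field [V/m]

print(f"gamma at E = 0.1 E_c : {gamma(E_strong, omega_optical):.1e}")  # << 1: tunneling
print(f"gamma at weak field  : {gamma(E_weak, omega_optical):.1e}")    # >> 1: multi-photon
```

For an optical frequency, a near-critical field sits deep in the tunneling regime ($\gamma \sim 10^{-5}$), while the same frequency at a much weaker field is in the multi-photon regime, mirroring the two limits of the expression above.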
\subsection{Photon splitting, vacuum birefringence and axion searches} In 1970 two groups \cite{bial,adler} computed the rate of photon splitting in a strong magnetic field, a process that can be described using the Heisenberg-Euler effective action. The idea is to deduce the polarization tensor, from variation of the effective action, for the propagation of photons in a strong background field. Photon splitting has now been observed experimentally \cite{splitting}. Further important progress came from the work of Tsai and Erber \cite{tsai}. Since the vacuum acts as a nonlinear medium, there can be both vacuum birefringence effects and also dichroism effects. These results have become a paradigm in the growing field of precision tests of QED in strong external fields. The PVLAS [Polarizzazione del Vuoto con Laser] experiment is discussed in detail in Guido Zavattini's talk at this conference. The original PVLAS experiment reported an unexpectedly strong signal, which was not found in a revised experiment. However, this negative result had an unanticipated, but extremely important, impact: it pushed people to study seriously the potential for axion searches using such strong-field experiments. There are now many such experiments [see the review by H. Gies \cite{gies}], and this has become a {\it bona fide} experimental field, complementary to astrophysical and accelerator searches. Some recent theoretical results are discussed in Felix Karbstein's talk at this conference. \subsection{Extensions of Heisenberg-Euler in QFT} The nonabelian version of the Heisenberg-Euler effective action, for covariantly constant background gauge fields, was computed by Brown and Duff \cite{brownduff}, putting it in the language of Coleman-Weinberg effective potentials \cite{cw}. This led to further developments in nonabelian theories, for example for the gluon field in work of Matinyan and Savvidy \cite{matinyan}.
Using the Fock-Schwinger gauge, in which $x^\mu A_\mu=0$, Novikov, Shifman, Vainshtein and Zakharov \cite{novikov} presented a simple systematic procedure to derive the leading terms in the large mass expansion for QCD [essentially the analogue of the Euler-Kockel quartic terms in (\ref{quartic})]. Their approach used the determinant formulation of Schwinger \cite{schwinger2}, and they showed how this relates to the operator product expansion and QCD sum rules. A covariant expansion based on the heat kernel definition of the determinant was developed by Barvinsky and Vilkovisky \cite{barvinsky}, a method directly applicable to both gauge and gravitational theories. This work emphasized the importance of non-local terms in the full effective action. \subsection{Worldline approach to QFT} Feynman's worldline formalism for the effective action has become a powerful new computational tool for quantum field theory, not just at one-loop but also at higher loops. Christian Schubert discusses some aspects of this in his talk. The Bern-Kosower \cite{bernkosower,strassler,schubert} approach gives a new way of doing perturbative scattering amplitude computations, leading to many simplifications and also some surprisingly simple results in maximally supersymmetric theories, including supergravity \cite{bern}. \subsection{Effective actions in gravity and string theory} Quantum field theory in curved space-time was heavily influenced by the work of Heisenberg and Euler, as is seen in the fundamental work of De Witt \cite{dewitt}, Parker \cite{parker}, Zeldovitch \& Starobinsky \cite{starobinsky}, Candelas \cite{candelas}, Davies \cite{davies}, Dowker \cite{dowker} and others [the work of Stuart Dowker is reviewed by Klaus Kirsten in this conference]. Much of this work is based on an adaptation of Schwinger's proper-time formulation of the effective action, from gauge theories to curved space-time.
Indeed, the associated heat kernel expansion is often called the Schwinger-De Witt expansion. The study of particle creation in cosmological models, as developed by Parker \cite{parker} and Zeldovitch \& Starobinsky \cite{starobinsky}, is closely related to the vacuum pair production problem discussed by Heisenberg and Euler. The path-integral formulation of Bekenstein and Parker \cite{parker} starts with a generalization to curved space-time of the proper-time construction of Feynman and Schwinger. Recent progress for gravitational effective actions is discussed at this conference in the talks by Ilya Shapiro and Alexei Starobinsky. A particularly direct application of the Heisenberg-Euler effective action is in the work of Duff and Isham \cite{duffisham}, explaining the connection between self-duality, helicity and supersymmetry \cite{grisaru,kallosh}. This can be seen immediately in the SUSY QED combination of spinor and scalar effective actions which, when expressed in terms of the helicity fields $F_\pm$, gives at the quartic level: \begin{eqnarray} {\mathcal L}_{\rm super}={\mathcal L}_{\rm spinor}+2{\mathcal L}_{\rm scalar}=\frac{\alpha^2}{12m^4}F_+^2\, F_-^2 \quad . \label{super} \end{eqnarray} This clearly vanishes when either of $F_\pm$ vanishes. Remarkably, a similar feature has been seen in the Heisenberg-Euler effective action, even at the two-loop level, for super-Yang-Mills theories \cite{kuzenko}. The effective action formalism, based on the determinantal form, was extended to string theory by Fradkin and Tseytlin \cite{fradkin}, and has become a cornerstone of string theory and gauge-gravity dualities. \subsection{Heisenberg-Euler effective action and zeta functions} In 1977 Hawking published an influential paper \cite{hawking} that introduced a definition of the effective action in terms of the zeta function of the relevant operator [Klein-Gordon, Dirac, ...]. This was based on mathematical work of Seeley.
For an operator $D$ with spectrum $\lambda_n$, we formally define the zeta function as $\zeta(s)=\sum_n \lambda_n^{-s}$, so that \begin{eqnarray} \ln \det D=-\zeta^\prime(0)+\ln (\pi\mu^2/4)\zeta(0)\quad , \label{zeta} \end{eqnarray} where $\mu$ is a renormalization scale. Shortly after, Dittrich \cite{dittrich} showed that the Heisenberg-Euler effective action (\ref{full}) could be computed straightforwardly in this zeta function language, with the relevant zeta function being the Hurwitz zeta function, a generalization of the familiar Riemann zeta function \cite{ww}, essentially because the Landau-level-type spectrum is linear in an integer index. These ideas have been extended in many ways \cite{elizaldebook,klaus}, and a mathematically elegant way to express the results is in terms of the Barnes multiple gamma function \cite{ruijsenaars}. \subsection{Strong field QED: magnetic catalysis \& the chiral magnetic effect} As discussed by Maxim Chernodub at this conference, there has been a great deal of recent work analyzing the effect of strong electromagnetic fields on the strong interactions of QCD. This has been spurred by astrophysical considerations, with ultra-strong magnetic fields known to be present in astrophysical objects such as neutron stars, as well as recent results [both experimental and theoretical] from heavy ion collisions at RHIC. In such collisions, huge magnetic fields are generated by the ions, and these can lead to surprisingly large effects. For example, Kharzeev \cite{dima} has proposed that the observed asymmetry of particle correlations in such collisions \cite{star} may be explained using the ``chiral magnetic effect'', in which charged chiral fermions, associated with QCD vacuum fluctuations, are accelerated apart in a strong magnetic field due to the lowest Landau level projection onto definite spin. 
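As a minimal illustration of how this works [an aside, not part of the original discussion, suppressing the $\mu$-dependent term]: for the spectrum $\lambda_n = n$, $n = 1, 2, \dots$, the zeta function is just the Riemann zeta function, with $\zeta(0) = -\tfrac{1}{2}$ and $\zeta^\prime(0) = -\tfrac{1}{2}\ln 2\pi$, so
\begin{eqnarray}
\ln\det D = -\zeta^\prime(0) = \tfrac{1}{2}\ln 2\pi \quad \Longrightarrow \quad \det D = \sqrt{2\pi} \quad ,
\end{eqnarray}
a finite, zeta-regularized value assigned to the formally divergent product $1\cdot 2\cdot 3\cdots$. In the Heisenberg-Euler case the role of $n$ is played by the Landau-level index, shifted by a constant, which is precisely the type of spectrum whose zeta function is the Hurwitz function $\zeta_H(s,a) = \sum_{n=0}^\infty (n+a)^{-s}$.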
This is technically close to the magnetic catalysis mechanism of Gusynin, Miransky and Shovkovy \cite{miransky}, also based on the strong magnetic field limit of a Heisenberg-Euler-type computation, which makes important predictions for dynamical symmetry breaking in the presence of strong fields, and also for quantum transport and the quantum Hall effect in graphene. \section*{Acknowledgments} I thank the organizers, especially Manuel Asorey and Michael Bordag, for an excellent conference. I also thank Walter Dittrich for his input on matters of both history and physics, and D. H. Delphenich for providing unpublished translations of several relevant papers. I acknowledge support from the US DOE through grant DE-FG02-92ER40716.
\section{Introduction} Over the last two decades, as many as 2,500 articles on space-based gravitational-wave (GW) detection included mentions of LISA (the Laser Interferometer Space Antenna) \cite{lisasciencecase,Jennrich:2009p1398,lisaads}, the space-based GW interferometer planned and developed jointly by NASA and ESA. This collaboration between the two agencies ended in early 2011 for programmatic and budgetary reasons. In fact, LISA, as brought forth by the entirety of those papers, was more than a space project: it was the concept (and the cherished dream) of a space-based GW observatory that would explore the low-frequency GW sky, in a frequency band ($10^{-4}\mbox{--}1$ Hz) populated by millions of sources in the Galaxy and beyond: compact Galactic binaries; coalescing massive black holes (MBHs) throughout the Universe; the captures of stellar remnants into MBHs; and possibly relic radiation from the early Universe. Throughout its evolution, the LISA design remained based on three architectural principles developed and refined since the 1970s: a triangular spacecraft formation with million-kilometer-scale (Mkm) arms, in an Earth-like orbit around the Sun; the continuous monitoring of inter-spacecraft distance oscillations by laser interferometry; and drag-free control of the spacecraft around freely falling test masses, the reference endpoints for the distance measurements, achieved using micro-Newton thrusters. The current incarnation of this concept is eLISA (evolved LISA), a mission under consideration by ESA alone (under the official name of NGO, the New Gravitational-wave Observatory) for launch in 2022 within the Cosmic Vision program. The eLISA design would achieve a great part of the LISA science goals, as presented in \cite{lisasciencecase}, and endorsed by the 2010 U.S.\ astronomy and astrophysics decadal survey \cite{national2010New}.
This article reviews eLISA's science performance (sensitivity, event rates, and parameter estimation), as scoped out by these authors in the spring and summer of 2011, and as discussed in full in Ref.\ \cite{2012arXiv1201.3621A}. This article is organized as follows: in Sec.\ \ref{sec:elisa} we provide a very brief overview of eLISA and its GW sensitivity, while later sections are organized by science topics. In Sec.\ \ref{sec:compactbinaries}, we discuss the astrophysics of compact stellar-mass binaries in the Galaxy; in Sec.\ \ref{sec:massiveblackholes}, the origin and evolution of the massive BHs found at the center of galaxies, as studied through their coalescence GWs; in Sec.\ \ref{sec:emris}, the dynamics and populations of galactic nuclei, as probed through the captures of stellar-mass objects into massive BHs; in Sec.\ \ref{sec:gravity}, the fundamental theory of gravitation, including its behavior in the strong nonlinear regime, its possible deviations from general-relativistic predictions, and the nature of BHs; in Sec.\ \ref{sec:cosmo}, the (potentially new) physics of the early Universe, and the measurement of cosmological parameters with GW events. Last, in Sec.\ \ref{sec:conclusions} we draw our conclusions, and express a wish. \section{The eLISA mission and sensitivity} \label{sec:elisa} We refer the reader to \cite{2012arXiv1201.3621A} for a detailed description of the eLISA architecture. eLISA has a clear LISA heritage, with a few substantial differences. The eLISA arms will be shorter (1 Mkm), simplifying the tracking of distant spacecraft, alleviating requirements on lasers and optics, and reducing the mass of the propellant needed to reach the final spacecraft orbits. The orbits themselves may be slowly drifting away from Earth, again saving propellant, and the nominal mission duration will be two years, extendable to five. 
As much existing hardware as possible, including the spacecraft bus, will be incorporated from the LISA Pathfinder mission, scheduled for launch by ESA in 2014. The three spacecraft will consist of one ``mother'' and two simpler ``daughters,'' with interferometric measurements along only two arms, for cost and weight savings that make launch possible with smaller rockets than LISA. (Note that LISA was to be built with laser links along the three arms, but it was not a requirement that they would operate throughout the mission.) The eLISA power-spectral-density requirement for the residual test-mass acceleration is $S_\mathrm{acc}(f) = 2.13 \times 10^{-29} \, (1 + 10^{-4}\,{\rm Hz} / f) \, {\rm m}^2\,{\rm s}^{-4}\,{\rm Hz}^{-1}$, while the position-noise requirement breaks up into $S_\mathrm{sn}(f) = 5.25 \times 10^{-23} \; {\rm m}^2\,{\rm Hz}^{-1}$ for shot noise, and $S_\mathrm{omn}(f) = 6.28 \times 10^{-23} \; {\rm m}^2\,{\rm Hz}^{-1}$ for all other measurement noises. With these requirements, eLISA achieves the equivalent-strain noise plotted in Fig.\ \ref{fig:sensitivity}, and approximated analytically by \begin{equation} S(f) = \frac{20}{3} \, \frac{4 \, S_{\rm acc}(f) / (2 \pi f)^4 + S_{\rm sn}(f) + S_{\rm omn}(f)}{L^2} \times \left[ 1 + \Bigl( \frac{f}{0.41 \, c / (2 L)} \Bigr)^2 \right], \label{eq:sens} \end{equation} where $L = 1$ Mkm, $c$ is the speed of light, and $S(f)$ has already been normalized to account for the sky-averaged eLISA response to GWs. At the frequency of best sensitivity ($\sim 12$ mHz), the eLISA noise would yield SNR = 1 for a constant-amplitude, monochromatic source of strain $3.6 \times 10^{-24}$ in a two-year measurement. The requirement on the useful measurement band is $10^{-4}$ Hz to 1 Hz, with a goal of $3 \times 10^{-5}$ Hz to 1 Hz.
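The noise model is straightforward to evaluate numerically. The following sketch (ours, not from the eLISA documents; the high-frequency factor is read as $1+(f/f_*)^2$, which reproduces the quoted sensitivity number) recovers the SNR = 1 strain at 12 mHz:

```python
import math

C = 299792458.0   # speed of light [m/s]
L = 1.0e9         # eLISA arm length: 1 Mkm [m]

def S_elisa(f):
    """Sky-averaged equivalent-strain noise PSD [1/Hz], built from the
    stated acceleration, shot-noise and other-measurement-noise budgets."""
    S_acc = 2.13e-29 * (1.0 + 1.0e-4 / f)   # m^2 s^-4 Hz^-1
    S_sn  = 5.25e-23                        # m^2 Hz^-1
    S_omn = 6.28e-23                        # m^2 Hz^-1
    f_star = 0.41 * C / (2.0 * L)           # ~61 mHz arm-transfer knee
    return (20.0 / 3.0) * (4.0 * S_acc / (2.0*math.pi*f)**4 + S_sn + S_omn) \
           / L**2 * (1.0 + (f / f_star)**2)

# Strain giving SNR = 1 for a monochromatic source over T = 2 yr,
# using SNR^2 = h^2 T / S(f)
T = 2.0 * 365.25 * 86400.0
h_min = math.sqrt(S_elisa(0.012) / T)   # comes out near the quoted 3.6e-24
```

Below a few mHz the acceleration term dominates and the curve rises steeply, which is why the Galactic-binary foreground discussed next matters most at low frequencies.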
\begin{figure} \flushright \includegraphics[width=0.7\textwidth]{sensitivity.pdf} \caption{eLISA equivalent-strain noise, averaged over source sky location and polarization, as a function of frequency. The solid red curve was obtained with the LISACode 2.0 simulator \citep{petiteau:2008PhRvD..77b3002P}, while the dashed blue curve is plotted from Eq.\ \eqref{eq:sens}. For comparison, the dotted green curve shows the LISA sensitivity.} \label{fig:sensitivity} \end{figure} \section{Compact binaries in the Galaxy} \label{sec:compactbinaries} (See \cite{2009CQGra..26i4030N,marsh:2011CQGra..28i4019M} for deeper reviews.) The most numerous sources in the low-frequency GW sky observed by eLISA will be short-period binaries of two compact objects such as white dwarfs (WDs) or neutron stars (NSs). These systems have weak GW emission relative to the much heavier massive-BH binaries, but are numerous in the Galaxy and even in the Solar neighborhood. To date, astronomers have observed about 50 ultra-compact binaries with periods shorter than one hour, comprising both detached systems and interacting binaries where mass is being transferred from one star to the other. Wide-field and synoptic surveys such as SDSS and PTF (and in the future, PanSTARRS, EGAPS, and LSST) will continue to enlarge this sample \citep{rau:2010ApJ...708..456R,levitan:2011ApJ...739...68L}. Interacting ultra-compact binaries with NS accretors are found by all-sky X-ray monitors and in dedicated surveys \citep{jonker:2011ApJS..194...18J}. A large subset of known systems will be guaranteed \textbf{verification sources} for eLISA \citep{stroeer:2006:lvb}; their well-modeled GW signals will be detected within the first few weeks to months of operation, verifying instrument performance. 
The most promising verification binaries are the shortest-known-period interacting systems HM Cnc ($P = 5.4$ min \citep{roelofs:2010:se}), V407 Vul ($P = 9.5$ min \citep{2006ApJ...649..382S}), and ES Cet \citep{2011MNRAS.413.3068C}, and the recently discovered detached system SDSS J0651+28 ($P = 12$ min \citep{brown:2011ApJ...737L..23B}). \begin{figure} \flushright \includegraphics[width=0.8\textwidth]{binaries} \caption{\textbf{Main figure}: power spectral density of the stochastic GW foreground from Galactic binaries, \emph{before} (blue) and \emph{after} (red) the subtraction of individually resolvable systems, which are plotted as green and red/blue dots (for detached and mass-transferring systems). A few known verification binaries are shown as white dots. The solid/dashed black curves trace instrument noise alone/with confusion noise. Spectra are shown for the observable ``$X$'' of Time Delay Interferometry (see, e.g., \cite{PhysRevD.72.042003}); subtraction is simulated for a two-year observation and threshold $\mathrm{SNR} = 7$; resolvable systems are placed a factor $\mathrm{SNR}^2$ above the combined instrument and confusion noise. \textbf{Inset}: time series of the residual foreground, which carries information about the number and distribution of binaries in the Galaxy.\label{fig:binaries}} \vspace{-8pt} \end{figure} eLISA will individually detect and determine the periods of \textbf{several thousand currently unknown compact binaries} (in our estimate, 3,500--4,100 systems for a two-year observation; \cite{2012arXiv1201.3621A,2012arXiv1201.4613N}), while the combined signals of tens of millions of unresolvable systems will form a \textbf{stochastic GW foreground} at frequencies below a few mHz (\cite{yu:2010:gw,ruiter:2010ApJ...717.1006R}; see Fig.\ \ref{fig:binaries}.)
About 500 close or high-frequency ($> 10$ mHz) sources will be seen with large SNRs, allowing the determination of sky position to better than 10 $\mathrm{deg}^2$, of frequency derivative to 10\%, of inclination to 10 deg, and of distance to 10\%. This large sample will allow a detailed study of the Galactic population, which is poorly constrained by EM observations and theoretical predictions \citep{roelofs:2007:sdss}. Detections will be dominated by \textbf{double WD binaries} with the shortest periods (5--10 minutes). Their mergers are candidate progenitors for many interesting systems: type Ia \citep{pakmor:2010:sltIa} and peculiar supernovae \citep{perets:2010Natur.465..322P,waldman:2011ApJ...738...21W}; single subdwarf O and B stars, R Coronae Borealis stars and maybe all massive WDs \citep{webbink:1984:dwd}; and possibly the rapidly spinning NSs observed as millisecond radio pulsars and magnetars \cite{levan:2006:grb}. These binaries are short-lived, very faint electromagnetically, and scarce (a few thousand in the whole Galaxy), so GWs will provide a unique window on their physics. eLISA will determine their merger rate, constrain their formation, and illuminate the preceding phases of binary evolution, most notably the common-envelope phase. \textbf{Common-envelope evolution} is crucial to most binary systems that produce high-energy phenomena such as $\gamma$-ray bursts and X-ray emission, but our understanding of its physics and outcome is limited \citep{{taam:2000:cee,taam:2010NewAR..54...65T}} and challenged by observations \citep{nelemans:2005MNRAS.356..753N,demarco:2011MNRAS.411.2277D}. The standard scenario is as follows. Most stars in the Universe are in binaries, and roughly half of binaries are formed at close enough separations that the stars will interact as they evolve into giants or supergiants.
Following runaway mass transfer, the companion of the giant can end up inside the outer layers (the envelope) of the giant; dynamical friction reduces the velocity of the companion, shrinking the orbit and transferring angular momentum and energy into the envelope; the envelope eventually becomes unbound, leading to a very compact binary consisting of the core of the giant and the original companion \citep{paczynski:1976:secbs}. eLISA will also test dynamical interactions in \textbf{globular clusters}, which produce an overabundance of ultra-compact X-ray binaries consisting of a NS accreting material from a WD companion. The eLISA angular resolution will be sufficient to distinguish WD binaries in clusters, verifying whether they are also plentiful. The eLISA measurements of individual short-period binaries will provide a wealth of information on the physics of tidal interactions and the stability of mass transfer. For detached systems with little or no interaction, the evolution of the GW signal is dominated by gravitational radiation: \begin{equation} \label{eq:fders} h \propto {\cal M}^{5/3} f^{2/3} D^{-1}, \quad \dot{f} \propto {\cal M}^{5/3} f^{11/3}, \quad \ddot{f} = \frac{11}{3} \frac{\dot{f}^2}{f}, \end{equation} where $h$ is the GW strain, $f$ the GW frequency, $\mathcal{M} = (m_1 m_2)^{3/5}/(m_1+m_2)^{1/5}$ is the chirp mass with $m_1$, $m_2$ the individual masses, and $D$ is the distance. Thus, measuring $h$, $f$, and $\dot{f}$ (which will be possible in 25\% of systems) provides $\mathcal{M}$ and $D$; measuring also $\ddot{f}$ (which may be possible for a few high-SNR systems) tests secular effects from tidal and mass-transfer interactions. Short-term variations are not likely to prevent detection \citep{stroeer:2009:stv}, and the precision of $\dot{f}$ and $\ddot{f}$ determination increases with the duration of the mission.
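The inversion just described can be made concrete with the standard leading-order quadrupole expressions (our sketch; the fiducial chirp mass, frequency, and distance below are illustrative values for a detached double WD, not taken from the text):

```python
import math

G      = 6.674e-11        # gravitational constant [SI]
C      = 299792458.0      # speed of light [m/s]
M_SUN  = 1.989e30         # solar mass [kg]
PARSEC = 3.0857e16        # parsec [m]

def fdot_and_h(Mc, f, D):
    """Leading-order GW frequency drift and strain amplitude
    for a circular binary of chirp mass Mc at distance D."""
    fdot = (96.0/5.0) * math.pi**(8.0/3.0) * (G*Mc/C**3)**(5.0/3.0) * f**(11.0/3.0)
    h    = (4.0/D) * (G*Mc/C**2)**(5.0/3.0) * (math.pi*f/C)**(2.0/3.0)
    return fdot, h

# Forward model with illustrative values
Mc, f, D = 0.32*M_SUN, 3.0e-3, 1000.0*PARSEC
fdot, h = fdot_and_h(Mc, f, D)

# Inversion: a measured (h, f, fdot) returns the chirp mass and distance
Mc_est = (C**3/G) * ((5.0/96.0) * math.pi**(-8.0/3.0) * fdot * f**(-11.0/3.0))**(3.0/5.0)
D_est  = (4.0/h) * (G*Mc_est/C**2)**(5.0/3.0) * (math.pi*f/C)**(2.0/3.0)

# The second derivative implied by the power-law fdot(f) is fixed by fdot alone
fddot = (11.0/3.0) * fdot**2 / f
```

A measured $\ddot f$ that deviates from this purely radiative value is exactly the signature of the tidal and mass-transfer effects mentioned above.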
\textbf{Tidal interactions} are possible when at least one binary component does not corotate with the orbital motion, or when the orbit is eccentric. Their strength is unknown \citep{marsh:2004:mtwd}, with important consequences for the tidal heating (and possibly the optical observability) of WD binaries, as well as for the stability of \textbf{mass transfer}. This process begins after gravitational radiation shrinks detached binaries to sufficiently close orbits (with $P \sim$ a few minutes) that one of the stars fills its Roche lobe and its material can leak to the companion. Mass transfer can be self-limiting, stable, or unstable, depending on the resulting evolution of the orbit and of the donor radius. Unstable transfer leads to mergers; stable systems (the interacting WD binaries known as AM CVn systems, as well as ultra-compact X-ray binaries) will be observed -- and counted -- by eLISA in the early stages of mass transfer \citep{marsh:2011CQGra..28i4019M}. Efficient tidal coupling can return angular momentum from the accreted material to the orbit \citep{marsh:2004:mtwd,dsouza:2006:dmt,racine:2007:ndts}, slowing the inspiral and increasing the fraction of WD binaries that survive the onset of mass transfer from 0.2\% to 20\% \citep{nelemans:2001:ps2}. The \textbf{unresolved foreground} from Galactic binaries will provide an additional noise component for the detection of loud broadband signals (see the dashed line in Fig.\ \ref{fig:binaries}), but it also contains precious astrophysical information. Its overall level measures the total number of binaries (mostly double WDs); its spectral shape characterizes their history and evolution; and its yearly modulation \cite{edlund:2005:wdw}, together with the distance determinations from many individual systems, constrains the distribution of sources in the different Galactic components.
Thus eLISA will probe dynamical effects in the Galactic center, which may increase the number of tight binaries \citep{alexander:2005:spmbh}; it will measure the poorly known scale height of the disk; and it will sample the population of the halo \citep{ruiter:2009:hwdb,yu:2010:gw}, which hosts two anomalous AM CVn systems and which may have a rather different compact-binary population than the rest of the Galaxy. Furthermore, the eLISA measurements of orbital inclinations for individual binaries, compared with the overall angular momentum of the Galaxy, will provide hints on the formation of binaries from interstellar clouds. eLISA will also constrain the formation rate and numbers of \textbf{NS binaries} and \textbf{ultra-compact stellar-mass BH binaries}, throughout the Galaxy and without EM selection effects. These numbers are highly uncertain, but as many as several tens of systems may be detectable by eLISA \citep{nelemans:2001:ps2,belczynski:2010:dco}, complementing the ground-based GW observations of these same systems in other galaxies (and at much shorter periods). More generally, the astrophysical populations and parameters probed by eLISA will be different from, and complementary to, what can be deduced from EM observations. For instance, eLISA will be sensitive to binaries at the Galactic center and throughout the Galaxy, while Gaia \cite{2001A&A...369..339P} will be limited to the Solar neighborhood; GWs encode distances and orbital inclinations, while EM emission is sensitive to surface processes. Dedicated observing programs and public data releases will allow simultaneous and follow-up EM observations of binaries identified by eLISA. \section{Massive black-hole binaries} \label{sec:massiveblackholes} (See \cite{2012arXiv1201.3621A} for a much deeper review.) 
According to the accretion paradigm \citep{Salpeter:1964,zeldovich:1964:rgw,krolik:1999:agn}, supermassive BHs of $10^6\mbox{--}10^9 \, M_\odot$ power quasars---active galactic nuclei so luminous that they often outshine their galaxy host, which are detected over the entire cosmic time accessible to our telescopes. \emph{Quiet} supermassive BHs are ubiquitous in our low-redshift Universe, where they are observed to have masses closely correlated with key properties of their galactic host (see \citep{gultekin:2009:ms}, and refs.\ therein) leading to the notion that galaxies and their nuclear MBHs form and evolve in symbiosis (see, e.g., \cite{dimatteo:2005:eiq,hopkins:2006:umm,croton:2006MNRAS.365...11C}). In the currently favored cosmological paradigm, regions of higher-density cold dark matter in the early Universe form self-gravitating halos, which grow through mergers with other halos and accretion of surrounding matter; baryons and MBHs are thought to follow a similar bottom-up \emph{hierarchical clustering} process \citep{white:1978MNRAS.183..341W,haiman:1998ApJ...503..505H,haehnelt:1998MNRAS.300..817H,wyithe:2002ApJ...581..886W,volonteri:2003:amh}. MBHs may be born as \emph{small seeds} ($10^2 \mbox{--} 10^3 \, M_\odot$) from the core collapse of the first generation of ``Pop III'' stars formed from gas clouds in light halos at $z \sim 15\mbox{--}20$ \citep{madau:2001:mbhIII,volonteri:2003:amh}; or as \emph{large seeds} ($10^3 \mbox{--} 10^5 \, M_\odot$) from the collapse of very massive quasi-stars formed in much heavier halos at $z \sim 10\mbox{--}15$ \citep{haehnelt:1993MNRAS.263..168H,loeb:1994:cbg}; or by runaway collisions in star clusters \citep{devecchi:2009ApJ...694..302D}; or again by direct gas collapse in mergers \cite{mayer:2010Natur.466.1082M} (See \citep{volonteri:2010A&ARv..18..279V,2012AdAst2012E..12S} and refs.\ therein). 
The seeds then evolve over cosmic time through intermittent, copious accretion and through mergers with other MBHs after the merger of their galaxies. The cosmic X-ray background from active MBHs at $z < 3$ suggests that radiatively efficient accretion played a large part in building up MBH mass \citep{marconi:2004:lsmbh,yu:2002:oc,soltan:1982:moq}, so information about the initial mass distribution is not readily accessible in the local Universe. By contrast, eLISA will measure the masses of the original seeds from their merger events. Furthermore, it is unknown \cite{volonteri:2007:bhs} whether accretion proceeds \emph{coherently} from a geometrically thin, corotating disk \citep{shakura:1973A&A....24..337S} (which can spin MBHs up to the $J/M^2 = 0.93\mbox{--}0.99$ limit imposed by basic physics \cite{thorne:1974:dabh,gammie:2004:bhse}) or \emph{chaotically} from randomly oriented episodes \citep{king:2006MNRAS.373L..90K} (which typically result in smaller spins). eLISA's accurate measurements of MBH spins will provide evidence for either mechanism \cite{berti:2008:cbhse}. After a galactic merger, the central MBHs spiral inward, together with their bulge or disc, under the action of dynamical friction, and \emph{pair} as a pc-scale Keplerian binary \citep{begelman:1980Natur.287..307B,chandrasekhar:1943:dfI,ostriker:1999ApJ...513..252O,colpi:1999ApJ...525..720C,mayer:2007Sci...316.1874M}; MBH binaries are then thought to \emph{harden} into gravitational-radiation--dominated systems by ejecting nearby stars (assuming a sufficient supply) \citep{quinlan:1996:dembhb,khan:2011ApJ...732...89K,preto:2011ApJ...732L..26P} or by gas torques and flows in gas-rich environments \citep{escala:2004:rgm,dotti:2007:smbh,cuadra:2009MNRAS.393.1423C}; the final binary \emph{coalescence} is the most luminous event in the Universe (albeit in GWs). 
BH mergers have been explored only recently by numerical relativity \citep{2011arXiv1107.2819S}, showing how the mass and spin of the final BH remnant arise from those of the binary components, and predicting remarkable physical phenomena such as large remnant recoils for peculiar spin configurations \citep{2011PhRvL.107w1102L}. The predicted coalescence rate in the eLISA frequency band ranges from a handful up to a few hundred events per year, depending on theoretical assumptions \citep{haehnelt:1994:lfg,wyithe:2003:lfg,sesana:2004:lfg,enoki:2004:gws,sesana:2005:gws,rhook:2005:rer,koushiappas:2006:tms,sesana:2007:imprint}. eLISA will be sensitive to GW signals from all three phases of MBH coalescence (inspiral, merger, and ringdown \citep{flanagan:1998:mgwa}). To assess the eLISA science performance in this area, after experimenting with different waveform families, we modeled these signals with the ``PhenomC'' phenomenological waveforms \citep{santamaria:2010PhRvD..82f4016S}, which stitch together post-Newtonian (PN) inspiral waves \citep{blanchet:2006:grp} with frequency-domain fits to numerically modeled late-inspiral and ringdown waves. \begin{figure} \includegraphics[width=\textwidth]{MBH-SNR} \caption{\textbf{Left}: constant-level contours of sky- and polarization-averaged SNR for equal-mass non-spinning binaries as a function of total rest mass $M_\mathrm{tot}$ and cosmological redshift $z$. The SNR includes inspiral, merger and ringdown. \textbf{Right}: SNR contours as a function of $M_\mathrm{tot}$ and mass ratio $q = m_1/m_2$.\label{fig:mbhSNR}} \vspace{-8pt} \end{figure} The first metric of performance is the \textbf{detection SNR}, angle-averaged over sky position and source orientation, which is plotted in Fig.\ \ref{fig:mbhSNR} as a function of total rest mass and cosmological redshift (left panel) and as a function of total rest mass and mass ratio for binaries at $z = 4$ (right panel).
eLISA covers almost all the mass--redshift parameter space of MBH astrophysics: any equal-mass binary with $M_\mathrm{tot} = 10^4\mbox{--}10^7 \, M_\odot$ (the crucial ``middleweight'' range inaccessible to EM observations beyond the local Universe) can be detected (with $\mathrm{SNR} > 10$) out to the highest redshifts, while equal-mass binaries with $M_\mathrm{tot} > 10^5 \, M_\odot$ are seen in detail as strong signals ($\mathrm{SNR} > 100$) out to $z = 5$. Binaries with $M_\mathrm{tot} > 10^5 \, M_\odot$ and mass ratios $\lesssim 10$ are seen with $\mathrm{SNR} > 20$ out to $z = 4$. To evaluate expected SNRs in the context of \textbf{realistic MBH populations}, we consider four fiducial scenarios (\textbf{SE}, \textbf{LE}, \textbf{SC}, \textbf{LC}) where MBHs originally form from \textbf{S}mall ($\sim 100 \, M_\odot$) or \textbf{L}arge seeds ($\sim 10^5 \, M_\odot$), and where they subsequently grow by \textbf{E}xtended or \textbf{C}haotic accretion. (See \citep{arun:2009:petf} for details; here we enhance that analysis by including random spin--orbit misalignments up to 20 deg in \textbf{E} models \citep{dotti:2010MNRAS.402..682D}). For each scenario we generate multiple catalogs of merger events, and join them in equal proportions into a single metacatalog. Figure \ref{fig:mbhSNRz} shows the resulting distribution of SNR with $z$: eLISA will detect sources with $\mathrm{SNR} \gtrsim 10$ out to $z \lesssim 10$, a limit imposed by the masses of the expected binary population as a function of $z$. \begin{figure} \flushright \includegraphics[width=\textwidth]{MBH-SNRz} \caption{\textbf{Left}: distribution of expected SNR for MBH mergers as a function of $z$, computed from the \textbf{SE}/\textbf{LE}/\textbf{SC}/\textbf{LC} metacatalog (see main text).
\textbf{Right}: likelihood for the mixing fraction $\mathcal{F}$, for an individual realization of the mixed model $\mathcal{F}\, \mathbf{SE}+(1-\mathcal{F})\mathbf{LE}$ with $\mathcal{F}=0.45$ (see main text).\label{fig:mbhSNRz}} \end{figure} \begin{figure} \flushright \includegraphics[width=0.8\textwidth]{MBH-parest} \caption{Parameter-estimation accuracy (relative frequency of fractional or ab\-so\-lute errors over \textbf{SE}/\textbf{LE}/\textbf{SC}/\textbf{LC} metacatalog) for primary and secondary \emph{redshifted} MBH masses and dimensionless spins ($m_1$ and $m_2$, $a_1/m_1$ and $a_2/m_2$, respectively), luminosity distance $D_L$ and sky position $\Delta\Omega$.\label{fig:mbhparest}} \vspace{-8pt} \end{figure} For the same metacatalog, Fig.\ \ref{fig:mbhparest} shows the \textbf{expected accuracy of parameter determination}, estimated using a Fisher-matrix approach based on PN inspiral waveforms with spin-induced precession, augmented with PhenomC merger--ringdown waveforms to account for the final ``hang up'' behavior driven by the spin components aligned with the orbital angular momentum. eLISA can determine the \emph{redshifted} component masses ($m_\mathrm{redshift} = (1+z) \, m_\mathrm{rest}$) to $0.1\mbox{--}1\%$; the primary-MBH spins to 0.01--0.1; and the secondary-MBH spins to 0.1 in a fraction of systems. (Compare with EM MBH-mass uncertainties of $\sim 15\mbox{--}200$\%, except for the Milky Way MBH, and with very large MBH-spin uncertainties from K$\alpha$ iron-line fits \cite{McClintock:2011zq}.) The errors in $D_L$ have a wider spread, from a few percent to virtual non-determination, while the sky position $\Omega$ is typically determined to 10--1000 $\mathrm{deg}^2$. Compared to previously published estimates for LISA, the accuracy in determining both $D_L$ and $\Omega$ is reduced for eLISA by having interferometric measurements only along two arms (although three arms were always a goal, not a requirement, for LISA). 
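The redshifted-mass relation quoted above, $m_\mathrm{redshift} = (1+z)\,m_\mathrm{rest}$, is trivial to invert once $z$ is known (for example, from an EM counterpart); a one-line sketch with purely illustrative numbers:

```python
def rest_mass(m_redshifted, z):
    """Invert m_redshift = (1 + z) * m_rest for the source-frame mass."""
    return m_redshifted / (1.0 + z)

# Hypothetical example: redshifted masses measured for a binary at z = 4
m1 = rest_mass(5e6, 4.0)   # 1e6 Msun in the source frame
m2 = rest_mass(1e6, 4.0)   # 2e5 Msun in the source frame
```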
The next order of analysis is to combine multiple MBH-coalescence observations, resulting in a catalog of binary/remnant parameters, into a single \textbf{inference about the mechanisms of MBH formation and evolution} throughout cosmic history. This problem was analyzed extensively by Sesana and colleagues \citep{sesana:2011:rmbh} in the context of LISA. We repeated their analysis for eLISA by generating 1,000 catalogs of detected mergers (over two years) for each of the four \textbf{SE}/\textbf{LE}/\textbf{SC}/\textbf{LC} scenarios, and comparing the relative likelihood $p(A\,\mathrm{vs.}\,B) = p(A|C) / [p(A|C) + p(B|C)]$ for each pair of scenarios $(A,B)$, for $C = A$ or $B$. We considered only detections with $\mathrm{SNR} > 8$, and used spinless, restricted PN waveforms. Table \ref{tab:odds} shows our results for a relative likelihood threshold of 0.95: for instance, the first row on the left shows that if \textbf{SE} is true, it \emph{could be discriminated} from \textbf{LE} and \textbf{LC} in 99\% of realizations, but from \textbf{SC} only in 48\% of realizations; the last row on the left shows that \textbf{LC} \emph{could not be ruled out} in 2\% of realizations when \textbf{SE} or \textbf{SC} are true, but in 22\% of realizations when \textbf{LE} is true. This degeneracy between accretion mechanisms is an artifact of the spinless assumption; including information about the spin of the final merged MBH, which can be measured in 30\% of detections by way of quasinormal-mode ``spectroscopy'' \cite{berti:2006:gws}, provides essentially perfect discrimination. \begin{table} \caption{Model discrimination with eLISA MBH-binary observations. The upper-right half of each table shows the fraction of realizations in which the \emph{row} model would be chosen over the \emph{column} model with a likelihood threshold $> 0.95$, when the \emph{row} model is true. 
The lower-left half of each table shows the fraction of realizations in which the \emph{row} model cannot be ruled out against the \emph{column} model when the \emph{column} model is true. In the left table we consider only the measured masses and redshift for observed events; in the right table we also include the observed distribution of remnant spins.\label{tab:odds}} \small \flushright \begin{tabular}{lccccclcccc} \hline \hline &\multicolumn{4}{c}{without spins}&& &\multicolumn{4}{c}{with spins}\\ & \textbf{SE} & \textbf{SC} & \textbf{LE} & \textbf{LC} & & & \textbf{SE} & \textbf{SC} & \textbf{LE} & \textbf{LC} \\ \hline \textbf{SE} & $\times$ & 0.48 & 0.99 & 0.99 && \textbf{SE} & $\times$ & 0.96 & 0.99 & 0.99 \\ \textbf{SC} & 0.53 & $\times$ & 1.00 & 1.00 && \textbf{SC} & 0.13 & $\times$ & 1.00 & 1.00 \\ \textbf{LE} & 0.01 & 0.01 & $\times$ & 0.79 && \textbf{LE} & 0.01 & 0.01 & $\times$ & 0.97 \\ \textbf{LC} & 0.02 & 0.02 & 0.22 & $\times$ && \textbf{LC} & 0.02 & 0.02 & 0.06 & $\times$ \\ \hline \hline \end{tabular} \vspace{-8pt} \end{table} Last, because no theoretical model will exactly capture the ``true'' formation and evolution history of MBHs, we investigated eLISA's ability to measure the \emph{mixing fraction} $0 < \mathcal{F} < 1$ in a \textbf{mixture model} $\mathcal{F} A + (1-\mathcal{F}) B$ that produces coalescence events with probability $\mathcal{F}$ from scenario $A$, and $1 - \mathcal{F}$ from $B$. For instance, for the case $\mathcal{F} \, \mathbf{SE} + (1-\mathcal{F}) \mathbf{LE}$ with $\mathcal{F} = 0.45$, $\mathcal{F}$ can be measured with an uncertainty of 0.1 (see right panel of Fig.\ \ref{fig:mbhSNRz}). Although highly idealized, this example shows the potential of eLISA's observations to constrain the astrophysics of MBHs throughout their entire cosmic history, in mass and redshift ranges inaccessible to EM astronomy. 
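The mixture-model analysis amounts to maximizing the likelihood $\mathcal{L}(\mathcal{F}) = \prod_i [\mathcal{F}\,p_A(d_i) + (1-\mathcal{F})\,p_B(d_i)]$ over the mixing fraction. A self-contained toy sketch, with two hypothetical one-dimensional Gaussian event densities standing in for the actual SE and LE population models:

```python
import numpy as np

rng = np.random.default_rng(0)

def loglike(F, pA, pB):
    """Log-likelihood of mixing fraction F, given per-event densities
    pA[i], pB[i] under the two pure scenarios."""
    return np.sum(np.log(F * pA + (1.0 - F) * pB))

# Toy catalog: scenario-A events cluster at low "mass", B at high "mass"
# (stand-ins for the real population densities; purely illustrative)
n, true_F = 400, 0.45
from_A = rng.random(n) < true_F
x = np.where(from_A, rng.normal(0.0, 1.0, n), rng.normal(3.0, 1.0, n))

def gauss(x, mu):
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2.0 * np.pi)

pA, pB = gauss(x, 0.0), gauss(x, 3.0)
grid = np.linspace(0.01, 0.99, 99)
F_hat = grid[np.argmax([loglike(F, pA, pB) for F in grid])]  # MLE near 0.45
```

The width of the likelihood peak around `F_hat` plays the role of the $\sim 0.1$ uncertainty quoted in the text.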
In closing this section, we note that eLISA may also detect coalescences of BHs with masses of $10^2\mbox{--}10^4 \, M_\odot$ (intermediate-mass BHs, or IMBHs). These events do not result from hierarchical galaxy mergers, but they occur locally under the extreme conditions of star clusters. IMBHs may form in young clusters by way of mass segregation followed by runaway mergers \citep{portegieszwart:2000ApJ...528L..17P,guerkan:2004ApJ...604..632G,portegieszwart:2004Natur.428..724P,freitag:2006JPhCS..54..252F,freitag:2006MNRAS.368..121F}; IMBH binaries may form \emph{in situ} \citep{guerkan:2006ApJ...640L..39G}, or after the collision of two clusters \citep{amaro-seoane:2006ApJ...653L..53A,amaro-seoane:2010:mbhb}. Although the evidence for IMBHs is tentative \cite{miller:2004:imbh,miller:2009CQGra..26i4031M}, eLISA may observe as many as a few coalescences per year \citep{amaro-seoane:2006ApJ...653L..53A} out to a few Gpc \citep{santamaria:2010PhRvD..82f4016S}; it may also detect stellar-mass BHs plunging into IMBHs in the local Universe \citep{konstatantinidis:2011arXiv1108.5175K}. \section{Extreme-mass-ratio inspirals and the astrophysics of dense stellar systems} \label{sec:emris} There is of course one galactic nucleus, our own, that can be studied and imaged in great detail \citep{schoedel:2003:sdca,ghez:2003:fmsl,eisenhauer:2005:sinfoni,ghez:2005:sogc,ghez:2008:mdp,gillessen:2009:mso}. The central few parsecs of the Milky Way host a dense, luminous star cluster centered around the extremely compact radio source SgrA$^*$. 
The increase in stellar velocities toward SgrA$^*$ indicates the presence of a $(4 \pm 0.4) \times 10^{6} \, M_\odot$ central dark mass \citep{gillessen:2009:mso}, while the highly eccentric, low-periapsis orbit of the young star S2 requires a central-mass density $> 10^{13} \, M_\odot \, \mathrm{pc}^{-3}$ \citep{maoz:1998:dc}; a density $> 10^{13} \, M_\odot \, \mathrm{pc}^{-3}$ is also inferred from the compactness of the radio source \citep{genzel:2010RvMP...82.3121G}. These limits provide compelling evidence that the dark point-mass at SgrA$^*$ is an MBH \citep{maoz:1998:dc,genzel:2000MNRAS.317..348G,genzel:2006Natur.442..786G}. Unfortunately, the nearest large external galaxy is 100 times farther from Earth than SgrA$^*$, and the nearest quasar is 100,000 times farther, so probing other galactic centers in comparable detail is prohibitive. It will, however, become possible with eLISA. This is because MBHs are surrounded by a variety of stellar populations, including compact stellar remnants (stellar BHs, NSs, and WDs) that can reach very relativistic orbits around the MBH without being tidally disrupted \cite{amaro-soane:2007:tr}. The compact stars may plunge directly into the event horizon of the MBH; or they may spiral in gradually while emitting GWs. These latter systems, known as \emph{extreme-mass-ratio inspirals} (EMRIs), will produce signals detectable by eLISA for MBH masses of $10^4\mbox{--}10^7\, M_\odot$. Stellar-mass BHs should be concentrated in cusps near MBHs \citep{sigurdsson:1997:csm,miralda-escoude:2000:bhgc,freitag:2006JPhCS..54..252F,freitag:2006:srg,hopman:2006:rrn} and generate stronger GWs thanks to their relatively larger mass, so they will provide most detections. 
EMRIs are produced when compact stars in the inner 0.01 pc of galactic nuclei are repeatedly scattered by other stars into highly eccentric orbits where gravitational radiation takes over their evolution \cite{amaro-soane:2007:tr}; \emph{resonant relaxation} caused by long-term torques between orbits increases the rate of orbit diffusion \citep{hopman:2006:ems,guerkan:2007:rr}, although relativistic precession can hinder this mechanism \citep{merritt:2011PhRvD..84d4024M}. EMRIs can also be produced by the tidal disruption of binaries that pass close to the MBH \citep{miller:2005:bes}, possibly ejecting the hypervelocity stars observed in our Galaxy (see, e.g., \cite{brown:2009:asd}); and by massive-star formation and rapid evolution in the MBH's accretion disk \citep{levin:2007:ssmbh}. Different mechanisms will lead to different EMRI eccentricities and inclinations, evident in the GW signal \citep{miller:2005:bes}. The detection of even a few EMRIs will provide a completely new probe of dense stellar systems, characterizing the mechanisms that shape stellar dynamics in galactic nuclei, and recovering information about the MBH, the compact object, and the EMRI orbit with unprecedented precision \citep{amaro-soane:2007:tr}. Especially coveted prizes will be accurate masses for $10^5\mbox{--}10^7 \, M_\odot$ MBHs in small, non-active galaxies, which will shed light on galaxy--MBH correlations at the low-mass end; MBH spins, which will illuminate the mechanism of MBH growth by mergers and accretion (see Sec.\ \ref{sec:massiveblackholes}); as well as stellar-BH masses, which will provide insight into stellar formation in the extreme conditions of dense galactic nuclei. 
The key to measurement precision is the fact that the compact object behaves as a test particle in the background MBH geometry over hundreds of thousands of relativistic orbits in a year; the resulting GW radiation encodes the details of both the geometry and the orbit \citep{ryan:1995:gwi,ryan:1997,barack:2007,finn:2000:gwc}. To assess the eLISA science performance on EMRIs, we model their very complicated signals \cite{2006CQGra..23S.769D} using the Barack--Cutler (BC) phenomenological waveforms \citep{barack:2004:lcs}, which are not sufficiently accurate for detection, but capture the character and complexity of EMRI waveforms. We complement this analysis with more realistic \emph{Teukolsky-based} (TB) waveforms obtained by solving the perturbative equations for the BH geometry in the presence of the inspiraling body \citep{teukolsky:1973:rbh}; these have been tabulated for circular--equatorial orbits and for some values of MBH spin \citep{finn:2000:gwc,gair:2009CQGra..26i4034G}. To evaluate expected EMRI detection horizons and detection rates, we perform a Monte Carlo over 500,000 realizations of the source parameters, taking MBH rest mass in $[10^{4},5\times 10^6]\, M_\odot$ with a uniform $\log M_\bullet$ distribution; MBH spin uniformly in $[0,0.95]$; compact-body mass of $10 \, M_\odot$, representative of a stellar-mass BH; orbit eccentricity before the final plunge uniformly in $[0.05,0.4]$; and all orbital angles and phases with the appropriate uniform distributions on the circle or sphere, with an equal number of prograde and retrograde orbits. We take the poorly known EMRI formation rate to scale with MBH mass as $400 \, \mathrm{Gyr}^{-1} (M_\bullet/3 \times 10^{6} \, M_\odot)^{-0.19}$ \cite{hopman:2009:emri,preto:2010:ApJ...708L..42P,amaro-seoane:2011CQGra..28i4017A}, and we distribute systems uniformly in comoving volume. 
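The Monte Carlo sampling just described can be sketched directly from the quoted distributions (ranges and the rate normalization as in the text; the waveform evaluation, detection step, and comoving-volume weighting are omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500_000   # number of source realizations

# Parameter distributions quoted in the text
logM = rng.uniform(4.0, np.log10(5e6), N)  # log-uniform MBH rest mass
M_bh = 10.0 ** logM                        # [Msun], in [1e4, 5e6]
spin = rng.uniform(0.0, 0.95, N)           # dimensionless MBH spin
m_co = 10.0                                # compact-body mass [Msun]
ecc = rng.uniform(0.05, 0.4, N)            # eccentricity before final plunge
prograde = rng.random(N) < 0.5             # equal prograde/retrograde split

# EMRI formation-rate scaling: 400/Gyr at 3e6 Msun, mass slope -0.19
rate = 400.0 * (M_bh / 3e6) ** (-0.19)     # [Gyr^-1]
```

Orbital angles and phases (omitted here) would be drawn with the appropriate uniform distributions on the circle or sphere.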
Our assumptions are consistent with the MBH mass function derived from the observed galaxy luminosity function using the $M_\bullet\mbox{--}\sigma$ relation, and excluding Sc-Sd galaxies \citep{Aller:2002,gair:2004:ere,gair:2009CQGra..26i4034G}. We further assume an observation time of two years, consider EMRIs in the last five years of their orbit \citep{gair:2009CQGra..26i4034G}, and require a detection $\mathrm{SNR} = 20$ \cite{cornish:2011CQGra..28i4016C,gair:2008:cmh,babak:2010:mldc}. The left panel of Fig.\ \ref{fig:emrihorizon} shows the resulting \emph{maximum} horizon redshift for BC waveforms, as a function of MBH rest mass---that is, it shows the $z$ at which an optimally oriented source with the most favorable MBH and orbit parameters (as found in the Monte Carlo) achieves the detection SNR. Thus, EMRIs in the eLISA range will be detectable as far as $z = 0.7$. By contrast, EM observations of $10^4\mbox{--}10^6 \, M_\odot$ MBHs are possible in the local Universe out to $z \simeq 0.1$. The right panel plots the distribution of SNRs as a function of $z$, which shows that EMRIs in the local Universe will yield SNRs of many tens. For comparison, the left panel of Fig.\ \ref{fig:emrihorizon} also shows the horizons computed with sky- and orientation-averaged SNRs, using TB waveforms from circular--equatorial orbits with MBH spins $a_\bullet/M_\bullet = 0$ and $0.9$. The difference between the BC and TB curves is consistent with the effects of sky-averaging: SNRs for optimally oriented systems are expected to be 2.5 times higher than averaged SNRs. The $a_\bullet/M_\bullet = 0.9$ systems are favored because high MBH spin allows for orbits closer to the event horizon and higher GW frequencies, which shifts the peak eLISA sensitivity to higher masses. 
The resulting number of expected eLISA detections over two years is $\sim 50$, as evaluated with the BC-waveform Monte Carlo, and $\sim 30/35/55$ (for $a_\bullet/M_\bullet = 0/0.5/0.9$), as evaluated with TB-waveform sky-averaged horizons. The higher BC event rate is explained by the inclusion of eccentric systems, which radiate more energy in the eLISA band; the BC estimate should also be more reliable because of its broad sampling of source parameters. Remember, however, that EMRI rates are highly uncertain \cite{amaro-soane:2007:tr,hopman:2009:emri,preto:2010:ApJ...708L..42P,merritt:2011PhRvD..84d4024M}. Even with as few as 10 events, the slope of the MBH mass function in the $10^4\mbox{--}10^6 \, M_\odot$ range can be determined to 0.3, the current level of observational uncertainty \cite{gair:2010:emri}. \begin{figure} \flushright \includegraphics[width=0.8\textwidth]{emrisnr} \caption{\textbf{Left}: maximum detection horizon redshift vs.\ MBH rest mass, BC EMRI waveforms (red curve); averaged horizon redshift vs.\ MBH rest mass, TB EMRI waveforms with $a_\bullet/M_\bullet = 0$ and $0.9$. Assumptions are given in the main text; the maximum is computed as the highest $z$ with $\mathrm{SNR} > 20$ in a given mass bin. \textbf{Right}: maximum EMRI SNR vs.\ redshift, BC waveforms.\label{fig:emrihorizon}} \end{figure} \begin{figure} \flushright \includegraphics[width=0.8\textwidth]{emrierror} \caption{Posterior probability plot for source parameters (MBH rest mass $M_\bullet$, MBH spin $a_\bullet$, compact-body mass $m$, and orbit eccentricity at plunge $e$), in the $\mathrm{SNR} = 25$ detection of a $10 + 10^6 \, M_\odot$ EMRI at $z = 0.55$, with $a_\bullet/M_\bullet = 0.7$ and $e_\mathrm{plunge} = 0.25$. \label{fig:emrimcmc}} \end{figure} Because EMRI waveforms are such complex and sensitive functions of the source parameters, these will be estimated accurately whenever an EMRI is detected \citep{cornish:2011CQGra..28i4016C,gair:2008:cmh,babak:2010:mldc}. 
In particular, we expect to measure the MBH mass and spin, as well as the compact-body mass and eccentricity to better than a part in $10^3$ \citep{barack:2004:lcs}. As an example, Fig.\ \ref{fig:emrimcmc} shows the posterior distributions of the best-determined parameters for a $z = 0.55$ source detected by eLISA with $\mathrm{SNR} = 25$, as computed with the Markov Chain Monte Carlo algorithm of \citep{cornish:2006:mes}; for this source, the luminosity distance $D_L$ would be determined to 1\%, and the sky location to 0.2 $\mathrm{deg}^2$. Even with relatively low SNR, parameter-estimation accuracy is excellent. In general, we find that the eLISA and LISA parameter-estimation performance is very similar for EMRIs detected with the same SNR (but of course different distances), so the reader can refer to treatments for LISA in the literature \citep{barack:2004:lcs,huerta:2009PhRvD..79h4021H,porter:2009GWN.....1....4P,babak:2010:mldc}. \section{Precision measurements of strong gravity} \label{sec:gravity} Einstein's theory of gravity, general relativity (GR), has been tested rigorously in the Solar system and in binary pulsars \citep{will:2006:gre,lorimer:2008:bmp}; these tests, however, probe only the weak-field regime where the characteristic perturbative parameter $\epsilon = v^2/c^2 \sim G M / (R c^2)$ is very small, $\sim 10^{-6}\mbox{--}10^{-8}$ (here $v$ is the velocity of gravitating bodies, $M$ their mass, and $R$ their separation). By contrast, eLISA's GW observations of coalescing MBHs (Sec.\ \ref{sec:massiveblackholes}) and of EMRIs (Sec.\ \ref{sec:emris}) will allow us to confront GR with precision measurements of its dynamical, strong-field regime, and to verify that astrophysical BHs are really the Kerr mathematical solutions predicted by GR. 
Before considering the GR tests possible with each of these sources, we note that, by the second half of this decade, second-generation ground-based detectors are expected to routinely observe the coalescences of stellar-mass BHs and (possibly) of asymmetric systems such as an NS inspiraling into a $100 \, M_\odot$ BH. However, they will do so with 10--100 times lower SNRs than eLISA (for the brightest sources), and for up to 1,000 times fewer GW cycles; thus, eLISA will test our understanding of gravity in the most extreme conditions with a precision that is two orders of magnitude better than that achievable from the ground. (Although most of the analyses cited in the rest of this section were developed for LISA, their broad conclusions are applicable to sources detected with comparable SNRs by eLISA.) All three phases of MBH coalescence offer opportunities for precision measurements. The year-long \textbf{inspiral signals} can be examined for evidence of a massive graviton, which would imprint a frequency-dependent phase shift on the waveform, improving on current Solar-system bounds \citep{2011PhRvD..84j1501B,huwyler:2011arXiv1108.1826H}; they can yield stringent constraints on other theories with deviations from GR parametrized by a set of global parameters, such as massless and massive Brans-Dicke theories \citep{berti:2005:esb,2012PhRvD..85f4041A}, theories with an evolving gravitational constant \citep{yunes:2010PhRvD..81f4018Y}, Lorentz-violating modifications of GR \citep{2012PhRvD..85b4041M}; finally, various authors have considered testing inspiral waves for hypothetical, generic modifications of their amplitude and phasing \citep{arun:2006:pns,yunes:2009PhRvD..80l2003Y,cornish:2011PhRvD..84f2003C,2012PhRvD..85h2003L}. The \textbf{merger} of comparable-mass MBH binaries produces an enormously powerful GW burst, which eLISA will measure with SNR as high as a few hundred, even at cosmological distances. 
The MBH masses and spins can be determined with high accuracy from the inspiral waveform; given these physical parameters, numerical relativity can predict the shape of the merger waveform, as well as the mass and spin of the final remnant MBH \citep{rezzolla:2008ApJ...674L..29R}, and these can be compared directly with observations, providing an ideal test of pure GR in a highly dynamical, strong-field regime. The frequencies and damping times of the quasinormal modes (QNMs) in the final \textbf{ringdown} \cite{berti:2009CQGra..26p3001B} are completely determined by the mass and the spin of the remnant, and therefore can be used to measure them \cite{berti:2006:gws,berti:2007PhRvD..76j4044B}, while their relative amplitudes hold information about the pre-merger binary \citep{2012PhRvD..85b4018K}, again providing a consistency check between the GR predictions for the different phases of coalescence. Furthermore, the measurement of at least two QNMs \citep{berti:2007PhRvD..76j4044B} will test the Kerr-ness of the MBH \citep{dreyer:2004:bhs} against exotic proposals such as boson stars and gravastars \citep{yoshida:1994PhRvD..50.6235Y,berti:2006:boson,chirenti:2007CQGra..24.4191C,pani:2009PhRvD..80l4047P}. Modifications of GR that lead to different emission would also be apparent \citep{barausse:2008xv,pani:2009PhRvD..79h4031P}. \textbf{EMRIs} are expected to be very clean astrophysical systems, except perhaps in a few systems with strong interactions with the accretion disk \citep{barausse:2007PhRvD..75f4026B,barausse:2008PhRvD..77j4027B,kocsis:2011PhRvD..84b4032K}, or with perturbations due to a second nearby MBH or star \citep{yunes:2011PhRvD..83d4030Y,2012ApJ...744L..20A}. Over day-long timescales, EMRI orbits are essentially geodesics of the background geometry; on longer timescales, the loss of energy and angular momentum to GWs causes a slow change of the geodesic parameters. 
In the last few years of their evolution, as observed by eLISA, EMRI orbits are highly relativistic ($R < 10\, R_\bullet$) and display extreme forms of periastron and orbital plane precession. Indeed, EMRI GWs encode all the mass and current multipoles of the MBH \citep{ryan:1995:gwi,drasco:2004:rbh}, which for a Kerr BH are uniquely determined by the mass and spin alone (another manifestation of the ``no-hair'' theorem). For EMRIs with $\mathrm{SNR} = 30$, eLISA will measure mass and spin to a part in $10^3\mbox{--}10^4$, and the mass quadrupole moment $M_2$ to a part in $10^2\mbox{--}10^4$, thus \textbf{testing the no-hair theorem} directly \cite{barack:2007}. See \citep{sopuerta:2010:emri,babak:2011CQGra..28k4001B} for reviews of different ways to test the nature of astrophysical BHs. Other tests of the Kerr-ness of the central massive object have been proposed: for a boson star, the EMRI signal would not shut off after the last stable orbit \citep{kesden:2005:gws}; for a gravastar, QNMs could be excited resonantly \citep{pani:2009PhRvD..80l4047P}; for certain non-Kerr axisymmetric geometries, orbits could become ergodic or experience resonances \citep{gair:2008:pbh,lukes-gerakopoulos:2010PhRvD..81l4005L}; for ``bumpy'' BHs, orbits would again carry distinctive signatures \citep{ryan:1995:gwi,collins:2004:tfm,glampedakis:2006:msl,vigeland:2011PhRvD..83j4027V}. 
Modifications in EMRI GWs would also arise if the \textbf{true theory of gravity is in fact different from GR}, as in dynamical Chern-Simons theory \citep{sopuerta:2009PhRvD..80f4006S,pani:2011PhRvD..83j4048P}, scalar--tensor theories (with observable effects in NS--BH systems where the NS carries scalar charge \citep{berti:2005:esb,yagi:2010PhRvD..81f4008Y}), Randall--Sundrum-inspired braneworld models \citep{mcwilliams:2010PhRvL.104n1601M,yagi:2011PhRvD..83h4036Y}, theories with axions that give rise to ``floating orbits'' \citep{Cardoso:2011xi,2012PhRvD..85j2003Y}, as well as generic, phenomenologically parametrized theories \citep{gair:2011PhRvD..84f4016G}. \section{Cosmology and new physics from the early Universe} \label{sec:cosmo} GWs produced after the Big Bang form a fossil radiation: expansion prevents them from reaching thermal equilibrium with the other components because of the weakness of the gravitational interaction. Thus, relic GWs carry information about the first instants of the Universe. If their wavelength is set by the apparent horizon size $c/H_* = c (a/\dot{a})_*$ at the time of production, when the temperature of the Universe is $T_*$, the redshifted frequency is \begin{equation} f \approx 10^{-4}\,\mathrm{Hz} \, \sqrt{H_* \times \frac{1 \, \mathrm{mm}}{c}} \approx 10^{-4}\,\mathrm{Hz} \left(\frac{k_B T_*}{1 \, \mathrm{TeV}}\right), \end{equation} so the eLISA frequency band corresponds to the horizon at and beyond the \emph{Terascale frontier} of fundamental physics. This allows eLISA to probe bulk motions at times about $3 \times 10^{-18}\mbox{--}3 \times 10^{-10}$ s after the Big Bang, a period not directly accessible with any other technique. 
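The relation above maps a production temperature directly onto an observed frequency; a minimal numerical sketch (the $10^{-4}$ Hz prefactor is the order-of-magnitude coefficient from the equation, not a precise constant):

```python
def relic_gw_frequency(T_star_TeV):
    """Redshifted frequency today of horizon-scale GWs produced when the
    Universe had temperature T_*, using f ~ 1e-4 Hz * (k_B T_* / 1 TeV)."""
    return 1e-4 * T_star_TeV   # [Hz]

# The Terascale frontier lands squarely in the eLISA band:
f_1TeV = relic_gw_frequency(1.0)      # ~1e-4 Hz at 1 TeV
f_100TeV = relic_gw_frequency(100.0)  # ~1e-2 Hz at 100 TeV
```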
Taking a typical broad spectrum into account, eLISA has the sensitivity to detect cosmological backgrounds caused by \emph{new physics} at energies $\sim 0.1\mbox{--}1000\,\mathrm{TeV}$, if more than a (modest) fraction $\sim 10^{-5}$ of the energy density is converted to GWs at the time of production. Various sources of \textbf{cosmological GW backgrounds} are presented in detail in \cite{2012JCAP...06..027B}. They include first-order phase transitions, resulting in bubble nucleation and growth, and subsequent bubble collisions and turbulence \citep{witten:1984:csp,hogan:1986:grc,Kamionkowski:1993fg,Huber:2008hg,Caprini:2009yp}; the dynamics of stabilization for the extra dimensions required by superstring theory \citep{hogan:2000:gwm,randall:2006:gww}, which may also appear as non-Newtonian gravity in laboratory experiments at the sub-mm scale; networks of cosmic (super-)strings \citep{copeland:2004JHEP...06..013C,1994csot.book.....V}, which continuously produce loops that decay into GWs (see Fig.\ \ref{fig:strings}); the transition between inflation and the hot Big Bang in the process of preheating \citep{khlebnikov:1997:rgw,easther:2006:sgw,GarciaBellido:2007dg,Dufaux:2007pt,Dufaux:2008dn}; and the amplification of quantum vacuum fluctuations in some unconventional versions of inflation \cite{brustein:1995:rgw,buonanno:2003:tasi,buonanno:1997:srg}. Although the two-arm eLISA does not provide a Sagnac observable \citep{hogan:2001:esg} to calibrate instrument noise against possible GW backgrounds, the clear spectral dependence predicted for some of these phenomena provides an observational handle, as long as the background lies above the eLISA sensitivity curve.% \begin{figure} \flushright \includegraphics[width=0.7\textwidth]{strings.pdf} \caption{(From \cite{2012JCAP...06..027B}.) 
Spectra of stochastic backgrounds from cosmic strings for large loops (with horizon size $\alpha = 0.1$, solid lines), for two values of the string tension $G\mu/c^4$ spanning a range of scenarios motivated by braneworld inflation; and for small loops (with size $\alpha = 50 \epsilon G \mu$, dashed line). The cosmic-string spectrum is readily distinguishable from that of first-order phase transitions or any other predicted source: it has nearly constant energy per logarithmic frequency interval over many decades at high frequencies, and falls off after a peak at low frequencies, since large string loops are rare and radiate slowly. Cosmic strings may also produce distinctive bursts, generated by sharply bent bits of string moving at nearly the speed of light \citep{damour:2005:grc,siemens:2006:gwb,binetruy:2010PhRvD..82l6007B,bohe:2011PhRvD..84f5016B}. \label{fig:strings}} \end{figure} As discussed in Sec.\ \ref{sec:massiveblackholes}, observations of GWs from MBH binaries probe the assembly of cosmic structures. In addition, binaries can serve as \emph{standard sirens} to \textbf{measure cosmological parameters} \citep{schutz:1986:hgw,holz:2005:ugw} because, as discussed around Eq.\ \eqref{eq:fders}, measuring the amplitude and frequency evolution of a binary signal yields the absolute luminosity distance to the source. However, binary GWs cannot provide the source's redshift unless the other source parameters are known independently (because the rest mass of the binary is the only length/time scale in the waveform, the frequency evolution of a redshifted signal is indistinguishable from the signal from a heavier binary). The optical redshift of the host galaxy can be obtained if an EM counterpart to MBH coalescence is observed (see, e.g., \cite{armitage:2002ApJ...567L...9A,milosavljevic:2005:amb,phinney:2009astro2010S.235P}, and \citep{schnittman:2011CQGra..28i4021S} for a recent review). 
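At leading order in redshift, the standard-siren idea reduces to $H_0 \simeq cz/D_L$: the GW signal supplies the luminosity distance and the EM counterpart supplies the redshift. A hypothetical low-$z$ sketch (a real analysis fits the full distance--redshift relation and propagates both error sources):

```python
C_KM_S = 299792.458  # speed of light [km/s]

def hubble_constant_low_z(D_L_mpc, z):
    """Low-redshift standard-siren estimate H0 ~ c z / D_L,
    valid only to leading order in z.  Illustrative sketch."""
    return C_KM_S * z / D_L_mpc   # [km/s/Mpc]

# Hypothetical counterpart at z = 0.03 with GW-measured D_L = 130 Mpc:
H0 = hubble_constant_low_z(130.0, 0.03)   # ~69 km/s/Mpc
```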
While there are many uncertainties in the nature and strength of such counterparts, some may be observable in the local Universe. At $z < 1$, we expect that eLISA MBH-inspiral measurements could provide sky locations to better than 400 $\mathrm{deg}^2$ for 50\% of sources, and to 10 $\mathrm{deg}^2$ for 11\%. (The inclusion of merger and ringdown in the analysis should further improve these numbers.) Such large areas will be covered frequently and deeply by optical and radio surveys such as LSST \citep{lsst:2009arXiv0912.0201L} and the VAST project \citep{johnston:2007PASA...24..174J}, identifying sufficiently distinctive transients. The accurate knowledge of the counterpart's redshift and position would then reduce the uncertainties of the GW-determined parameters, with $D_L$ known to 1\% for 60\% of sources, and to 5\% for 87\%. Such precise luminosity distance--redshift measurements will be complementary to other cosmographical campaigns \citep{riess:1998ApJ...504..935R,perlmutter:1999AIPC..478..129P}, and will improve the estimation of cosmological parameters. Even without counterparts, one may proceed by considering all possible hosts in a distance--position error box, and enforcing consistency between multiple GW events \citep{petiteau:2011ApJ...732...82P}; this should be possible for MBH binaries (and EMRIs \citep{macleod:2008:phc}) in the local Universe, yielding the Hubble constant to a few percent. \section{Conclusions} \label{sec:conclusions} While LISA was always meant to be the definitive mission in its frequency band, eLISA is being designed to provide the maximum science within a cost cap. Nevertheless, as described above, eLISA will achieve a great part of the LISA science goals. It will represent the culmination of twenty years of exciting, painstaking work, pioneering the new science of observational low-frequency GW astronomy. It will truly begin to unveil the hidden, distant Universe. May it fly soon, and safe. 
\ack This research was supported by the Deutsches Zentrum f\"ur Luft- und Raumfahrt and by the Transregio 7 ``Gravitational Wave Astronomy'' financed by the Deutsche Forschungsgemeinschaft DFG (German Research Foundation). EB was supported by NSF Grant PHY-0900735 and by NSF CAREER Grant PHY-1055103. AK was supported by the Swiss National Science Foundation. TBL was supported by NASA Grant 08-ATFP08-0126. RNL was supported by an appointment to the NASA Postdoctoral Program at the Goddard Space Flight Center, administered by Oak Ridge Associated Universities through a contract with NASA. MV performed this work at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. Copyright 2012. \section*{References}
\section{Introduction} There are several motivations to pay attention to singular holomorphic foliations defined on complex surfaces and admitting an invariant closed positive current. First, the study of these foliations has a natural ergodic-theoretic interest. Additional motivation comes from observing that this class of foliations captures and unifies special types of foliations such as those having a compact leaf, foliations having a Zariski-dense leaf isomorphic to a quotient of $\C$, and Hilbert modular foliations. It follows that the understanding of these foliations has consequences in a variety of domains. The purpose of this paper is to set up a dynamical method to study this type of foliation. The paper is essentially elementary in that only the basic properties of this method are considered in detail. Nonetheless, it will soon be apparent that the problem splits naturally into two cases corresponding to the two sources of known examples of foliations admitting invariant closed currents, namely foliations having a compact leaf and foliations carrying a transverse riemannian structure. Once this splitting has been established in our context, we shall pursue the case leading to the existence of a compact leaf. The other case, for which more subtle dynamical arguments will be required, will be treated in future work. Let us give a brief description of the point of view adopted here. The starting point is a construction appearing in \cite{blm} which is revisited in Section~2.3. Essentially the construction consists of observing that, inside the leaves of $\fol$, there are real one-dimensional trajectories along which the holonomy of $\fol$ ``tends to be contractive'' as long as they stay away from the singularities of $\fol$. These trajectories can be viewed as defining a singular real one-dimensional foliation denoted by $\calh$ (cf. Section~2.3 for details). 
Similar ideas already appeared in the context of foliated characteristic classes, see \cite{ghys-bourbaki} and its references. The natural idea is then to ``follow'' these trajectories and to understand how an invariant closed positive current can be compatible with the resulting contraction of the holonomy of $\fol$. However, when this idea is further elaborated, the above mentioned dichotomy manifests itself in the existence or absence of trajectories of $\calh$ contained in the support of the current in question and possessing ``infinite length'' (cf. Section~5). If trajectories of infinite length do exist, then they must yield some ``definite amount of contraction'' in the holonomy pseudogroup of $\fol$. This clearly poses a serious obstruction to the existence of the mentioned currents. The tension between the existence of the invariant current and the existence of ``contractions'' in the holonomy pseudogroup is likely to imply that the invariant current has a trivial nature: it is concentrated on a compact leaf of $\fol$. On the other hand, it may well happen that all the trajectories of $\calh$, or at least those contained in the support of the current, are of ``finite length''. In vague terms, this means that every trajectory has ``extremities'' or ``ends'' so that it cannot be followed ``for an arbitrarily large period of time''. This prevents us from ensuring the existence of ``contractions'' in the holonomy pseudogroup of $\fol$. Simple examples where this phenomenon is observed arise in the context of transversely riemannian foliations, whose discussion will be postponed to a subsequent work. The content of this article is thus somewhat bipartite. It begins with the general definition of the foliation $\calh$ consisting of the mentioned real trajectories and it continues with the analysis of their behavior near the singularities of $\fol$ that may exist in the support of an (invariant closed positive) current.
Then we turn to the global geometry of these trajectories. After some general reductions and definitions, we shall finally be confronted with the basic dichotomy mentioned above: either there are trajectories inside the support of the current having ``infinite length'' or all these trajectories have finite length. The remainder of the paper will then be devoted to the investigation of the first possibility, i.e. of the case in which there are trajectories of infinite length contained in the support of our current. The corresponding results are ultimately summarized by Theorem~\ref{fim1} (see also Theorem~A below). The statement of this theorem is as follows: if the $\calh$-trajectories contained in the support of the current do not all have a uniformly bounded length, then this support contains a compact leaf of $\fol$. Although the terminology is at this point imprecise, we can make explicit the contents of this theorem as follows: \begin{itemize} \item at least as far as compact leaves are concerned, we only need to study those foliations for which there is a compact invariant set where all the $\calh$-trajectories have finite length. \item if, for some reason, we can guarantee the existence of $\calh$-trajectories of infinite length in the support of an invariant closed current $T$, then this support contains a compact leaf. \end{itemize} As to the second item, some non-trivial examples in which it is possible to prove {\it a priori}\, that no trajectory of $\calh$ on $M$ is of finite length will be supplied at the very end of this article. These examples include some foliations on elliptic $K3$ surfaces having singularities that are either hyperbolic or belong to the Siegel domain. It will be seen that for these examples, the cohomology class of an invariant closed current $T$ giving no mass to individual leaves must have trivial self-intersection (and thus it has the same cohomology class as an elliptic fiber).
Nonetheless it is not {\it a priori}\, clear that the foliation in question needs to have any compact leaf at all. Another interesting class of examples includes foliations in the projective plane having singularities that may contribute non-trivially to the Lelong numbers of $T$, so as to allow the foliated current $T$ to have strictly positive self-intersection. For example, consider a foliation $\fol$ on $\C P(2)$ having a radial singularity $p$ and such that its remaining singularities belong to the Siegel domain or are hyperbolic. By a radial singularity, it is meant that $\fol$ is given in suitable coordinates about $p$ by the $1$-form $xdy -ydx$. If the degree of this foliation is at least~$3$, then we can arrange for the trajectories of $\calh$ to have infinite length. Similarly, if the degree of the foliation is at least~$4$, then the same result applies to foliations having up to~$2$ radial singularities (or more generally two singularities whose eigenvalues are $1, \lambda$ with $\lambda \in \R_+^{\ast}$). Thus the results of this paper can be applied to these foliations to yield, in particular, the existence of algebraic curves invariant by them. More details on the construction of examples can be found at the end of Section~7. The study of the dynamics of these ``contractive trajectories'' is likely to be of interest for other problems concerning holomorphic foliations. For example, it may be useful for the study of the dynamics of ``generic'' foliations as in \cite{gabriel}. It should also be mentioned that B. Deroin and V. Kleptsyn have employed the foliated Brownian motion to study the transverse dynamics of a conformal Riemann surface lamination, \cite{bertrand}. Roughly speaking, they show that the evolution of ``most points'' under the Brownian motion tends to give rise to a ``contractive holonomy''. In this sense their work seems to be related to ours, i.e. the Brownian motion evolution seems to be related to the foliation $\calh$.
It would be interesting to clarify possible relations between these approaches. Along these lines, the first sections of this paper may also serve as an introduction to this circle of ideas. In addition, we have included a preliminary section providing details on some well-known facts that are usually not detailed in the literature. In particular the construction of \cite{blm} is reviewed. We also explain in detail the role played by the condition of having an ambient surface that is algebraic as well as the relation between Dirac masses for transversely invariant measures and compact leaves for singular foliations. Hopefully this discussion will be useful for readers who are not experts in foliation theory. Let us now state: \vspace{0.1cm} \noindent {\bf Theorem A} (Main Theorem): {\sl Let $\fol$ be a singular holomorphic foliation on an algebraic surface $M$. Suppose that $\fol$ carries an invariant closed positive current $T$ and let $\calk$ denote a (singular) minimal set for $\fol$ contained in the support of $T$. Suppose also that in $\calk$ there is one trajectory of $\calh$ having infinite length. Then $\calk$ consists of an algebraic curve left invariant by $\fol$.} \vspace{0.1cm} Let us point out that the above theorem {\it does not}\, state that the current $T$ coincides with the integration current over the mentioned algebraic curve (at least on a neighborhood of $\calk$). A partial answer to this question is however supplied by Theorem~B below. The reader can check Section~7 for the definition of Liouvillean integrability. \vspace{0.1cm} \noindent {\bf Theorem B} (Complement to Main Theorem): {\sl With the notations of Theorem~A, suppose that $T$ is not locally given by integration over $\calk$.
Then $\fol$ admits a Liouvillean first integral on a neighborhood of $\calk$.} \vspace{0.1cm} A by-product of the method developed here is the existence of a ``contractive'' element in the holonomy pseudogroup of $\fol$ provided that $\calh$ has a trajectory of infinite length on $M$. In principle, this contractive element may be either a local hyperbolic diffeomorphism or a ``ramified'' super-attractive contraction (cf. Section~$6$ for further details). Also a type of ``exponential super-contraction'' may appear in connection with saddle-nodes. This is proven here under the additional assumption that the singularities of $\fol$ yield no saddle-node singularities under the reduction procedure of Seidenberg (cf. Section~2; singularities verifying this condition are sometimes called ``generalized curves''). The presence of saddle-nodes in the picture would actually affect neither our methods nor the validity of the conclusion. Nonetheless the elimination of this ``superfluous'' assumption would require us to discuss the behavior of the mentioned trajectories around a saddle-node and, in turn, this would lead us on a long detour from the goals of the present paper. Yet, the study of these trajectories near saddle-node singularities is an essential part of the analysis involved in the continuation of this work, so it seems natural to leave the general result about ``existence of contractions'' to be completed there. We note however that, under the above assumption concerning saddle-node singularities, the methods used here ensure the existence of a contraction given either by a local hyperbolic diffeomorphism or by a ``ramified'' super-attractive map, cf. Section~6. To close this Introduction, let us briefly outline the contents of this article. Section~2 contains background material on the subject. In Section~2.1 we recall Seidenberg's reduction procedure for the singularities of holomorphic foliations.
Section~2.2 contains precise definitions and a few basic facts regarding closed currents invariant by a foliation, including their relation with the notion of transverse invariant measure. Finally, in Section~2.3, the main ideas of \cite{blm} are presented. In particular the foliation $\calh$ (associated to a given holomorphic foliation $\fol$) is defined. Section~3 is devoted to a detailed study of the behavior of $\calh$ on a neighborhood of a singularity of $\fol$ belonging to the Siegel domain. This includes a discussion of the ``Dulac transform'' defined by means of $\calh$. Building on the material presented in Section~3, we develop in Section~4 a detailed analysis of the singularities of $\fol$ lying in the support of $T$. The structure of these singularities will play an important role in the proof of Theorem~\ref{fim1}, which is a slightly more general version of Theorem~A. Section~5 begins with a few global definitions, in particular the precise definition of ``trajectory of $\calh$'' and of its corresponding length. This section is technically very simple and its main result is summarized by Proposition~\ref{4prop1}. Finally in Sections~6 and~7, we shall use the preceding material to deduce the proofs of the above-stated theorems. \vspace{0.2cm} \noindent {\bf Acknowledgements}: It is my pleasant duty to thank Emmanuel Paul for explaining his work \cite{paul} to me, which allowed me to prove Theorem~B. I am also grateful to B. Deroin and to A. Glutsyuk for the interest they have shown in this work. \section{Preliminaries} \subsection{Generalities about foliations on algebraic surfaces} Let $M$ be a smooth compact complex surface. A {\it singular holomorphic foliation} $\fol$ on $M$ consists of the following data: \begin{enumerate} \item An open covering $\{ U_i \}$ of $M$. \item A holomorphic vector field $Y_i$ with isolated singularities defined on $U_i$ for each $i$.
\item For each pair $i, j$ such that $U_i \cap U_j \neq \emptyset$ a function $g_{ij} \in {\mathcal O}^{\ast} (U_i \cap U_j)$ such that $Y_i = g_{ij} Y_j$. \end{enumerate} In particular it follows that the singularities of $\fol$ correspond to those of the $Y_i$'s and are therefore isolated. The foliation $\fol$ can alternatively be defined through holomorphic forms $\omega_i$ (rather than vector fields $Y_i$) subjected to the relations $\omega_i = f_{ij} \omega_j$ with $f_{ij} \in {\mathcal O}^{\ast} (U_i \cap U_j)$. The transition functions $g_{ij}$ (resp. $f_{ij}$) satisfy the natural cocycle relations and hence give rise to a line bundle $T^{\ast} \fol$ (resp. $N \fol$) on $M$ which is called the cotangent bundle of $\fol$ (resp. normal bundle of $\fol$). Furthermore, by virtue of the relations $Y_i = g_{ij} Y_j$, the foliation $\fol$ can be interpreted as a global holomorphic section of $T^{\ast} \fol \otimes TM$ with discrete zero set, defined modulo multiplication by non-vanishing holomorphic functions. Thus we obtain: \begin{lema} \label{probablynotneeded} A singular holomorphic foliation on an algebraic surface $M$ is always given by a globally defined meromorphic form $\omega$ (resp. meromorphic vector field $Y$). Moreover, we can suppose without loss of generality that the meromorphic form $\omega$ is not closed. \end{lema} \noindent {\it Proof}. The fact that $M$ is algebraic guarantees that every line bundle over $M$ admits non-trivial meromorphic sections. In turn, this shows that $\fol$ is generated by a global {\it meromorphic} vector field (or differential $1$-form) in the obvious sense. Finally, if $\fol$ is given by a closed form $\omega$, then we just need to replace $\omega$ by $f\omega$ for a generic meromorphic function $f$. In fact, we have $$ d(f\omega ) = df \wedge \omega + f d\omega = df \wedge \omega \neq 0 $$ for a ``generic'' $f$.
This proves the lemma.\qed \begin{obs} {\rm {\bf General Setting}: Throughout this work $M$ is supposed to be a complex surface equipped with a holomorphic foliation $\fol$ which is given by a non-closed meromorphic form $\omega$. The preceding lemma shows that this is always the case for $M$ projective algebraic. In the latter case, it would also be possible to choose a $1$-form $\omega$ satisfying further ``generic conditions''. Suitable generic properties would simplify some parts of our discussion but we have decided not to use them. The main reasons for our choice, besides having a slightly more general result, lie in the fact that ``generic properties of $\omega$'' do not allow us, for example, to avoid a non-trivial intersection of $(\omega)_0$ and $(\omega)_{\infty}$ at a regular point of $\fol$. To eliminate these intersection points a natural idea is to blow them up, which, in turn, would bring us back to a situation where the corresponding transform of $\omega$ is no longer ``generic''. Thus this transform would need to be replaced by a generic $1$-form on the blown-up surface and the final construction would (even if successful) rely on constructions and arguments of algebraic geometry that appear to me less elementary than the approach chosen here. Indeed I also think that the treatment of all the ``degenerate situations'' that may arise for an arbitrary $\omega$ ends up making the argument more ``concrete''.} \end{obs} The fact that $\fol$ is generated by global meromorphic differential forms will be exploited in Paragraph~2.3. For the time being, we are going to focus on local aspects such as the structure of the singularities of $\fol$. We can then suppose that $\fol$ is given on a neighborhood of $(0,0) \in \C^2$ by a holomorphic $1$-form $\eta = P dy + Q dx$ having an isolated singularity at the origin. Sometimes it is also useful to think of $\fol$ as being given by the vector field $Y = P \fracx - Q \fracy$.
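The compatibility of these two descriptions amounts to the one-line check below (a standard computation, spelled out here for convenience), which shows that $Y$ spans the kernel of $\eta$ away from the singular point:

```latex
% Contracting \eta = P\,dy + Q\,dx with Y = P\,\partial/\partial x - Q\,\partial/\partial y:
\eta (Y) \; = \; P \, dy \! \left( - Q \, \frac{\partial}{\partial y} \right)
\; + \; Q \, dx \! \left( P \, \frac{\partial}{\partial x} \right)
\; = \; - PQ \; + \; QP \; = \; 0 \, .
% Hence \eta and Y define the same foliation wherever P and Q do not vanish simultaneously.
```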
The {\it order} of $\fol$ at $(0,0) \in \C^2$ is by definition the order of the first non-zero jet of $\eta$ at $(0,0) \in \C^2$. This notion is well-defined since $\eta$ (or $Y$) has isolated singularities. Similarly we define the eigenvalues of $\fol$ at $(0,0)$ as the eigenvalues of the linear part of $Y$ at $(0,0)$. These eigenvalues are therefore defined up to a multiplicative constant so that only their quotient has an intrinsic meaning. Let $\lambda_1 ,\lambda_2$ be the eigenvalues of $\fol$ at $(0,0)$. We say that $(0,0)$ is a {\it hyperbolic} singularity if $\lambda_1 \lambda_2 \neq 0$ and $\lambda_1 /\lambda_2 \in \C \setminus \R$. If $\lambda_1 \lambda_2 \neq 0$ but $\lambda_1 /\lambda_2 \in \R_-$, then we say that $(0,0)$ is in the {\it Siegel domain}. The singularity is said to be a {\it saddle-node} if $\lambda_1 \neq 0$ and $\lambda_2 =0$. Singularities whose eigenvalues satisfy $\lambda_1 \lambda_2 \neq 0$ and $\lambda_1 /\lambda_2 \in \R_+$ require some specific attention. Let us begin by saying that a singularity (both of whose eigenvalues may be zero) is {\it dicritical} if it admits infinitely many {\it separatrices}, i.e. analytic curves passing through the singularity and invariant under the foliation. Now consider a foliation whose eigenvalues $\lambda_1 ,\lambda_2$ at the origin verify $\lambda_1 \lambda_2 \neq 0$ and $\lambda_1 =n\lambda_2$ for $n \in \N$ (or $\lambda_2 =n \lambda_1$). This foliation is then conjugate to the foliation given either by the $1$-form $$ nx \, dy \; - \; y \, dx $$ or by the $1$-form $$ (nx + y^n) \, dy \; - \; y \, dx \, . $$ In the first case $(0,0)$ is dicritical. In the second case it is said to be a Poincar\'e-Dulac singularity (cf. \cite{arnold}). Next, if $\lambda_1 \lambda_2 \neq 0$, $\lambda_1 /\lambda_2 \in \R_+$ but $\lambda_1 /\lambda_2$ is neither an integer nor the inverse of an integer, then Poincar\'e's theorem asserts that $\fol$ is linearizable (cf. \cite{arnold}).
In other words, $\fol$ is conjugate to the foliation given by $$ \lambda_1 x \, dy \; - \; \lambda_2 y \, dx \; . $$ The preceding implies that a singularity with eigenvalues satisfying $\lambda_1\lambda_2 \neq 0$ is dicritical if and only if $\lambda_1 /\lambda_2 \in \Q_+$ and $(0,0)$ is not a Poincar\'e-Dulac singularity. When $\lambda_1 /\lambda_2 \in \R_+ \setminus \Q$ the resulting singularity is going to be called an {\it irrational focus}. Next we need to recall Seidenberg's reduction of singularities theorem \cite{seiden}. Let $\pi : \wdc \rightarrow \C^2$ denote the blow-up of $\C^2$ at the origin. If $\fol$ is defined on a neighborhood $U$ of $(0,0) \in \C^2$, then $\pi^{\ast} \fol$ naturally defines a holomorphic foliation on $\pi^{-1} (U)$. The foliation $\pi^{\ast} \fol =\tilf_1$ is called the blow-up of $\fol$. Clearly this construction can be iterated: if $p$ is a singularity of $\tilf_1$, then $\tilf_1$ can be blown up at $p$ to provide a new foliation defined on an appropriate open surface. Seidenberg's theorem \cite{seiden} then asserts the existence of a finite sequence of blow-ups $$ \fol=\fol_0 \stackrel{\pi_1}{\longleftarrow} \tilf_1 \stackrel{\pi_2}{\longleftarrow} \cdots \stackrel{\pi_n}{\longleftarrow} \tilf_n $$ such that the following holds: \begin{itemize} \item The irreducible components of the (total) exceptional divisor $(\pi_1 \circ \cdots \circ \pi_n)^{-1} (0)$ are smooth rational curves $D_1 ,\ldots ,D_n$ of strictly negative self-intersection. \item The singularities of $\tilf_n$ are {\it reduced}, i.e. they are of one of the following types: hyperbolic, in the Siegel domain, saddle-node or an irrational focus. \end{itemize} \noindent It should be noted that the exceptional divisor $(\pi_1 \circ \cdots \circ \pi_n)^{-1} (0)$ need not be invariant by $\tilf_n$. In fact, it may contain irreducible components invariant under $\tilf_n$ along with irreducible components that are not invariant by $\tilf_n$.
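As a concrete illustration of a non-invariant exceptional component, one may carry out the classical blow-up computation for the radial singularity given by $\omega = x \, dy - y \, dx$:

```latex
% Blow-up chart (x,t) \mapsto (x,y) = (x, tx), so that dy = t\,dx + x\,dt:
\pi^{\ast} ( x \, dy \; - \; y \, dx )
\; = \; x \, ( t \, dx + x \, dt ) \; - \; tx \, dx
\; = \; x^2 \, dt \, .
% After dividing by x^2, the blown-up foliation is given by dt = 0.
% Its leaves \{ t = c \} are everywhere transverse to the exceptional
% divisor \{ x = 0 \}, which is therefore a dicritical component; each
% leaf projects to one of the infinitely many separatrices y = cx.
```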
If $D_i$ is an irreducible component that is not invariant by $\tilf_n$, then the projection of the regular leaves of $\tilf_n$ transverse to $D_i$ produces infinitely many separatrices for the initial foliation $\fol =\fol_0$. In other words, $\fol$ is dicritical. Conversely if $\fol$ is dicritical then, in the above situation, there must exist at least one irreducible component of $(\pi_1 \circ \cdots \circ \pi_n)^{-1} (0)$ which is not invariant by $\tilf_n$. The components of $(\pi_1 \circ \cdots \circ \pi_n)^{-1} (0)$ that are not invariant by $\tilf_n$ are also said to be {\it dicritical}. \subsection{Closed currents and transverse invariant measures} Let us now recall some standard definitions and results concerning closed invariant currents and transverse invariant measures for singular foliations. Given $M$ as before, we denote by $D^p (M)$ the Fr\'echet space of $C^{\infty}$-differential forms on $M$ of degree $p$. The space of currents of {\it dimension}\, $p$ is, by definition, the topological dual $D'_p (M)$ of $D^p (M)$. The space of currents possesses a natural differential ``$d$'' (as well as operators $\partial, \; \overline{\partial}$) obtained by duality from the usual operators acting on differential forms. In particular, we can talk about closed/exact currents. Suppose now that $M$ is endowed with a (singular) foliation $\fol$. Consider a current $T$ of dimension~$2$ and denote by $\supT \subseteq M$ its support. The current $T$ is said to be {\it invariant}\, by $\fol$ if $T (\beta) =0$ for every $2$-form $\beta$ vanishing on $\fol$. An invariant current is sometimes also called a {\it foliated current}\, or a {\it current directed by $\fol$}. Because $M$ is a complex surface and $\fol$ is a holomorphic foliation, every current $T$ as above is of type~$(1,1)$. In fact, on a neighborhood of a regular point, we can choose coordinates $(x,y)$ in which $\fol$ is given by $dx =0$. 
Hence a $2$-form $\beta$ vanishing on $\fol$ must be given by $\alpha_x \wedge dx + \alpha_{\overline{x}} \wedge d\overline{x}$ where $\alpha_x, \, \alpha_{\overline{x}}$ are appropriate $1$-forms. Now the invariance of $T$ under $\fol$ becomes encoded in the normal form $$ T = T(x,y) dx \wedge d\overline{x} $$ where $T(x,y)$ is naturally identified with a distribution. This shows that $T$ is a $(1,1)$-current as claimed. For these currents, the notion of being {\it positive}\, becomes especially transparent: $T$ is said to be positive if the local coefficient $T (x,y)$ is identified with a positive measure. An equivalent condition consists of saying that for every smooth $(1,0)$-form $\alpha$ the wedge product $T \wedge i\alpha \wedge \overline{\alpha}$ is a positive measure on $M$. The most basic example of a foliation admitting a closed positive current is provided by a compact leaf of a foliation $\fol$. More precisely, let $C$ be a compact curve (smooth, to simplify) which is invariant by $\fol$. Consider then the current of integration over $C$, namely the current $T$ given by $$ T (\beta) = \int_C \beta \, . $$ The fact that $C$ happens to be invariant by $\fol$ implies that $T$ is also invariant by $\fol$ in the sense mentioned above. Moreover, Stokes' formula shows that $T$ is, indeed, a closed current. Since $T$ is clearly positive, we conclude that $T$ is a closed positive current invariant by $\fol$. Next we shall consider a more geometric point of view to study closed currents invariant by a foliation $\fol$ as above. This point of view relies on the notion of {\it transverse invariant measure}, which is essentially due to J. Plante. Since these transverse invariant measures are naturally defined for {\it regular foliations}, we assume for the time being that ${\rm Sing} \, (\fol) =\emptyset$.
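In passing, the closedness of the integration current considered above is a one-line application of Stokes' formula (spelled out here for convenience): the compact curve $C$ has empty boundary, so that

```latex
% For every smooth 1-form \alpha on M:
T ( d\alpha ) \; = \; \int_C d\alpha \; = \; \int_{\partial C} \alpha \; = \; 0 \, ,
% hence T vanishes on exact 2-forms, i.e. dT = 0 in the sense of currents.
```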
Since $M$ is compact and ${\rm Sing} \, (\fol) =\emptyset$, we can consider a finite covering $\{ V_i \}$ of $M$ by foliated charts $h_i : V_i \rightarrow \mathbb{D} \times \Sigma_i$ of $\fol$, where $\mathbb{D}$ stands for the unit disc of $\C$. The fact that the $h_i$'s define a foliated atlas implies that the change of coordinates $h_j \circ h_i^{-1} (x,y)$ has the special form $(f_{ij} (x,y), \gamma_{ij} (y))$. \begin{defnc} \label{plante?} With the above notations, a transverse invariant measure for $\fol$ consists of a collection $\mu_i$ of (positive) finite measures over the transverse sections $\Sigma_i$ which are invariant by change of coordinates. In other words, for every pair $i,j$ and every Borel set $B \subset \Sigma_i$, one has $\mu_i (B) = \mu_j (\gamma_{ij} (B))$. \end{defnc} Transverse invariant measures naturally provide closed (positive) currents invariant by the foliation $\fol$ in question by means of the following construction. Let $\{ \phi_i \}$ be a partition of unity subordinate to the finite covering $\{ V_i \}$. Given a $2$-form $\beta$, we define a current $T$ by setting \begin{equation} T (\beta) = \sum_i \int_{\Sigma_i} \left ( \int_{\rm Plaque} (\phi_i \beta) \right) d\mu_i \; , \label{sofoi} \end{equation} where the ``Plaque'' is naturally identified with the unit disc $\mathbb{D}$ through the coordinates $h_i$'s. It is easy to check that $T$ is a closed positive current. In fact, it is a continuous linear operator on the Fr\'echet space of smooth $2$-forms. The conditions of being closed, positive and invariant by $\fol$ can immediately be checked. Conversely a closed positive current invariant by $\fol$ can be ``disintegrated'' to yield a transverse invariant measure, so that the two objects turn out to be equivalent as originally pointed out by D. Sullivan \cite{denis}. Let us now go back to the case of a holomorphic foliation $\fol$ with singularities.
We then consider the open surface $M \setminus {\rm Sing} \, (\fol)$ along with a covering $\{ V_i \}$, $i\in \N$, by foliated charts $h_i : V_i \rightarrow \mathbb{D} \times \Sigma_i$ for the restriction of $\fol$ to $M \setminus {\rm Sing} \, (\fol)$. Here the covering $\{ V_i \}$ need not be finite. Again, if we are given a closed positive current $T$ invariant by $\fol$, the procedure of ``disintegration'' mentioned above can still be carried out word for word to yield a transverse invariant measure for the restriction of $\fol$ to $M \setminus {\rm Sing} \, (\fol)$ as in Definition~\ref{plante?}. Moreover, from this transverse invariant measure we can recover the current $T$ by means of Formula~(\ref{sofoi}). Although the ``summation over $i$'' (the indices of foliated coordinates) may now be infinite, the series is naturally uniformly convergent, so that it does define a current (i.e. a continuous operator) that actually coincides with $T$. Here it might be worth making a minor comment concerning the passage from an ``abstract'' transverse invariant measure for $\fol$ to a closed positive current invariant by $\fol$. This remark, however, will not be used anywhere in this work since we always start with a current already defined on $M$. Consider a transverse invariant measure for $\fol$ on the open set $M \setminus {\rm Sing} \, (\fol)$ as in Definition~\ref{plante?} and the corresponding operator on smooth $2$-forms given by~(\ref{sofoi}). Since the summation over~$i$ is possibly infinite, it is necessary to make sure that the operator in question is well-defined and continuous. This clearly amounts to bounding the mentioned integral on a neighborhood of the singular points of $\fol$. While I believe that this bound always exists in the context of holomorphic foliations on complex surfaces, this problem cannot {\it a priori}\, be reduced to an application of some Riemann extension or Hartogs theorem.
The difficulty here is that we do not know {\it a priori}\, whether or not the corresponding integration of $2$-forms is well-defined on a punctured neighborhood of the singularity in question. We shall not elaborate on this discussion since, as mentioned, it is not necessary for our purposes. Let us finish this paragraph with a well-known lemma that will often be used in the course of this work. \begin{lema} \label{atomicmass} Let $T$ be a closed positive current invariant by $\fol$. Assume that a point $p \in M \setminus {\rm Sing}\, (\fol)$ has positive mass with respect to the transverse invariant measure for $\fol$ induced by $T$. Then the leaf $L_p$ of $\fol$ through this point is contained in a compact curve. \end{lema} \noindent {\it Proof}. To prove the statement, let $\overline{L}_p$ denote the closure of $L_p$. We just need to show that the set $\overline{L}_p \setminus L_p$ formed by the (proper) accumulation points of $L_p$ is contained in the singular set of $\fol$. Indeed, since ${\rm Sing}\, (\fol)$ has codimension~$2$, it follows from the classical theorem of Remmert-Stein that $\overline{L_p}$ is an analytic set. To check the claim, suppose for a contradiction that $q$ is a regular point of $\fol$ belonging to $\overline{L}_p \setminus L_p$. Consider a trivializing coordinate chart around $q$. Since $q \in \overline{L}_p \setminus L_p$, there exists a sequence of points $\{p_i \} \subset L_p$ such that $p_i \rightarrow q$. Moreover, for $i\neq j$, the points $p_i, p_j$ belong to different plaques of the mentioned foliated chart. Denoting by $\Sigma$ the corresponding local transversal, the measure on $\Sigma$ associated to each of these plaques is a positive constant. It then follows from Equation~(\ref{sofoi}) that the corresponding current has ``infinite mass'', i.e. the integrals in~(\ref{sofoi}) diverge for a suitable choice of $\beta$.
The resulting contradiction establishes the lemma.\qed \subsection{Brief review of Bonatti-Langevin-Moussu} In this paragraph we shall expand on the method developed in \cite{blm} for producing hyperbolic holonomy for certain holomorphic foliations (cf. also \cite{ghys-bourbaki} and references therein). The study of the oriented foliation $\calh$ consisting of trajectories yielding ``contractive holonomy'' is the central object of this section. Consider a surface $M$ endowed with a holomorphic foliation $\fol$ as before. Let $\omega$ be a global non-closed meromorphic $1$-form defining $\fol$ on $M$. The existence of this form is guaranteed if $M$ is projective (cf. Section~2.1). Also denote by $(\omega )_0$ (resp. $(\omega)_{\infty}$) the divisor of zeros (resp. poles) of $\omega$. Note that, in most applications, the sets $(\omega )_0, (\omega )_{\infty}$ are viewed as ordinary algebraic curves rather than as divisors (i.e. no multiplicity is associated to their components). Next let $\omega_1$ be the $1$-form defined by \begin{equation} d\omega = \omega \wedge \omega_1 \, . \label{GV} \end{equation} To obtain a $1$-form $\omega_1$ satisfying the equation above it suffices to find a meromorphic vector field $X$ on $M$ such that $\omega (X) =1$. In fact, for this vector field $X$ we have $d\omega = \omega \wedge \mathcal{L}_X (\omega)$, where $\mathcal{L}_X$ stands for the Lie derivative. Note also that two $1$-forms satisfying the mentioned equation must differ by a multiple of $\omega$. In particular it follows that the values of $\omega_1$ on vectors tangent to $\fol$ are well-defined even though $\omega_1$ itself is not. This ambiguity, however, can be avoided if $\omega_1$ is regarded as a {\it foliated $1$-form}\, (as opposed to an ``ordinary'' $1$-form). By a foliated $1$-form, it is meant a $1$-form that is defined only for vectors tangent to (regular) leaves of $\fol$.
In other words, a foliated $1$-form is not a usual $1$-form on $M$ since at a generic point $p$ this form is not defined for vectors in $T_pM$ that are transverse to the leaf of $\fol$ through $p$. Still another way of thinking of a foliated $1$-form consists of saying that it is a meromorphic section of the cotangent bundle of $\fol$, cf. Section~2.1. The preceding discussion can then be summarized by stating that Equation~(\ref{GV}) unequivocally defines a meromorphic foliated $1$-form on $M$. This foliated $1$-form will systematically be denoted by $\omega_1$. The foliated $1$-form $\omega_1$ can explicitly be computed. If $(x,y)$ are local coordinates about a regular point $p$ of $\fol$ in which $\omega = F(x,y) dy$ then $\omega_1$ is given by $$ -\frac{\partial F/ \partial x}{F} dx \, ; $$ indeed $d\omega = (\partial F /\partial x) \, dx \wedge dy = \omega \wedge \omega_1$. Clearly the above definition is compatible with foliated changes of coordinates so that it gives rise to a global (meromorphic) foliated $1$-form $\omega_1$ on $M$ or, equivalently, to a global meromorphic section of the cotangent bundle of $\fol$. This formula also shows that the form $\omega_1$ is holomorphic on a neighborhood of $p$ unless $p$ belongs to the union of $(\omega )_0$ and $(\omega )_{\infty}$. A more accurate statement concerning the holomorphic nature of $\omega_1$ is given below. \begin{lema} \label{newversionSection2.11} Let $p \in M$ be a regular point of $\fol$. Suppose that all the irreducible components of $(\omega )_0 \cup (\omega )_{\infty}$ passing through $p$ are invariant by $\fol$. Then $\omega_1$ is holomorphic at $p$. \end{lema} \noindent {\it Proof}. Consider foliated coordinates $(x,y)$ about $p$ so that $\omega$ becomes $F(x,y) dy$. We can assume that $p$ belongs to $(\omega )_0 \cup (\omega )_{\infty}$, otherwise $\omega_1$ is holomorphic as already seen.
Nonetheless the assumption that all components of $(\omega )_0 \cup (\omega )_{\infty}$ passing through $p$ are invariant by $\fol$ implies that there can be only one such component, which coincides in the coordinates $(x,y)$ with the axis $\{ y=0\}$. In other words, we have $\omega = F(x,y) dy = y^k f(x,y) dy$ where $k \neq 0$ and $f$ is a holomorphic function satisfying $f(0,0) \neq 0$. Now a direct computation yields $$ \omega_1 = -\frac{\partial F /\partial x}{F} dx = -\frac{\partial f / \partial x}{f} dx \, . $$ The statement follows since $f(0,0) \neq 0$.\qed Conversely we have: \begin{lema} \label{newversionSection2.22} Let $C \subset M$ be an irreducible component of $(\omega )_0 \cup (\omega )_{\infty}$ that is not invariant by $\fol$. Then $\omega_1$ has poles of order~$1$ over $C$. \end{lema} \noindent {\it Proof}. Let $p \in C$ be a regular point for $\fol$ which does not belong to any irreducible component of $(\omega )_0 \cup (\omega )_{\infty}$ different from $C$ itself. It suffices to show that the divisor of poles of $\omega_1$ locally coincides with $C$ with multiplicity equal to~$1$. As before we can choose foliated coordinates $(x,y)$ about $p$ where $\omega = F(x,y) dy = x^k f(x,y) dy$, $k \neq 0$, for some holomorphic function $f$ satisfying $f(0,0) \neq 0$. Now $$ \omega_1 = -\frac{\partial F/ \partial x}{F} dx = \left( - \frac{k}{x} -\frac{\partial f / \partial x}{f} \right) dx \, . $$ The statement follows.\qed Consider now a regular leaf $L \subset M$ of $\fol$ where $\omega_1$ does not vanish identically. The restriction of $\omega_1$ to $L$ is a meromorphic $1$-form on the Riemann surface $L$ (we shall often say that it is an abelian form on $L$). Therefore it induces a pair of (real one-dimensional) oriented singular foliations on $L$, namely the foliations given by $\{ {\rm Im}\, (\omega_1 )=0 \}$ and $\{ {\rm Re}\, (\omega_1 )=0 \}$.
These foliations will respectively be denoted by $\calh$ and $\calh^{\perp}$ and they are mutually orthogonal for the underlying conformal structure of $L$. The orientation of $\calh$ (resp. $\calh^{\perp}$) is determined by the increasing direction of ${\rm Re}\, (\omega_1)$ (resp. ${\rm Im}\, (\omega_1 )$). More generally, the conformal structure of $L$ also allows us to define the oriented foliation $\calh^{\theta}$ whose trajectories form an angle $\theta$ with those of $\calh$ (where $\theta$ belongs to $(-\pi/2, \pi/2)$). Finally by letting the leaf $L$ vary, the foliations $\calh, \calh^{\perp}$, or more generally $\calh^{\theta}$, can also be viewed as singular foliations defined on $M$. We shall return to this point when discussing the singularities of $\calh, \calh^{\perp}$. Now the discussion in Lemma~\ref{newversionSection2.22} yields the following lemma borrowed from \cite{blm}. \begin{lema} \label{blm1} Let $p \in M$ be a regular point of $\fol$ and denote by $L$ the leaf through $p$. Let $C \subset M$ be an irreducible component of $(\omega )_0 \cup (\omega )_{\infty}$ that is not invariant by $\fol$. Then we have: \noindent 1. Suppose that $p \in C$ but $p$ does not belong to $(\omega)_0$. Then $p$ is a source for $\calh$. Precisely there is a (complex one-dimensional) local coordinate $\textsc{X}$ along $L$ where the restriction of $\omega_1$ to $L$ becomes $\omega_1 = md\textsc{X}/\textsc{X}$, $m \in \N^{\ast}$. In particular the leaves of $\calh$ are radial lines emanating from $p \in L$ (identified to $0 \in \C$). \noindent 2. Suppose that $p \in C$ but $p$ does not belong to $(\omega)_{\infty}$. Then $p$ is a sink for $\calh$. Precisely there is a (complex one-dimensional) local coordinate $\textsc{X}$ along $L$ where the restriction of $\omega_1$ to $L$ becomes $\omega_1 = - md\textsc{X}/\textsc{X}$, $m \in \N^{\ast}$. In particular the leaves of $\calh$ are radial lines converging to $p \in L$ (identified to $0 \in \C$). \end{lema} \noindent {\it Proof}.
Consider the first case. Since $p$ is regular, we have $\omega = d\textsc{Y}/f (\textsc{X}, \textsc{Y})$ where $f (0,0) =0$ for suitable coordinates $\textsc{X}, \textsc{Y}$. The fact that $f(0,0) =0$ follows from the assumption $p \in (\omega)_{\infty}$ and $p \not\in (\omega)_0$. Modulo performing a further change of coordinates, we can assume without loss of generality that $f (\textsc{X}, 0) = \textsc{X}^m$ for some $m \in \N^{\ast}$. Now the equation $d\omega = \omega \wedge \omega_1$ yields the desired form for $\omega_1$. The second case can analogously be treated.\qed Naturally an analogous discussion applies to the foliations $\calh^{\theta}$ (with $\theta \in (-\pi/2, \pi/2)$). Thus we already know that components of $(\omega )_0 \cup (\omega )_{\infty}$ that are not invariant by $\fol$ give rise to singularities of $\calh, \, \calh^{\perp}$ at regular points of $\fol$. Next the foliated form $\omega_1$ also has a divisor of zeros (resp. poles) denoted by $(\omega_1)_0$ (resp. $(\omega_1)_{\infty}$). It follows from the combination of Lemma~\ref{newversionSection2.11} and Lemma~\ref{newversionSection2.22} that $(\omega_1)_{\infty} \subset (\omega)_0 \cup (\omega)_{\infty}$. Besides no irreducible component of $(\omega_1)_{\infty}$ can be invariant by $\fol$. Let us now consider a component $C$ of $(\omega_1)_0$ that is not invariant by $\fol$. The following lemma is also borrowed from \cite{blm}. \begin{lema} \label{blm2} Let $p$ be a regular point of $\fol$ which does not belong to $(\omega)_0 \cup (\omega)_{\infty}$. Suppose that $p$ lies in a component $C$ of $(\omega_1)_0$ that is not invariant by $\fol$ and denote by $L$ the leaf of $\fol$ containing $p$. Then the behavior of $\calh$ at $p$ is that of a saddle with $2m$ separatrices. Precisely, in suitable coordinates $\textsc{X}$ along $L$, the restriction of $\omega_1$ to $L$ becomes $\omega_1 = m \textsc{X}^{m-1}\, d\textsc{X}$ for $m \geq 2$.
\end{lema} \noindent {\it Proof}\,: Since $p$ is regular and $p \not\in (\omega)_0 \cup (\omega)_{\infty}$, there are local coordinates $\textsc{X}, \textsc{Y}$ around $p$ in which $\omega = f (\textsc{X}, \textsc{Y}) d\textsc{Y}$ with $f$ holomorphic. Suppose first that $f(\textsc{X}, 0)$ does not vanish identically. Then the restriction of $\omega_1$ to $\{ \textsc{Y} =0\}$ is given by $-(\partial f /\partial \textsc{X}) d\textsc{X}/f$ where the functions are evaluated at $(\textsc{X} ,0)$. The result then follows. On the other hand, if $\omega$ vanishes identically on $\{ \textsc{Y} =0\}$ (or has poles over this leaf) then $\omega = \textsc{Y}^k f \, d\textsc{Y}$ with $f$ as before. This still gives $\omega_1 = -(\partial f /\partial \textsc{X}) d\textsc{X}/f$ so that the statement follows.\qed \begin{obs} \label{2obs2} {\rm Note that $m \textsc{X}^{m-1} d\textsc{X}$ is nothing but the lift of the regular form $d \textsc{X}$ through the ramified covering $\textsc{X} \mapsto \textsc{X}^m$. In particular $m \textsc{X}^{m-1} d\textsc{X}$ has $2m$ separatrices (namely the lifts of the separatrices $\R_+$ and $\R_-$ of $d\textsc{X}$) with alternate orientation.} \end{obs} Let us split the divisor of zeros $(\omega)_0$ of $\omega$ into two divisors $(\omega)_0^{\fol}$ and $(\omega)_0^{\perp \fol}$ as follows: an irreducible component $C$ of $(\omega)_0$ belongs to $(\omega)_0^{\fol}$ if and only if it is invariant by $\fol$. Otherwise it belongs to $(\omega)_0^{\perp \fol}$ (the multiplicity of each component remaining unchanged). Similarly we define the split of $(\omega)_{\infty}$ into $(\omega)_{\infty}^{\fol}$ and $(\omega)_{\infty}^{\perp \fol}$. Let us now summarize the information so far obtained about possible singular points of $\calh, \, \calh^{\perp}$ (and of $\calh^{\theta}$). Singular points for these foliations belong to the list below. \begin{enumerate} \item Singular points of $\fol$ (to be detailed later). \item Irreducible components of $(\omega)_0^{\perp \fol}$.
These points are sink singularities for $\calh$. \item Irreducible components of $(\omega)_{\infty}^{\perp \fol}$. These points are source singularities for $\calh$. \vspace{0.1cm} Note that the foliated $1$-form $\omega_1$ is holomorphic away from ${\rm Sing}\, (\fol) \cup (\omega)_0^{\perp \fol} \cup (\omega)_{\infty}^{\perp \fol}$. In particular the support of the pole divisor of $\omega_1$ is the union of the components of $(\omega)_0^{\perp \fol}$ and $(\omega)_{\infty}^{\perp \fol}$. Splitting also the zero divisor $(\omega_1)_0$ of $\omega_1$ into divisors $(\omega_1)_0^{\fol}$ and $(\omega_1)_0^{\perp \fol}$, consisting respectively of components that are invariant by $\fol$ and components that are not invariant by $\fol$, we identify the locus of the possible additional singularities of $\calh$ (resp. $\calh^{\perp}, \, \calh^{\theta}$), namely: \vspace{0.1cm} \item Irreducible components of $(\omega_1)_{0}^{\perp \fol}$. These components yield saddle singularities for $\calh$ (resp. $\calh^{\perp}, \, \calh^{\theta}$). \item Irreducible components of $(\omega_1)_{0}^{\fol}$. Over these compact leaves of $\fol$ the foliations $\calh, \, \calh^{\perp}$ and $\calh^{\theta}$ are not defined. \end{enumerate} We are now able to give the geometric meaning of the foliated form $\omega_1$. Consider a path $c$ contained in a leaf $L$ of $\fol$ along with local transverse sections $\Sigma_0, \, \Sigma_1$ respectively through $c(0), \, c(1)$. The parallel transport over the leaves of $\fol$ gives rise to a local diffeomorphism ${\rm Hol}\, (c)$ from $\Sigma_0$ to $\Sigma_1$ taking $c(0)$ to $c(1)$ called the holonomy map of $\fol$ over $c$. It is well-known that the derivative of ${\rm Hol}\, (c)$ is not intrinsically defined unless $c$ is a loop. These derivatives however can be considered for a fixed choice of parametrizations for the transverse sections $\Sigma_0, \, \Sigma_1$ (and for fixed parametrizations they can be considered whether or not $c$ is a loop).
To be more precise, suppose that $\Sigma_0, \, \Sigma_1$ are parameterized by $\omega$, i.e. consider local coordinates $\varphi_i : \Sigma_i \rightarrow \C$, $i=0,1$, defined by $$ \varphi_0 (p) = \int_{c(0)}^p \omega \; \; \, {\rm and} \; \; \, \varphi_1 (q) = \int_{c(1)}^q \omega $$ where the integrals are well-defined modulo choosing $\Sigma_0, \, \Sigma_1$ simply connected. In these coordinates, the holonomy map ${\rm Hol}\, (c)$ can be identified to a local diffeomorphism of $(\C, 0)$. This local diffeomorphism satisfies \begin{equation} ({\rm Hol}\, (c))' (c(0)) = \exp \left ( -\int_c \omega_1 \right) \, . \label{PLemma} \end{equation} This formula is sometimes referred to as Poincar\'e Lemma. Several comments are needed here to fully explain its meaning. First let us fix a (finite) covering of a compact part $K$ of $M \setminus {\rm Sing}\, (\fol)$ by foliated coordinates $\varphi_i : U_i \rightarrow \C^2$ where each $U_i$ is equipped with a local transverse section $\Sigma_i$ parametrized by $\omega$ as above. Setting $\varphi_i (U_i) = D \times {\bf T}_i$ where $D$ stands for the unit disc and where ${\bf T}_i$ is identified with $\Sigma_i$ parameterized as indicated, the Poincar\'e Lemma becomes applicable to every path $c \subset K$ contained in a leaf of $\fol$ (modulo an obvious decomposition of $c$ into paths contained in the open sets $U_i$). Some further remarks are needed: \begin{itemize} \item The neighborhoods $U_i$ are chosen so that $(\omega)_0^{\perp \fol} \cap \varphi_i^{-1} (\partial D \times {\bf T}_i) = \emptyset$. Similarly $(\omega)_{\infty}^{\perp \fol} \cap \varphi_i^{-1} (\partial D \times {\bf T}_i) = \emptyset$ and $(\omega_1)_0^{\perp \fol} \cap \varphi_i^{-1} (\partial D \times {\bf T}_i) = \emptyset$. 
\item If $C$ is a component of $(\omega)_0^{\fol}$ (in particular $C$ is invariant by $\fol$), then the parametrization of $\Sigma_i$ is actually ramified at the ``origin'' (it is a local ramified covering rather than a local diffeomorphism). An analogous conclusion (on a neighborhood of infinity) applies to components of $(\omega)_{\infty}^{\fol}$. \item If $K' \subset K$ is a compact part of $M \setminus ({\rm Sing}\, (\fol) \cup (\omega)_0^{\fol} \cup (\omega)_{\infty}^{\fol})$ then the parametrization of $\Sigma_i$ restricted to $K'$ is ``equivalent'' to the parametrization induced by an auxiliary Hermitian metric on $M$ (here it is to be noted that the first item above ensures that no $\Sigma_i$ intersects $(\omega)_0^{\perp \fol}$, $(\omega)_{\infty}^{\perp \fol}$). In fact, every Hermitian metric on $M$ induces parametrizations that are pairwise ``equivalent'' in the sense that ``corresponding lengths'' are mutually controlled, from below and from above, by multiplicative constants. \item If $c \subset K'$ is a path contained in a trajectory $l$ of $\calh$, $l \subset L$ where $L$ is a leaf of $\fol$, then the holonomy ${\rm Hol}\, (c)$ is such that $({\rm Hol}\, (c))' (c(0))$ has absolute value strictly smaller than~$1$ (with respect to the above fixed foliated coordinates). Indeed, by construction, the integral of $\omega_1$ over $c$ increases monotonically with the length of $c$, cf. Formula~(\ref{PLemma}). \end{itemize} Throughout the paper, we shall assume that $\fol$ is not a pencil, that is, not all the leaves of $\fol$ are compact. According to Jouanolou \cite{joa} this actually means that $\fol$ leaves only finitely many algebraic curves invariant. In particular the support of $(\omega)_0^{\fol} \cup (\omega)_{\infty}^{\fol} \cup (\omega_1)_0^{\fol}$ consists of finitely many algebraic curves (if not empty). We are now ready to explain the fundamental observation of \cite{blm}.
Let $K'$ be as above and consider a path $c \subset L \cap K'$ parametrizing a trajectory of $\calh$ (i.e. $\omega_1 (c(t)). c'(t)$ is always a nonnegative real number). In particular the holonomy map ${\rm Hol}\, (c)$ (measured with respect to the identifications fixed above) is such that $({\rm Hol}\, (c))' (c(0))$ decays exponentially with the {\it length of $c$}. The notion of {\it length of $c$} can be identified with the length measured in $D$ for each coordinate $\varphi_i$. Alternatively this length can be measured with respect to the fixed auxiliary Hermitian metric on $M$ (the two notions of lengths being equivalent up to multiplicative constants, i.e. the metrics induced on $L$ are quasi-isometric). Also, because $c$ is contained in $K'$, the notions of distance in the transverse sections $\Sigma_i$ induced by the parametrization through $\omega$ and by the mentioned Hermitian metric are mutually controlled by multiplicative constants. Let then $c$ be defined on the interval $[0,t_0] \subset \R$. The corresponding holonomy map ${\rm Hol}\, (c)$ is then defined on a small disc $D_0 (r) \subset \Sigma_{i_0}$ (for some $i_0$). Naturally ${\rm Hol}\, (c)$ maps $D_0 (r)$ diffeomorphically onto a neighborhood of $c (t_0) \in \Sigma_{i_1}$ (for some $i_1$). It is observed in \cite{blm} that the contractive character of the holonomy along the oriented leaves of $\calh$ allows one to have a uniform bound on the radius of $D_0 (r)$ regardless of the point $c (t_0)$ and of the length of $c$. Denoting by $\calh_{\vert K'}$ the restriction of $\calh$ to $K'$ one has: \begin{teo} \label{blm} {\rm ({\bf [B-L-M]})} There is a uniform $r > 0$ with the following properties: \noindent 1. Let $l_p$ be an oriented trajectory of $\calh_{\vert K'}$ passing through $p \in K'$.
If $c: [0 , t_0] \rightarrow l_p \subset L_p \subset K$ is a parametrization of (a segment of) $l_p$ ($p = c(0)$), then the corresponding holonomy map ${\rm Hol}\, (c)$ is defined on $D_p (r) \subseteq \Sigma_{i_0}$ (for some $i_0$). Besides ${\rm Hol}\, (c)$ maps $D_p (r)$ diffeomorphically onto its image in $\Sigma_{i_1}$ (for some $i_1$). \noindent 2. Assume, in addition, that the distance of $l_p$ to the divisor $(\omega_1)_0^{\perp \fol}$ is bounded from below by a positive constant $\delta$. Then there are uniform constants $C >0$, $k >0$ ($k$ depending solely on $\delta$) such that \begin{equation} \vert ({\rm Hol}\, (c)) (q) \vert \leq C \exp \, (-k \, {\rm length}\, (c) /2) \; , \label{Contraction1} \end{equation} for every $q \in D_p (r)$ and where ${\rm length}\, (c)$ stands for the length of the path $c$.\qed \end{teo} In item~2 above, it is to be noted that the asymptotic exponential decay of the diameter of the set $({\rm Hol}\, (c)) (D_p(r))$ has an intrinsic meaning since the length of $c$ (as well as the notion of distance in the transverse sections $\Sigma_i$ restricted to $K'$) varies in a way controlled by multiplicative constants as pointed out above. In particular if these metrics are changed, Formula~(\ref{Contraction1}) remains valid modulo changing the values of the constants $C, k$. \begin{obs} \label{2obs3} {\rm The reader will check that the same statement remains true for the foliations $\calh^{\theta}$ for a fixed $\theta$ in $(-\pi/2, \pi/2)$. All these statements will be revisited and sharpened later in this paper.} \end{obs} \section{The structure of $\calh$ around a singularity in the Siegel domain} The local structure of $\calh$ around a regular point of $\fol$ was described in the preceding section. The next step is to discuss the analogous problem on a neighborhood of a singularity $p$ of $\fol$ which belongs to the Siegel domain.
About this singularity there are coordinates $(u,v)$ ($p\simeq (0,0)$) in which $\omega$ becomes \begin{equation} \omega = h(u,v) [\lambda_1 u (1+r^1(u,v)) \, dv \; + \; \lambda_2 v(1+r^2(u,v)) \, du] \label{siegel1} \end{equation} where $h$ is meromorphic and $r^1 ,r^2$ are holomorphic functions verifying $r^1 (0,0) =r^2(0,0) =0$. Finally one also has $\lambda_1 \lambda_2 \neq 0$ and $\lambda_1/\lambda_2 \in \R_+$ (as to the sign conventions, note that we are now using differential forms, rather than vector fields, to represent a singularity in the Siegel domain). In particular $\fol$ possesses exactly $2$ separatrices at $p \simeq (0,0)$ namely, those given by $\{ u=0\}$ and $\{ v=0\}$. Throughout this section we work under the following extra assumption: \noindent {\it Local invariance condition}: One has $h(u,v) =u^a v^b$ for some $a,b \in \Z$. The content of the local invariance condition is that, on a small neighborhood of $p$, the curves $(\omega)_0$ and $(\omega)_{\infty}$ are invariant by $\fol$. As it will be shown later, this assumption does not affect the generality of our arguments since it can always be obtained by performing finitely many blow-ups. Because of the local invariance condition, the $1$-form $\omega$ can be written in the coordinates $u,v$ as \begin{equation} \omega = u^a v^b [\lambda_1u (1+r^1(u,v)) dv + \lambda_2 v (1+r^2 (u,v)) du ] \, . \label{siegel2} \end{equation} As seen in Section~2.3, the foliated form $\omega_1$ can be obtained by restriction to the leaves of $\fol$ of an actual (locally defined) $1$-form $\Omega_1$ satisfying $d\omega = \omega \wedge \Omega_1$. Setting $\Omega_1 = fdv + gdu$, it follows that \begin{equation} f\lambda_2 v (1+r^2) - g \lambda_1 u (1+r^1) = \lambda_1 (1+a)(1+r^1) + \lambda_1 ur^1_u - \lambda_2(1+b) (1+r^2) -\lambda_2 vr^2_v \label{siegel3} \end{equation} where $r^1_u$ (resp. $r^2_v$) stands for the partial derivative of $r^1$ (resp. $r^2$) with respect to $u$ (resp. $v$).
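As a consistency check on Equation~(\ref{siegel3}) (this computation is ours and is included only for the reader's convenience), consider the linear model where $a=b=0$ and $r^1 \equiv r^2 \equiv 0$, so that $\omega = \lambda_1 u \, dv + \lambda_2 v \, du$. A direct computation gives $$ d\omega = (\lambda_1 - \lambda_2) \, du \wedge dv \; \; \, {\rm and} \; \; \, \omega \wedge \Omega_1 = (f \lambda_2 v - g \lambda_1 u) \, du \wedge dv \, , $$ so that the equation $d\omega = \omega \wedge \Omega_1$ reduces to $f\lambda_2 v - g\lambda_1 u = \lambda_1 - \lambda_2$, which is exactly Equation~(\ref{siegel3}) in this special case.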
Let us first consider the behavior of $\calh$ on the separatrices $\{ u=0\}$ and $\{ v=0\}$. \begin{lema} \label{3lema1} Suppose that $\lambda_1(1+a) - \lambda_2 (1+b) \neq 0$. Then the behavior of $\calh$ over one of the separatrices is that of a sink (as in Lemma~\ref{blm1}). Besides, on the other separatrix, $\calh$ behaves like a source. \end{lema} \noindent {\it Proof}\,: The restriction of $\Omega_1$ to $\{ v=0\}$ is the Abelian form $g(u,0) du$. By letting $v=0$ in Equation~(\ref{siegel3}), we obtain $$ -g(u,0)\lambda_1 u (1+r^1(u,0)) = \lambda_1 (1+a)(1+r^1(u,0)) + \lambda_1 ur^1_u (u,0) -\lambda_2(1+b)(1+r^2(u,0)) \, . $$ Hence \begin{equation} g(u,0) = - \frac{\lambda_1(1+a) -\lambda_2(1+b)}{\lambda_1 u} + \widetilde{s}_g (u) \label{siegel4} \end{equation} where $\widetilde{s}_g (u)$ is holomorphic around $0 \in \C$. Similarly, on $\{ u=0\}$, $\Omega_1$ becomes $f(0,v) dv$ and Equation~(\ref{siegel3}) yields \begin{equation} f(0,v) = \frac{\lambda_1(1+a) -\lambda_2(1+b)}{\lambda_2 v} + \widetilde{s}_f (v) \label{siegel5} \end{equation} where $\widetilde{s}_f (v)$ is holomorphic around $0 \in \C$. Since $\lambda_1/\lambda_2 \in \R_+$, the statement follows from comparing Equations~(\ref{siegel4}) and~(\ref{siegel5}) and recalling that the restriction of $\Omega_1$ to the leaves of $\fol$ coincides with $\omega_1$.\qed \begin{obs} \label{restrictionholomorphic} {\rm In the case where $\lambda_1(1+a) - \lambda_2 (1+b) = 0$ the calculation above shows that $\omega_1$ is holomorphic over both separatrices of $\fol$ at $p$.} \end{obs} As a matter of fact we also need to control $\omega_1$ (or equivalently to understand the trajectories of $\calh$) on the leaves of $\fol$ distinct from the separatrices. To abridge notations, in the sequel $\fol$ is going to be considered as a foliation defined on a neighborhood $U$ of $(0,0) \in \C^2$. The corresponding arguments should also be understood modulo reducing this neighborhood. 
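In the linear model where $a=b=0$ and $r^1 \equiv r^2 \equiv 0$ (a special case of our own, chosen for illustration), Equations~(\ref{siegel4}) and~(\ref{siegel5}) become exact: $$ g(u,0) = -\frac{\lambda_1 - \lambda_2}{\lambda_1 u} \; \; \, {\rm and} \; \; \, f(0,v) = \frac{\lambda_1 - \lambda_2}{\lambda_2 v} \, . $$ If, say, $\lambda_1 > \lambda_2 > 0$, the restriction of $\omega_1$ to $\{ v=0\}$ is a negative real multiple of $du/u$ while its restriction to $\{ u=0\}$ is a positive real multiple of $dv/v$. Hence $\calh$ has a sink at the origin of the first separatrix and a source at the origin of the second one, in accordance with Lemma~\ref{3lema1}.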
Recall that $\fol$ is defined on $U$ by the $1$-form $$ \eta = \lambda_1 u (1+r^1) dv + \lambda_2 v (1+r^2) du $$ so that $\omega = u^a v^b \eta$. For the rest of this section we always suppose that $\lambda_1(1+a) - \lambda_2 (1+b) \neq 0$. \begin{lema} \label{3lema2} The form $\omega_1$ has no zeros on $U$. In fact, there is a positive constant $C >0$ such that for every $p \in L \subset U$ and unit vector $\mathtt{v} \in T_pL$ one has $$ \Vert \omega_1 (p) . \mathtt{v} \Vert \geq C >0 \, . $$ \end{lema} \noindent {\it Proof}\,: Given $\epsilon_1 ,\epsilon_2 >0$ sufficiently small, let us denote by $\Sigma$ the local transverse section defined by $$ \Sigma = \{ (u,v) \in \C^2 \; ; \; u =\epsilon_1 \; {\rm and} \; \vert v \vert < \epsilon_2 \} \; . $$ Denoting by $\Sigma_{\fol}$ the saturated of $\Sigma$ by $\fol$, it is proved in \cite{mamo} (cf. also \cite{mat}, \cite{reis}) that $\Sigma_{\fol} \cup \{ u=0 \}$ contains a neighborhood of $(0,0) \in \C^2$. Since we need a parametrization of the leaf $L$ in order to estimate the restriction of $\omega_1$ to $L$, let us consider the set $D_1^- = \{ u \in \C \; ; \; \vert u \vert < \epsilon_1 \; {\rm and} \; u \not\in \R_- \} $. We then define $W = \{ (u,v) \in \C^2 \; ; \; u \in D_1^- \; {\rm and} \; \vert v \vert \leq \epsilon_2 \}$. Because $D_1^-$ is simply connected, the restriction of $\fol$ to $W$ does not present the local holonomy associated to the separatrix $\{ v=0 \}$. Thus, for fixed $y_0 \in \Sigma$, the leaf $L_0$ of $\fol$ restricted to $W$ through $y_0$ is the graph of a holomorphic function $h$. Precisely the argument of \cite{mamo} shows the existence of $h: D_{y_0} \subset D_1^- \rightarrow \C$ whose graph $\{ (u, h(u)) \}$, $u \in D_{y_0}$, coincides with $L_0$. Clearly to obtain estimates for the restriction of $\omega_1$ to the leaves of $\fol$, it suffices to estimate $\omega_1$ over leaves $L_0$ as above since $\R_-$ can be substituted by another semi-line in the definition of $D_1^-$.
Now, when dealing with $L_0$, we are allowed to use the parametrization $u \mapsto (u ,h(u))$. Fix a point $q = (u_q ,h(u_q)) \in L_0$. The tangent space to $L_0$ at $q$ is spanned over $\C$ by the vector $(1, h'(u_q))$ whose norm is not uniformly bounded on $U$. In any case $\omega_1 (q)$ evaluated over $(1, h'(u_q))$ coincides with the evaluation of $\Omega_1$ over the same vector. Thus we obtain \begin{equation} \omega_1 (q) . [1 , h' (u_q)] = \Omega_1 (q) . [1, h'(u_q)] = f . h' (u_q) + g \; . \label{siegel6} \end{equation} On the other hand, $\omega (q) . [1 , h' (u_q)] = 0$ so that Formula~(\ref{siegel2}) provides \begin{equation} h' (u_q) [1+r^1 (u_q , h(u_q))] = -\frac{\lambda_2 h(u_q)}{\lambda_1 u_q} (1+ r^2 (u_q , h(u_q))) \; . \label{siegel7} \end{equation} Therefore \begin{eqnarray} f . h' (u_q) + g & \! = \! & - f\frac{\lambda_2 h(u_q)}{\lambda_1 u_q} \frac{1+r^2}{1+r^1} + g \label{siegelpr1}\\ & \! = \! & \frac{-1}{\lambda_1 u_q (1+r^1)} [\lambda_1 (1+a)(1+r^1) \!+\! \lambda_1 u_q r^1_u \! - \! \lambda_2(1+b) (1+r^2) \! - \! \lambda_2 h(u_q) r^2_v ] \label{siegelpr2} \end{eqnarray} where the functions $r^1,r^2,r^1_u,r^2_v$ are evaluated at $(u_q , h(u_q))$, cf. Formula~(\ref{siegel3}). Now recall that $\Vert u_q \Vert$ and $\Vert h (u_q) \Vert$ are bounded by $\epsilon_1 , \epsilon_2$. It follows from (\ref{siegel7}) that the norm of $(1 ,h' (u_q))$ is bounded by $\max \{ 1 , {\rm const}/\Vert u_q \Vert \}$ for a suitable constant ${\rm const}$. The statement then results from the condition $\lambda_1(1+a) - \lambda_2 (1+b) \neq 0$.\qed We still need to describe the geometry of the leaves of $\calh$ on $L_0$. According to Lemma~\ref{3lema1}, we can suppose without loss of generality that the oriented leaves of $\calh$ on $\{ v=0 \}$ converge to $0 \in \{ v=0\} \subset \C^2$ (i.e. $0 \simeq (0,0)$ is a sink for $\calh$ over $\{ v=0\}$). Similarly $0 \in \{ u=0\} \subset \C^2$ is a source for the leaves of $\calh$ contained in $\{ u=0 \}$.
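In the linear model where $a=b=0$ and $r^1 \equiv r^2 \equiv 0$ (again an illustration of our own), the constant of Lemma~\ref{3lema2} can be made explicit. The tangency condition $\omega (q) \, . \, [1, h'(u_q)] = 0$ gives $\lambda_1 u_q h'(u_q) = -\lambda_2 h(u_q)$, while Formula~(\ref{siegelpr2}) reduces to $f \, . \, h'(u_q) + g = -(\lambda_1 - \lambda_2)/(\lambda_1 u_q)$. Therefore, for the unit vector $\mathtt{v}$ spanning the tangent space to $L_0$ at $q$, $$ \Vert \omega_1 (q) \, . \, \mathtt{v} \Vert = \frac{\vert \lambda_1 - \lambda_2 \vert}{\sqrt{\vert \lambda_1 u_q \vert^2 + \vert \lambda_2 h(u_q) \vert^2}} \geq \frac{\vert \lambda_1 - \lambda_2 \vert}{\sqrt{\vert \lambda_1 \vert^2 \epsilon_1^2 + \vert \lambda_2 \vert^2 \epsilon_2^2}} > 0 \, , $$ where the bound uses $\vert u_q \vert < \epsilon_1$ and $\vert h(u_q) \vert < \epsilon_2$.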
Next we consider the (real $3$-dimensional) set $$ A = \{ (u,v) \in \C^2 \; ; \; \vert u \vert = \epsilon_1 \; {\rm and} \; \vert v \vert < \epsilon_2 \} \, . $$ If $\epsilon_1 ,\epsilon_2$ are appropriately chosen and sufficiently small, then the oriented leaves of $\calh$ point inwards along $A$, i.e. at a point $(u,v) \in A$ the leaf of $\calh$ through this point is oriented in the decreasing direction of the absolute value of $u$. Now let $l$ be an oriented leaf of the restriction of $\calh$ to a small neighborhood of $(0,0) \in \C^2$. As it will shortly be seen, $l$ is not closed. If $q_1 ,q_2 \in l$, we say that $q_2 > q_1$ provided that one can move from $q_1$ to $q_2$ in the sense of the orientation of $l$. We also denote by ${\rm dist}\, (q_1 ,q_2)$ the length of the segment of $l$ whose extremities are $q_1 ,q_2$. Finally we are ready to state the main result of this section. \begin{prop} \label{3prop1} There is a neighborhood $V$ of $(0,0) \in \C^2$ with the following properties: \noindent 1. Given $q_1 \in l \cap V$, there is $q_2 \in l \cap A$, with $q_1 > q_2$ and such that ${\rm dist}\, (q_1 ,q_2) < {\rm const}\, \epsilon_1$. \noindent 2. Given $q_1 \in l \cap V$, there is $\overline{q} = (\overline{q}^1, \overline{q}^2) \in l$, with $\overline{q} > q_1$ and ${\rm dist}\, (q_1 ,\overline{q}) < {\rm const}\, \epsilon_2$. Besides $\vert \overline{q}^2 \vert = \epsilon_2$ and $\overline{q}^1 \in \pi_1 (V)$ where $\pi_1 (V)$ stands for the projection of $V$ on the first coordinate. \end{prop} \noindent {\it Proof}\,: Let $B (\delta)$ be the bidisc $\{ (u,v) \in \C^2 \; ; \; \vert u \vert < \delta \; {\rm and} \; \vert v \vert < \delta \}$. We are going to show that $B (\delta)$ satisfies the conditions in our statement provided that $\delta$ is sufficiently small. Consider $q_1 \in B (\delta)$ and suppose without loss of generality that the first coordinate of $q_1$ has positive real part. Let then $L$ (resp. $l$) be the leaf of the restriction of $\fol$ (resp.
trajectory of the restriction of $\calh$) to $B (\delta )$ containing $q_1$. As already seen, $L$ is the graph of a holomorphic function $h : D_q \subset D_1^- \rightarrow \C$. In the parametrization $u \mapsto (u , h(u))$, the restriction of $\omega_1$ to $L$ becomes \begin{equation} fh' + g = \frac{\lambda_1 (1+a) - \lambda_2 (1+b) + \alpha}{\lambda_1 u} +s (u) \label{siegel8} \end{equation} where $s$ is holomorphic and $\alpha$ can be made arbitrarily small by reducing $\epsilon_1 ,\epsilon_2$. Indeed Formula~(\ref{siegel8}) is an immediate reformulation of Formula~(\ref{siegelpr2}). In particular, one has $\lambda_1 (1+a) -\lambda_2 (1+b) + \alpha \neq 0$. We now set $q_1 = (u_1 , h(u_1))$. Recalling that $D_q \subset \C$, we denote by $R_q$ the radial line emanating from $0 \in \C$ and passing through $u_1$. The intersection of $R_q$ with the circle $\vert u \vert =\epsilon_1$ is $\epsilon_1 u_1 /\vert u_1 \vert$. Similarly let $\pi_1 (l)$ be the oriented leaf of $\{ {\rm Im}\, (fh'+g) =0 \}$ containing $u_1$ which is nothing but the projection of $l$ on the first coordinate. \noindent {\it Claim}\,: There is a point $u_2 \in \pi_1 (l)$ such that $\vert u_2 \vert =\epsilon_1$. Besides there is a uniform constant $C$ such that $$ {\rm dist}\, \left( u_2 , \frac{\epsilon_1 u_1}{\vert u_1 \vert} \right) < C \epsilon_1^2 \, . $$ \noindent {\it Proof of the Claim}\,: It is an elementary fact about continuous/differentiable dependence on initial conditions for solutions of real ordinary differential equations. The foliation associated to $\{ {\rm Im}\, [(\lambda_1 (1+a) - \lambda_2 (1+b))/ \lambda_1 u] =0 \}$ consists of radial lines through $0 \in \C$ so that the assertion is trivial in this case. Nonetheless the foliation in which we are interested is given by an Abelian form whose distance to $(\lambda_1 (1+a) - \lambda_2 (1+b))/ \lambda_1 u$ is less than $C \epsilon_1$ for an appropriate constant $C$.
The statement promptly follows.\qed Combining the above claim with the fact that $\Sigma_{\fol} \cup \{ u=0 \}$ contains a neighborhood of $(0,0) \in \C^2$, we conclude that $l$ intersects $A$ at a point $q_2$. Estimates in \cite{mamo} (see also \cite{mat} and \cite{reis}) guarantee that $q_2$ satisfies the conditions in the statement. Analogously one proves that the continuation of $l$ intersects the set $\vert v \vert =\epsilon_2$ at a point $\overline{q}$ with the desired properties. For further details on these estimates we refer the reader to the quoted papers.\qed \begin{coro} \label{hthetahperp} Under the preceding conditions the trajectories of $\calh^{\perp}$ contained in the local separatrices of $\fol$ are closed curves encircling the origin. For $\theta \in (-\pi/2 , \pi/2)$, the trajectories of $\calh^{\theta}$ contained in the local separatrix of $\fol$ where $\calh$ has a sink singularity (resp. a source singularity) are spiraling curves converging to the origin (resp. emanating from the origin). Furthermore, on a local leaf of $\fol$ different from its separatrices, the behavior of $\calh^{\perp}$ is essentially determined by the local holonomy of the separatrices whereas the behavior of $\calh^{\theta}$, $\theta \in (-\pi/2 , \pi/2)$, is the combination of the Dulac transform (discussed below) with a finite power of the mentioned local holonomy map. \end{coro} \noindent {\it Proof}\,: It follows immediately from the fact that the oriented trajectories of $\calh^{\perp}$ (resp. $\calh^{\theta}$) form an angle of $\pi/2$ (resp. $\theta$) with the oriented trajectories of $\calh$.\qed Let us close this section with a discussion of the so-called {\it Dulac transform}\, associated to a singularity in the Siegel domain. Although this is a local discussion formally independent of the structure of $\calh$, it naturally involves definitions and results discussed above so that here seems to be a good place to carry it out.
The material below will also be used in Sections~4 and~6. Although classical in nature, a detailed exposition of this material is not easy to find in the literature. First let us recall some notation. Recall that $\fol$ is defined on a neighborhood of $(0,0) \in \C^2$ by the vector field \begin{equation} Y = \lambda_1 u (1 +r^1) \frac{\partial}{\partial u} - \lambda_2 v (1 +r^2) \frac{\partial}{\partial v} \; . \label{vectorfieldY} \end{equation} Recall also that $A \subset \C^2$ was defined as $A = \{ (u,v) \in \C^2 \; ; \; \vert u \vert =\epsilon_1 \; \; {\rm and} \; \; \vert v \vert < \epsilon_2 \}$. Similarly we set $B = \{ (u,v) \in \C^2 \; ; \; \vert v \vert =\epsilon_2' \; \; {\rm and} \; \; \vert u \vert < \epsilon_1' \}$ for certain $\epsilon_1' , \epsilon_2' >0$. Having fixed $u_0$ with $\vert u_0 \vert =\epsilon_1$ (resp. $v_1$ with $\vert v_1 \vert = \epsilon_2'$), we denote by $\Sigma_0^A$ (resp. $\Sigma_1^B$) the set $\{ (u,v) \in \C^2 \; ; \; u =u_0 \; \; {\rm and} \; \; \vert v \vert < \epsilon_2 \}$ (resp. $\{ (u,v) \in \C^2 \; ; \; \vert u \vert < \epsilon_1' \; \; {\rm and} \; \; v =v_1 \}$). In the sequel $\epsilon_1 ,\epsilon_1', \epsilon_2'$ are fixed and small whereas $\epsilon_2$ can be made smaller whenever necessary. For $u_0 ,v_1$ as above, let us denote by $\fol_0^A, \fol_1^B$ the saturated of $\Sigma_0^A , \Sigma_1^B$ by $\fol$. As already seen, both $\fol_0^A \cup \{ u=0\} \cup \{ v=0\}$ and $\fol_1^B \cup \{ u=0\} \cup \{ v=0\}$ contain an open neighborhood of $(0,0) \in \C^2$. Therefore, up to choosing $\epsilon_2$ very small, for every $(u_0 ,v_0) \in \Sigma_0^A$, there exist paths $c: [0,1] \rightarrow L_{(u_0 ,v_0)}$ such that $c(0) = (u_0 ,v_0)$ and $c(1) \in \Sigma_1^B$ (where $L_{(u_0 ,v_0)}$ stands for the leaf of $\fol$ through $(u_0 ,v_0)$). If $c, c'$ are two paths as above and satisfying $c(1) = (u ,v_1)$, $c'(1) = (u',v_1)$, then $u,u'$ belong to the same orbit of the local holonomy of the axis $\{ u=0 \}$.
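In the linear model $r^1 \equiv r^2 \equiv 0$ (our illustration), this local holonomy can be computed explicitly. Along a leaf of $\lambda_1 u \, dv + \lambda_2 v \, du = 0$ one has $du/u = -(\lambda_1/\lambda_2) \, dv/v$, so that lifting the loop $v = v_1 e^{i\theta}$, $\theta \in [0, 2\pi]$, to the leaf through $(u_0 , v_1)$ yields $$ u(\theta ) = u_0 \exp \left( - i \, \frac{\lambda_1}{\lambda_2} \, \theta \right) \; \; \, {\rm and \; hence} \; \; \, u(2\pi ) = e^{-2\pi i \lambda_1 /\lambda_2} \, u_0 \, . $$ Since $\lambda_1/\lambda_2 \in \R_+$, the holonomy map of $\{ u=0\}$ is a rotation in this model; two points $u, u'$ as above differ by an integer power of it.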
Now consider a simply connected domain $V_0 \subset \Sigma_0^A \setminus \{ (u_0 ,0) \}$. Suppose we are given a point $(u_0 ,v_0) \in V_0$ and a path $c_0 : [0,1] \rightarrow L_{(u_0 ,v_0)}$ as before. For $v$ sufficiently close to $v_0$, it is then possible to choose by continuity a path $c : [0,1] \rightarrow L_{(u_0 ,v)}$ such that $c(0) = (u_0 ,v)$ and $c(1) \in \Sigma_1^B$. Since $V_0$ is simply connected, we can extend this definition to the whole of $V_0$. In this way, we obtain a holomorphic map ${\rm Dul}: V_0 \subset \Sigma_0^A \setminus \{ (u_0 ,0) \} \rightarrow \Sigma_1^B$. This map is going to be called the {\it Dulac transform}\, (it depends on the previously chosen path $c_0$). Identifying $\Sigma_0^A$ with a neighborhood of $0 \in \C$, we shall refer to a sector of angle $\theta$ and radius $r$, meaning the intersection of the ball of radius~$r$ with a sector of angle $\theta$ (and vertex at $0 \in \C$). In practice, $V_0$ will always be a sector of angle less than $2\pi$ and sufficiently small radius. The choice of the initial path $c_0$, together with the sector in question, entirely determines the corresponding map ${\rm Dul}$. The following lemma consists again of estimates that can be found for example in \cite{mamo}, \cite{mat} or in \cite{reis}. \begin{lema} \label{3lema3} Let $V_0 \subset \Sigma_0^A$ be a sector of angle less than $2\pi$ and sufficiently small radius. Fix a path $c$ and consider the resulting Dulac transform ${\rm Dul} : V_0 \subset \Sigma_0^A \rightarrow \Sigma_1^B$. Then the following estimate holds $$ \Vert {\rm Dul}\, (v) \Vert \leq {\rm Const}\, \Vert v \Vert^{\lambda_1 /\lambda_2} (1 + O (\Vert v \Vert)) \; . $$ \noindent \mbox{ }\qed \end{lema} In particular, if $\lambda_1 > \lambda_2$, the behavior of ${\rm Dul}$ is that of a (strong) contraction provided that $\Vert v \Vert$ is small. When $\lambda_1 < \lambda_2$, ${\rm Dul}$ behaves as an expansion for $\Vert v \Vert$ small.
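To illustrate where the exponent $\lambda_1 /\lambda_2$ comes from, consider again the linear model, i.e. assume momentarily that $r^1 = r^2 = 0$ in~(\ref{vectorfieldY}); the computation below is only a sketch. Since the leaves are contained in the level sets of the multi-valued first integral $u^{\lambda_2} v^{\lambda_1}$, a leaf through $(u_0 ,v) \in \Sigma_0^A$ reaches $\Sigma_1^B$ at a point $(u ,v_1)$ satisfying $$ u^{\lambda_2} v_1^{\lambda_1} = u_0^{\lambda_2} v^{\lambda_1} \; \; \; \; {\rm so \; \, that} \; \; \; \; {\rm Dul}\, (v) = u_0 \left( \frac{v}{v_1} \right)^{\lambda_1 /\lambda_2} \; , $$ where the branch of the power is determined by the initial path $c_0$. Since $\lambda_1 /\lambda_2$ is real, it follows that $\Vert {\rm Dul}\, (v) \Vert = {\rm Const}\, \Vert v \Vert^{\lambda_1 /\lambda_2}$ holds exactly in this model; the factor $(1 + O (\Vert v \Vert))$ in Lemma~\ref{3lema3} accounts for the higher order terms $r^1 ,r^2$.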
Finally suppose that $\Sigma_0^A , \Sigma_1^B$ are endowed with measures $\mu_0 , \mu_1$ which are part of a system (of transverse sections and measures) defining a transverse invariant measure for a global realization of $\fol$ on some complex surface (in the sense of Section~2.2). Note that, in general, ${\rm Dul}$ is not one-to-one on $V_0 \subset \Sigma_0^A$ (if the angle of $V_0$ is not small) so that $\mu_0 (V_0) \neq \mu_1 ({\rm Dul}\, (V_0))$. Nonetheless we have: \begin{lema} \label{3lema4} With the preceding notations the following is verified. \noindent 1. Suppose that $\lambda_1 > \lambda_2$ and let $V_0$ be a sector of angle slightly less than $2\pi \lambda_2 /\lambda_1$. Then, for $\Vert v \Vert$ very small, ${\rm Dul}$ is one-to-one on $V_0$ and satisfies $\mu_0 (V_0) = \mu_1 ({\rm Dul}\, (V_0))$. \noindent 2. Suppose that $\lambda_1 < \lambda_2$ and let $V_0$ be a sector of angle slightly less than $2\pi$. Then, for $\Vert v \Vert$ very small, ${\rm Dul}$ is one-to-one on $V_0$ and satisfies $\mu_0 (V_0) = \mu_1 ({\rm Dul}\, (V_0))$. \end{lema} \noindent {\it Proof}\,: The proof consists of showing that for $v \in V_0$, we can obtain flow boxes containing the corresponding paths $c: [0,1] \rightarrow L_{(u_0 ,v)}$ so that the holonomy associated to these paths is well-defined and injective. This is clear when $\fol$ is linearizable. In the general case it follows again from the asymptotic estimates already mentioned above.\qed \begin{obs} \label{holomorphiconaxes} {\rm {\bf The case when the restriction of $\omega_1$ to the local separatrices is holomorphic}: the reader will have noted that the discussion of the behavior of $\calh$ (resp. $\calh^{\perp}$ and $\calh^{\theta}$) carried out in Proposition~\ref{3prop1} was based on the local invariance condition and on the assumption that $\lambda_1 (1+a) - \lambda_2 (1+b) \neq 0$.
Now that we have introduced the notion of Dulac transform associated to a Siegel singularity, let us also consider the case where $\lambda_1 (1+a) - \lambda_2 (1+b) = 0$ (assuming that the local invariance condition is still satisfied). As mentioned, in this case the restriction of $\omega_1$ to the invariant axes $\{y=0\}$ and $\{x=0\}$ is holomorphic. Thus the restriction of $\omega_1$ to $\{y=0\}$ (resp. $\{x=0\}$) either is regular or vanishes at the origin. For the time being we shall assume that this restriction is not identically zero, though this is not strictly necessary for what follows (cf. Sections~4 and~5). Consider then the behavior of $\calh$ restricted to $\{y=0\}$ and suppose there is a trajectory $l$ of $\calh$ that passes ``very close'' to the origin. The first remark to be made here is that $l$ can be ``deformed'' to avoid a fixed neighborhood of the origin. These deformations are similar to the deformations already performed when a trajectory converges to a saddle-singularity of $\calh$ occurring at a regular point of $\fol$, cf. Section~2 and/or \cite{blm}. In particular they can be done without destroying the ``contractive behavior'' of the holonomy of $\fol$ associated to the trajectories of $\calh$. Therefore, if needed, a Siegel singularity satisfying the condition $\lambda_1 (1+a) - \lambda_2 (1+b) = 0$ can be avoided by the trajectories of $\calh$. In other words, the singularity becomes ``invisible'' and thus it can be ignored. However, even if these singularities can be avoided, we might want to take advantage of them by exploiting the (local) saddle-behavior of $\fol$. In other words, it may be useful to let a $\calh$-trajectory approach the singularity so that it can be continued ``through the other separatrix of $\fol$'', i.e. the $\calh$-trajectory may go through the Dulac transform and then be continued in a different way.
In this paper, if a trajectory of $\calh$ is about to enter some (previously fixed) neighborhood of a Siegel singularity as above, we shall consider all possible continuations of it, namely those that actually ``avoid the singularity'' and those that pass through the Dulac transform associated to the singularity itself. We shall return to these cases later in Sections~5 and~6.} \end{obs} \noindent {\bf An alternative point of view}: let us close this section by explaining an alternative way to see the above results on Dulac transforms and their connections with the material of Section~2.3. To begin with, let us make a simple remark concerning how the Dulac transform can be viewed in most of our applications. With the preceding notations suppose that the orientation of the trajectories of $\calh$ is such that the origin is a sink for the restriction of $\calh$ to $\{v=0\}$. Then $\epsilon_1, \epsilon_2$ can be chosen so that $\calh$ is transverse to $A\subset \C^2$. Besides, every $\calh$-trajectory intersecting $A$ points inward and, unless this intersection occurs at a point belonging to $\{v=0\}$, it will eventually intersect $B$ with outward orientation. Thus we can define the Dulac transform as being the map from $A$ to $B$ defined by the trajectories of $\calh$. Note that this map is locally holomorphic on $A \setminus \{v=0\}$. Besides, for $(u_0, v_0) \in A$, $v_0 \neq 0$, its image satisfies the estimates given in Lemma~\ref{3lema3}. Furthermore it is not hard to adapt the contents of Lemma~\ref{3lema4} to this setting. Naturally the preceding statements about the contractive or expansive character of the Dulac map can also be viewed in terms of the Poincar\'e Lemma discussed in Section~2.3. For this it is however necessary to work with (possibly) ramified coordinates. Let us then consider a foliation $\fol$ defined on a neighborhood of $(0,0) \in \C^2$ by the vector field $Y$ in~(\ref{vectorfieldY}).
More precisely suppose that the $1$-form $\omega$ defining $\fol$ is actually $\omega = \lambda_1 u (1 +r^1) \, dv + \lambda_2 v (1 +r^2) \, du$. Consider also sections $\Sigma_0^A$ and $\Sigma_1^B$ as above and suppose that the orientation of the trajectories of $\calh$ is such that they go from $\Sigma_0^A$ to $\Sigma_1^B$ (i.e. $\lambda_1 > \lambda_2$). To apply Formula~(\ref{PLemma}) to this case, we need to consider the parametrizations of $\Sigma_0^A, \, \Sigma_1^B$ that are obtained through the integral of $\omega$. It is then natural to set a coordinate $z_1$ on $\Sigma_0^A$ and a coordinate $z_2$ on $\Sigma_1^B$ such that $$ z_1 = \lambda_2 v (1 + {\rm h.o.t.}) \; \; \, {\rm and} \; \; \, z_2 = \lambda_1 u (1 + {\rm h.o.t.}) \, . $$ In these coordinates the derivative of the above introduced Dulac transform can be estimated by means of Formula~(\ref{PLemma}). This amounts to estimating the integral of $\omega_1$ over a segment of trajectory of $\calh$ going from $\Sigma_0^A$ to $\Sigma_1^B$. The latter estimate however is essentially equivalent to the calculations performed above. \section{Singularities of $\fol$ and invariant measures} Now we begin the analysis of the global setting where $\fol$ is a singular holomorphic foliation defined on a complex surface $M$. Throughout this section $\fol$ is supposed to admit an invariant positive closed current $T$ whose associated transverse measure does not give mass to points ($T$ is then said to be diffuse). Let $\supT \subseteq M$ be the {\it support}\, of $T$, which is obviously a compact set invariant by $\fol$. Modulo applying Seidenberg's theorem, we can suppose that all the singularities of $\fol$ are reduced. Our first aim in this section is to establish Proposition~(\ref{4.5prop1}) below. \begin{prop} \label{4.5prop1} Let $p \in {\rm Sing}\, (\fol)$ be a singularity of $\fol$ lying in $\supT$. Then $p$ is a singularity in the Siegel domain or it is an irrational focus.
Furthermore if $p$ belongs to the Siegel domain and has eigenvalues with rational quotient, then $\fol$ is linearizable around $p$. \end{prop} \noindent Since $p \in {\rm Sing}\, (\fol) \cap \supT$ is reduced, the proof of Proposition~(\ref{4.5prop1}) essentially consists of showing that $p$ is neither a hyperbolic singularity nor a saddle-node. These are the contents of Lemmas~(\ref{4.5lema1}) and~(\ref{4.5lema2}) below. \begin{lema} \label{4.5lema1} If $p \in {\rm Sing}\, (\fol) \cap \supT$, then $p$ is not hyperbolic. \end{lema} \noindent {\it Proof}\,: Suppose for a contradiction that $p$ is hyperbolic. Then Poincar\'e Theorem ensures that $\fol$ is linearizable around $p$. In other words, there are local coordinates $u,v$ in which $\fol$ is given by $$ \eta = \lambda_1 u dv - \lambda_2 v du $$ with $\lambda_1 /\lambda_2 \in \C \setminus \R$. Consider a local transverse section $\Sigma$ passing through the point $(1,0)$. This section allows us to identify the local holonomy of the separatrix $\{ v=0\}$ with a local diffeomorphism $h$ fixing $0 \in \C$. The condition $\lambda_1 /\lambda_2 \in \C \setminus \R$ implies that $h$ is hyperbolic; up to replacing $h$ by its inverse, we have $\vert h'(0) \vert <1$. Now consider a (local) leaf $L$ of $\fol$ contained in $\supT$ and intersecting $\Sigma$ at a point $(1, z_0)$. Denote by $\mu_{\Sigma}$ a representative of $T$, viewed as transverse invariant measure, over $\Sigma$ (cf. Section~2.2). If $z_0 \neq 0$, the orbit of $(1,z_0)$ under $h$ consists of infinitely many points converging towards $(1,0) \in \Sigma$. Furthermore, if $V \subset \Sigma$ is a sufficiently small neighborhood of $z_0 \simeq (1,z_0) \in \Sigma$, then the open sets $V ,h(V), h^2 (V), \ldots$ are pairwise disjoint. Nonetheless they all have the same $\mu_{\Sigma}$-measure since $h$ preserves $\mu_{\Sigma}$. In addition $\mu_{\Sigma} (V) >0$ since $L$ is contained in $\supT$. Together these facts imply that $\mu_{\Sigma} (\Sigma) =\infty$, which is impossible.
We then conclude that $\supT$ is locally contained in the separatrices of $\fol$ at $p$. Therefore $\mu_{\Sigma}$ has an atomic component which is necessarily concentrated over an algebraic curve. Since this is impossible, the lemma follows.\qed Through a similar argument we are going to prove that $p \in {\rm Sing}\, (\fol) \cap \supT$ cannot be a saddle-node either. A very complete reference for saddle-node singularities is \cite{ramis}. The facts used below are however well-known. If $p$ is a saddle-node singularity of $\fol$, then $\fol$ can be written in Dulac normal form, i.e. in suitable local coordinates $u,v$, the foliation $\fol$ is given by the $1$-form $\eta$ satisfying $$ \eta = [u(1+ \Lambda v^p) + R(u,v)] dv - v^{p+1} du \; \; \; \; {\rm with} \; \; \; \; \Lambda \in \C \; \; \; \; {\rm and} \; \; \; \; p \geq 1 \, . $$ In particular $\{ v=0 \}$ is a separatrix of $\fol$ called the {\it strong invariant manifold of $\fol$}. Considering a local transverse section $\Sigma$ as in Lemma~(\ref{4.5lema1}), we can identify the holonomy of the strong invariant manifold with a (local) diffeomorphism $h$ fixing $0 \in \C$. However, this time, $h$ has the form $h (z) = z + z^{p+1} + {\rm h.o.t.}$, where as usual ${\rm h.o.t.}$ stands for terms of higher order. \begin{lema} \label{4.5lema2} If $p \in {\rm Sing}\, (\fol) \cap \supT$, then $p$ cannot be a saddle-node. \end{lema} \noindent {\it Proof}\,: Consider $\fol ,\Sigma$ and $\eta$ as above. Other than the strong invariant manifold, a saddle-node may or may not possess another separatrix (necessarily smooth and transverse to the former one) which is called the weak invariant manifold. In particular a saddle-node possesses at least one and at most two separatrices. We now suppose that $\supT$ is not locally contained in the union of the separatrices of $\fol$, since otherwise we would reach a contradiction as above.
It follows from \cite{ramis} that the union of the saturated $\fol_{\Sigma}$ of $\Sigma$ by $\fol$ with the weak invariant manifold (if it exists) contains a neighborhood of $p$. Thus there is a leaf $L \subset \supT$ of $\fol$ intersecting $\Sigma$ at a point $(1,z_0)$ with $z_0 \neq 0$. The topological description of the dynamics of $h (z) = z + z^{p+1} + {\rm h.o.t.}$ is well-known (cf. for example \cite{flower}) and it follows that there exists a small neighborhood $V \subset \Sigma$ of $z_0 \simeq (1,z_0)$ such that $V, h (V) ,h^2 (V), \ldots$ are pairwise disjoint. By taking a representative $\mu_{\Sigma}$ of $T$ on $\Sigma$ as in Lemma~(\ref{4.5lema1}) we conclude that $\mu_{\Sigma} (\Sigma) =\infty$. This is however impossible and establishes the lemma.\qed \vspace{0.1cm} \noindent {\it Proof of Proposition~(\ref{4.5prop1})}\,: After Lemmas~(\ref{4.5lema1}) and~(\ref{4.5lema2}), we only need to prove that a Siegel singularity whose eigenvalues have rational quotient is linearizable. As already seen $\fol$ is locally given by $$ \eta = \lambda_1 u (1 + {\rm h.o.t.}) \, dv + \lambda_2 v (1 + {\rm h.o.t.}) \, du $$ with $\lambda_1 /\lambda_2 \in \Q_+$. Denoting by $\Sigma$ a transverse section passing through $(1,0)$, it was seen that the union of $\{ u=0\}$ with $\fol_{\Sigma}$ (the saturated of $\Sigma$ by $\fol$) contains a neighborhood of $p$. Thus, as before, there is a leaf $L \subset \supT$ intersecting $\Sigma$ at a point $(1 ,z_0)$. Without loss of generality we can suppose that $z_0 \neq 0$. On the other hand, the linear part of the holonomy diffeomorphism $h$ associated to $\{ v=0\}$ is precisely $e^{2\pi i\lambda_1 /\lambda_2}z$. Thus a power of $h$ is tangent to the identity. According to a result of Mattei-Moussu \cite{mamo}, $\fol$ is locally linearizable if and only if the power of $h$ in question coincides with the identity. Hence we suppose for a contradiction that this power is tangent to the identity and different from the identity.
In this case, however, it has the form $z + cz^k + \cdots$ with $c\neq 0$. The final contradiction is then obtained as at the end of Lemma~(\ref{4.5lema2}). The proposition is proved.\qed Summarizing the preceding discussion, we can suppose that the (reduced) singularities of $\fol$ lying in $\supT$ are of one of the following types: \noindent $\bullet$ a singularity in the Siegel domain. \noindent $\bullet$ an irrational focus. \noindent Note also that Poincar\'e Theorem still implies that an irrational focus is automatically linearizable. Hence, in this case, $\fol$ is locally given by the form $$ \eta = \lambda_1 u dv - \lambda_2 v du $$ with $\lambda_1/\lambda_2 \in \R_+\setminus \Q_+$. It is easy to work out the structure of the foliation $\calh$ near an irrational focus singularity. This is similar to the discussion carried out in Section~3, though technically simpler since $\fol$ is always linearizable. Again on a small neighborhood of $p$ the curves $(\omega)_0$ and $(\omega)_{\infty}$ are supposed to be invariant by $\fol$ (local invariance condition). This means that $\omega$ can be written in local coordinates $u,v$ as \begin{equation} \omega = h (u,v) u^a v^b [ \lambda_1 u dv - \lambda_2 v du] \label{4.5eq1} \end{equation} where $h(0,0) \neq 0$. Setting $\omega_1 = fdv + gdu$, the equation $d\omega = \omega \wedge \omega_1$ yields \begin{equation} h (u,v) ( \lambda_2 v f + \lambda_1 ug) = -h(u,v) (\lambda_1 (a+1) + \lambda_2 (b+1)) - u \frac{\partial h}{\partial u} - v\frac{\partial h}{\partial v} \, . \label{4.5eq2} \end{equation} In the sequel we suppose that $a,b$ are not simultaneously equal to~$-1$ so that $\lambda_1 (a+1) + \lambda_2 (b+1) \neq 0$ (recall that $\lambda_1/\lambda_2 \in \R_+ \setminus \Q_+$). By setting $u=0$ (resp. $v=0$) we conclude that the behavior of $\calh$ over the separatrix $\{ u=0 \}$ (resp.
$\{ v=0\}$) is either that of a sink or that of a source according to whether $\lambda_1 (a+1) + \lambda_2 (b+1) >0$ or $\lambda_1 (a+1) + \lambda_2 (b+1) <0$. For the leaves of $\fol$ different from the separatrices, we can perform a discussion similar to the one carried out in Section~3 by exploiting the presence of the ``multi-valued'' first integral $u^{\lambda_2} v^{\lambda_1}$. The reader will easily check that the behavior of $\calh$ over the separatrices is repeated over the general leaves. The result is then summarized by \begin{prop} \label{4.5lema3} Let $p \in {\rm Sing}\, (\fol ) \cap \supT$ be an irrational focus. Consider also local coordinates $u,v$ defined on a bidisc of radius $\epsilon$ about $p$ and suppose that $\omega$ is given by~(\ref{4.5eq1}) where $a,b$ are not simultaneously equal to~$-1$. If $L$ is a leaf of $\fol$, then the restriction of $\calh$ to $L$ consists of lines of length less than ${\rm Const}\cdot \epsilon$ for an appropriate uniform constant ${\rm Const}$. Furthermore these lines converge to $(0,0)$ if $\lambda_1 (a+1) + \lambda_2 (b+1) >0$ (i.e. the end of the leaf corresponding to $(0,0)$ is a sink). Similarly these lines emanate from $(0,0)$ if $\lambda_1 (a+1) + \lambda_2 (b+1) <0$ (i.e. the end of the leaf corresponding to $(0,0)$ is a source).\qed \end{prop} \begin{obs} \label{4.5obs1} {\rm An irrational focus $p \in {\rm Sing}\, (\fol ) \cap \supT$ is going to be called a sink (resp. a source) if, with the notations of the proposition above, one has $\lambda_1 (a+1) + \lambda_2 (b+1) >0$ (resp. $\lambda_1 (a+1) + \lambda_2 (b+1) <0$). Sometimes we shall use the expressions sink-irrational focus or source-irrational focus to emphasize that we are dealing with an irrational focus singularity.
This terminology also serves to distinguish between singularities of $\fol$ behaving as sinks (or sources) for $\calh$ and sinks (or sources) of $\calh$ occurring at regular points of $\fol$.} \end{obs} To close this section, we are going to introduce a sort of ``generalized Dulac transform'' (or ``compounded Dulac transform'') for the singularities of the foliation $\fol$. This material will be needed in Section~6 since the singularities of the initial foliation $\fol$ (as in the statement of Theorem~A in the Introduction) may be degenerate. Also it should be pointed out that Proposition~(\ref{4.5prop1}) is not used in the following discussion although it will be necessary in Section~6. In fact, the role played by Proposition~(\ref{4.5prop1}) in Section~6 amounts to guaranteeing that the situation considered in the discussion below always occurs. In particular, this will enable us to consider the ``generalized Dulac transform'', cf. below. To explain our concern with this ``generalized Dulac transform'', consider the local situation given by a singularity $p$ of $\fol$ that belongs to the Siegel domain. Let $\lambda_1, \lambda_2$ be the eigenvalues of $\fol$ at $p$ and suppose that $\lambda_1 > \lambda_2$. Suppose in addition that $p$ lies away from the divisor $(\omega)_0 \cup (\omega)_{\infty}$ of zeros and poles of $\omega$, where $\omega$ stands for a meromorphic $1$-form defining $\fol$. Let $\textsc{S}_1, \textsc{S}_2$ denote the separatrices of $\fol$ at $p$ that are respectively tangent to the eigendirections associated to $\lambda_1, \lambda_2$. According to the discussion in Section~3, the restriction of $\calh$ to $\textsc{S}_1$ consists of trajectories converging to $p$. Similarly, the restriction of $\calh$ to $\textsc{S}_2$ consists of trajectories emanating from $p$. Thus, if $l$ is a segment of $\calh$-trajectory passing near $p$, the Dulac transform defined by means of $l$ behaves as a contraction (cf. Section~3 and Lemma~\ref{3lema3}).
The existence of this contraction is therefore consistent with the principle of producing ``contractive holonomy'' by following the trajectories of $\calh$. However, if $\textsc{S}_1, \textsc{S}_2$ are contained in the divisor $(\omega)_0 \cup (\omega)_{\infty}$, then the orientation of $\calh$ around $p$ may be ``unnatural'' in the sense that the Dulac transform induced by a segment of $\calh$-trajectory as above actually behaves as an expansion (cf. Lemma~\ref{3lema3}). The tension between contraction along the leaves of $\calh$ and expansion for certain Dulac transforms would prevent us from guaranteeing the existence of a contractive holonomy map in a suitable sense. It is to remedy this situation that ``generalized Dulac transforms'' will be introduced. The aim of their study is to show that contraction eventually prevails. Without loss of generality, we can assume that $\fol$ is a foliation with reduced singularities defined on a certain compact surface. We also fix a non-closed meromorphic $1$-form $\omega$ defining $\fol$ (which is supposed to exist in our case). Let $(\omega)_0^{\perp\fol}$ (resp. $(\omega)_{\infty}^{\perp \fol}$) be the subdivisor of $(\omega)_0$ (resp. $(\omega)_{\infty}$) consisting of those irreducible components of $(\omega)_0$ (resp. $(\omega)_{\infty}$) that {\it are not}\, invariant by $\fol$. As before we set $(\omega)_0^{\fol} = (\omega)_0 \setminus (\omega)_0^{\perp\fol}$ and $(\omega)_{\infty}^{\fol} = (\omega)_{\infty} \setminus (\omega)_{\infty}^{\perp \fol}$. Let $E$ be a connected component of $(\omega)_0^{\fol} \cup (\omega)_{\infty}^{\fol}$. Modulo performing finitely many blow-ups, we can assume without loss of generality that $(\omega)_0^{\perp \fol}$ (resp. $(\omega)_{\infty}^{\perp \fol}$) intersects $E$ only at regular points of $\fol$ (cf. Lemma~\ref{revision2} in Section~5 for a detailed explanation of this procedure). The irreducible components of $E$ are going to be denoted by $D_1, \ldots ,D_n$.
Let us now consider a leaf $L$ of $\fol$ that accumulates on a singularity $\textsc{P}_0 \in D_1 \subseteq E$. We suppose that $\textsc{P}_0$ belongs to the Siegel domain and that $L$ is not locally contained in the separatrices of $\fol$ at $\textsc{P}_0$. One of the separatrices of $\fol$ at $\textsc{P}_0$, denoted by $\textsc{S}^{P_0}$, is transverse to $E$ (and thus not contained in $E$). The other separatrix of $\fol$ at $\textsc{P}_0$ is obviously contained in $D_1 \subset E$. Next suppose we are given a sequence of singularities of $\fol$ in $E$ verifying the two conditions below: \begin{enumerate} \item Each singularity belongs to the Siegel domain. \item Each singularity corresponds to the intersection of two irreducible components of $E$ (recall that $E$ is already totally invariant by $\fol$). \end{enumerate} \noindent The above mentioned sequence of singularities will be denoted by $\{ p_1, \ldots ,p_k \}$. We suppose that $p_k$ belongs to a component $D_l$ of $E$ (note that $l$ may differ from $k$ since the Dynkin diagram of $E$ is allowed to contain loops). Finally one still has a singularity $\textsc{P}_1 \in D_l$ belonging to the Siegel domain and having a separatrix $\textsc{S}^{P_1}$ transverse to $E$ (the other separatrix of $\fol$ at $\textsc{P}_1$ being contained in $D_l \subset E$). Let $\Sigma_0 ,\Sigma_1$ be local transverse sections at points $z_0 \in \textsc{S}^{P_0}$ and $z_1 \in \textsc{S}^{P_1}$, respectively. Denote by $\mu_0 , \mu_1$ measures on $\Sigma_0 ,\Sigma_1$ representing $T$ over these transversals (as in Lemma~\ref{3lema4}). We want to define the ``generalized Dulac transform'' ${\rm GDul}$ from a domain $W \subset \Sigma_0$ to $\Sigma_1$. This can naturally be done by composing the (ordinary) Dulac transforms associated to the singularities $\textsc{P}_0, p_1 , \ldots ,p_k, \textsc{P}_1$. Proposition~(\ref{4.5prop2}) below makes this definition precise and collects the properties of ${\rm GDul}$ that are going to be used in Section~6.
Keeping the preceding notations, we make two further assumptions. \noindent 3. All the singularities $\textsc{P}_0, p_1, \ldots ,p_k, \textsc{P}_1$ satisfy the condition $\lambda_1 (1+a) -\lambda_2 (1+b) \neq 0$ of Lemma~(\ref{3lema1}) and subsequent ones in Section~3. \vspace{0.1cm} \noindent 4. The trajectory $l_{z_0}$ of $\calh$ through $z_0 =\Sigma_0 \cap \textsc{S}^{P_0}$ converges to $\textsc{P}_0$. It then continues to $p_1$, from $p_1$ to $p_2$, and so on until it reaches $\textsc{P}_1$. From $\textsc{P}_1$ this trajectory leaves $E$ (and thus a small tubular neighborhood of $E$) by following the separatrix $\textsc{S}^{P_1}$. This trajectory is also assumed to pass through $z_1 = \Sigma_1 \cap \textsc{S}^{P_1}$. \noindent For a detailed definition of the trajectories of $\calh$ ``passing through singularities in the Siegel domain'', the reader is referred to the discussion carried out in Section~5. The definition of ${\rm GDul}$ simply consists of the composition of the Dulac transforms associated to the singularities in question with the ordinary holonomy maps associated to the segments of the leaf of $\calh$ between two such singularities. Finally we have: \begin{prop} \label{4.5prop2} Under the preceding assumptions, there is $1> \lambda >0$ with the following properties: \begin{enumerate} \item If $V_0 \subset \Sigma_0$ is a sector of angle less than $2\pi \lambda$ and sufficiently small radius, then ${\rm GDul}: V_0 \rightarrow \Sigma_1$ is well-defined and one-to-one. \item For $v \in V_0$, one has $\Vert {\rm GDul}\, (v) \Vert = O (\Vert v \Vert^{1/\lambda})$. Therefore ${\rm GDul}$ is a contraction for $\Vert v \Vert$ small. \item One has $\mu_0 (V_0) = \mu_1 ({\rm GDul}\, (V_0))$, provided that the radius of $V_0$ is small enough. \end{enumerate} \end{prop} \noindent {\it Proof}\,: The statement is clear if the divisor $E$ is empty, as an already mentioned consequence of the combination of Lemmas~(\ref{3lema1}), ~(\ref{3lema3}) and~(\ref{3lema4}).
Consider now the case $k=0$, i.e. both $\textsc{P}_0$ and $\textsc{P}_1$ belong to $D_1$. Denote by $\lambda^0_1 ,\lambda_2^0$ (resp. $\lambda^1_1 ,\lambda_2^1$) the eigenvalues of $\fol$ at $\textsc{P}_0$ (resp. $\textsc{P}_1$), where $\lambda^0_2$ (resp. $\lambda^1_2$) is the eigenvalue associated to the eigendirection defined by $D_1$. Now let $b \in \Z$ be the order of $D_1$ as a component of the divisor of zeros and poles of $\omega$. Note that the separatrices $\textsc{S}^{P_0}, \textsc{S}^{P_1}$ are not locally contained in the support of this divisor since they are transverse to $E$ (cf. the definition of $E$). Hence there are local coordinates $(u,v)$ (resp. $(w,v)$) around $\textsc{P}_0$ (resp. $\textsc{P}_1$) in which $\omega$ can be written as \begin{eqnarray} \omega & = & v^b [\lambda_2^0 u (1 + {\rm h.o.t.}) dv + \lambda_1^0 v (1 + {\rm h.o.t.}) du] \, , \label{aqui1} \\ \omega & = & v^b [\lambda_2^1 w (1 + {\rm h.o.t.}) dv + \lambda_1^1 v (1 + {\rm h.o.t.}) dw] \label{aqui2} \end{eqnarray} where $\{ v=0\} \subset D_1$. Since the trajectories of $\calh$ converge towards $\textsc{P}_0$, Lemma~(\ref{3lema1}) ensures that $\lambda^0_2 - \lambda^0_1 (b+1) <0$. Similarly, because those trajectories also leave $\textsc{P}_1$ along $\textsc{S}^{P_1}$, Lemma~(\ref{3lema1}) provides, in addition, that $\lambda_2^1 - \lambda_1^1 (b+1) >0$. On the other hand, recall that eigenvalues are defined only up to a multiplicative constant, so that we can set $\lambda^0_2 = \lambda_2^1$. It then results that $\lambda_1^0 > \lambda_1^1$. The corresponding generalized Dulac transform, however, clearly satisfies $\Vert {\rm GDul}\, (1,u) \Vert = O ( \Vert u \Vert^{\lambda^0_1 /\lambda^1_1})$. Therefore ${\rm GDul}$ has the desired contracting behavior for $\Vert u \Vert$ small. The second part of the statement can directly be checked.
Indeed, there are only two cases according to whether or not $\lambda_2^0 \geq \lambda_1^0$ (in any case we have $\lambda_2^1 \geq \lambda_1^1$ provided that $b \geq 0$). This verification is left to the reader. Let us now consider the case $k=1$. The new element appearing in this situation is the singularity $p_1 = D_1 \cap D_2$. Keeping similar notations, let $b_1$ (resp. $b_2$) denote the order of $D_1$ (resp. $D_2$) as a component of the divisor of zeros and poles of $\omega$. The eigenvalues of $\fol$ at $\textsc{P}_0, \textsc{P}_1$ are still denoted as before. Finally let $\Lambda_1$ (resp. $\Lambda_2$) be the eigenvalue of $\fol$ at $p_1$ associated to the eigendirection given by $D_1$ (resp. $D_2$). Around $\textsc{P}_0$, there are local coordinates $(u,v)$ where $\omega$ is given as in~(\ref{aqui1}) (with $b=b_1$). Around $\textsc{P}_1$, we have local coordinates $(w,t)$, $\{ t=0\} \subset D_2$, where $\omega$ becomes \begin{equation} \omega = t^{b_2} [\lambda_2^1 w (1 + {\rm h.o.t.}) dt+ \lambda_1^1 t (1 + {\rm h.o.t.}) dw] \, . \label{aqui3} \end{equation} Once again Lemma~(\ref{3lema1}) gives us that $\lambda_2^0 - \lambda_1^0 (b_1 +1) <0$ and $\lambda_2^1 - \lambda_1^1 (b_2 +1) >0$. Finally, in the coordinates $(v,t)$ around $p_1$, we obtain $$ \omega = v^{b_1} t^{b_2} [\Lambda_1 t (1 + {\rm h.o.t.}) dv+ \Lambda_2 v (1 + {\rm h.o.t.}) dt] \, . $$ Thanks to Lemma~(\ref{3lema1}), we know that $\Lambda_1 (b_2 +1) - \Lambda_2 (b_1 +1) >0$. To conclude, we first observe that we can set $\Lambda_1 = \lambda_2^0$ and $\Lambda_2 = \lambda^1_2$ since these eigenvalues are defined only up to a multiplicative constant. Therefore one has $$ \lambda_1^0 (b_1+1)(b_2+1) > \lambda^0_2 (b_2 +1) > \lambda_2^1 (b_1 +1) > \lambda_1^1 (b_1+1)(b_2+1) $$ so that $\lambda_1^0 > \lambda_1^1$. In other words, the generalized Dulac transform has the contracting behavior indicated in the statement. Again the verification of item~3 is left to the reader.
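For the reader's convenience, let us indicate how the above chain of inequalities is obtained; here we implicitly assume that $b_1 +1 >0$ and $b_2 +1 >0$, the remaining cases being handled by similar manipulations. With the normalizations $\Lambda_1 = \lambda_2^0$ and $\Lambda_2 = \lambda_2^1$, the three estimates provided by Lemma~(\ref{3lema1}) read $$ \lambda_2^0 < \lambda_1^0 (b_1 +1) \; , \; \; \; \; \lambda_2^0 (b_2 +1) > \lambda_2^1 (b_1 +1) \; , \; \; \; \; \lambda_1^1 (b_2 +1) < \lambda_2^1 \; . $$ Multiplying the first inequality by $b_2 +1$, the third one by $b_1 +1$, and concatenating the resulting estimates yields precisely the chain in question, whence $\lambda_1^0 > \lambda_1^1$.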
The general case of $k\in \N$ now follows easily by induction.\qed Actually our proof yields a slightly more general result. To state it, let us drop Condition~3 above, i.e. the singularities $\textsc{P}_0, p_1, \ldots ,p_k, \textsc{P}_1$ need no longer satisfy the condition $\lambda_1 (1+a) -\lambda_2 (1+b) \neq 0$. If $p_i$ is a (Siegel) singularity at which we have $\lambda_1 (1+a) -\lambda_2 (1+b) = 0$, then the restrictions of $\omega_1$ to the local separatrices of $\fol$ at $p_i$ are holomorphic on a neighborhood of $p_i$. This setting includes the case in which the restriction of $\omega_1$ to one (or to both) of these separatrices vanishes identically. Our purpose here is to allow the Dulac transform corresponding to $p_i =D_i \cap D_{i+1}$ to be considered (with orientation going from $D_i$ to $D_{i+1}$) as a component in the constitution of the generalized Dulac transform. The reader will note that the occasional use of the Dulac map in question is consistent with the contents of Remark~\ref{holomorphiconaxes} and it will further be detailed in the next section. Naturally, away from the singularities that fail to fulfill the condition $\lambda_1 (1+a) -\lambda_2 (1+b) \neq 0$, we shall always follow the trajectories of $\calh$. Then the proof of Proposition~\ref{4.5prop2} can be repeated word for word to provide: \begin{coro} \label{4.5prop2PRIME} Under the preceding assumptions, the statement of Proposition~\ref{4.5prop2} still holds except that now $1 \geq \lambda >0$. Besides, if $\lambda=1$ then ${\rm GDul}$ is defined on every sector $V_0 \subset \Sigma_0$ of angle less than~$2\pi$ (and sufficiently small radius). In the latter case the generalized Dulac transform ${\rm GDul}$ is asymptotically flat at the ``origin of $\Sigma_0$''. \end{coro} \section{Topological dynamics of the trajectories of $\calh$} In the preceding two sections, we have studied the local behavior of $\calh$ around singularities of $\fol$.
It is now time to make global considerations on these trajectories. In what follows we consider a holomorphic foliation $\fol$ given by a globally defined meromorphic form $\omega$ on a compact surface $M$. As always we suppose that $\omega$ is not closed and that $\fol$ admits an invariant {\it diffuse}\, positive closed current $T$. Again $\supT$ will denote the support of $T$. Thanks to Seidenberg's theorem, we can assume without loss of generality that the singularities of $\fol$ are all reduced. By virtue of Proposition~(\ref{4.5prop1}) this, in fact, implies that the singularities of $\fol$ in $\supT$ either belong to the Siegel domain or are irrational foci. It is also known that a singularity of $\fol$ in $\supT$ belonging to the Siegel domain is automatically linearizable provided that the quotient of its eigenvalues is rational. As already explained, our strategy consists of following the trajectories of $\calh$ with the purpose of guaranteeing a ``contractive behavior for the corresponding holonomy maps''. If ``enough contraction'' is obtained then we should be able to conclude that $T$ is the current of integration over a compact leaf (cf. for example Lemma~\ref{atomicmass}). It should be noted however that there are many paths, other than trajectories of $\calh$, that tend to produce contraction for the corresponding holonomy maps of $\fol$. These include, for example, the trajectories of $\calh^{\theta}$, $-\pi/2 < \theta < \pi/2$, or suitable combinations of those. Therefore there is a large amount of flexibility to choose ``deformed trajectories'' when a trajectory of $\calh$ approaches a singularity such as a saddle point. Before giving precise definitions of what is meant by ``deformed trajectory'' or by ``trajectory of finite length'', we shall perform a few reductions in our setting so as to make the subsequent discussion more transparent. Let then $\omega, \, \fol$ be as above.
Denote by $(\omega)_0^{\fol}$ the sub-divisor consisting of those irreducible components of $(\omega)_0$ that are invariant under $\fol$. Similarly set $(\omega)_0^{\perp \fol} = (\omega)_0 \setminus (\omega)_0^{\fol}$. Denoting by $(\omega)_{\infty}$ the divisor of poles of $\omega$, the subdivisors $(\omega)_{\infty}^{\fol}$ and $(\omega)_{\infty}^{\perp \fol}$ are analogously defined and so are the divisors $(\omega_1)_0^{\fol}, \, (\omega_1)_0^{\perp \fol}$. Let us remind the reader that $(\omega_1)_{\infty}$ is contained in $(\omega)_0^{\perp \fol} \cup (\omega)_{\infty}^{\perp \fol}$ so that it has no component invariant by $\fol$, cf. Lemma~\ref{newversionSection2.11}. The next lemma allows us to assume some standard ``normalization'' conditions. \begin{lema} \label{revision2} Modulo performing finitely many blow-ups, the conditions below are always satisfied: \begin{enumerate} \item The singular set ${\rm Sing}\, (\fol)$ of $\fol$ is disjoint from $(\omega)_0^{\perp \fol} \cup (\omega)_{\infty}^{\perp \fol}$ as well as from $(\omega_1)_0^{\perp \fol}$. \item Every irreducible component of $(\omega)_0, \, (\omega)_{\infty}$ and of $(\omega_1)_0$ is smooth. \item The divisor of zeros $(\omega)_0$ does not intersect the divisor of poles $(\omega)_{\infty}$ at regular points of $\fol$. \item Two distinct irreducible components of $(\omega)_0^{\perp \fol}$ (resp. $(\omega)_{\infty}^{\perp \fol}$, $(\omega_1)_0^{\perp \fol}$) are disjoint. \item $\fol$ is transverse to every irreducible component of $(\omega)_0^{\perp \fol} \cup (\omega)_{\infty}^{\perp \fol}$ or of $(\omega_1)_0^{\perp \fol}$. \end{enumerate} \end{lema} \noindent {\it Proof}. It is clear that the singularities of $\fol$ can be supposed to be reduced (Seidenberg's theorem). Similarly the irreducible components of $(\omega)_0, \, (\omega)_{\infty}$ and of $(\omega_1)_0$ can easily be made smooth. 
To show that ${\rm Sing}\, (\fol)$ can be made disjoint from $(\omega)_0^{\perp \fol} \cup (\omega)_{\infty}^{\perp \fol}$, let $\mathcal{C}$ be a local branch of an irreducible component of $(\omega)_0^{\perp \fol} \cup (\omega)_{\infty}^{\perp \fol}$ passing through $p \in {\rm Sing}\, (\fol)$. By assumption $\mathcal{C}$ is not invariant by $\fol$ so that it has a contact of finite order with the actual separatrices of $\fol$ at $p$. By blowing-up $\fol$ at $p$, the new singularities appearing in the exceptional divisor $\pi^{-1} (p)$ have their positions determined by the tangent spaces at $p$ to the local separatrices of $\fol$. Therefore, after finitely many repetitions of this procedure, the proper transform of $\mathcal{C}$ will no longer pass through any of the resulting singularities of the blown-up foliation. A similar argument applies to the divisor $(\omega_1)_0^{\perp \fol}$. Note also that, in the course of performing the mentioned blow-ups, the ``new components'' of $(\omega)_0, \, (\omega)_{\infty}$ and of $(\omega_1)_0$ that may have been introduced are all contained in the exceptional divisor. Hence they are invariant by the corresponding foliation, i.e. they are not contained in $(\omega)_0^{\perp \fol} \cup (\omega)_{\infty}^{\perp \fol} \cup (\omega_1)_0^{\perp \fol}$. The remaining ``reductions'' are based on the following remark: if a regular point of a foliation is blown-up, then the new foliation still leaves the exceptional divisor invariant. Furthermore this exceptional divisor contains a unique singularity of the blown-up foliation. This singularity is conjugate to the linear singularity with eigenvalues $1,-1$. Consider now a point $p \in M$ regular for $\fol$ where $(\omega)_{\infty}$ intersects $(\omega)_0$ and let $\fol$ be blown-up at $p$. As before, after finitely many blow-ups, the proper transforms of $(\omega)_{\infty}$ and $(\omega)_0$ will be separated. 
The components added by these blow-ups are all contained in the exceptional divisor and thus are invariant by the corresponding foliation. In particular, if we just wanted to ensure that $(\omega)_{\infty}^{\perp \fol}$ does not intersect $(\omega)_0^{\perp \fol}$ at a regular point this would be enough. For the general case, it suffices to note that the order of the exceptional divisor resulting from a single blow-up is the difference between the orders of the components of $(\omega)_{\infty}$ and of $(\omega)_0$ that pass through the center of the blow-up. Thus after finitely many repetitions, there will appear an exceptional divisor which is {\it regular for $\omega$}\, in the sense that it is not contained in either $(\omega)_{\infty}$ or $(\omega)_0$. This leads to the verification of item~3. The same reasoning allows us to obtain item~4 as well. Finally, as to item~5, let $D$ be an irreducible component of $(\omega)_0^{\perp \fol} \cup (\omega)_{\infty}^{\perp \fol}$ or of $(\omega_1)_0^{\perp \fol}$. We need to check that $\fol$ can be made transverse to $D$. Thanks to the preceding items, we can assume that $D \cap {\rm Sing}\, (\fol) =\emptyset$. Next observe that the number of tangencies between $\fol$ and $D$ is finite since $D$ is not invariant by $\fol$ (and of course every tangency has finite contact). Thus once again we only need to blow-up tangency points sufficiently many times. As always the exceptional divisors added in the procedure are all invariant by the foliation and thus do not destroy the previous ``reductions''. This completes the proof of the lemma.\qed The behavior of $\calh$ near points in $(\omega_1)_{\infty} = (\omega)_0^{\perp \fol} \cup (\omega)_{\infty}^{\perp \fol}$ is clear. However the behavior of $\calh$ near points in $(\omega_1)_0$ needs further comments and, in particular, leads to the notion of ``deformed trajectory''.
It is convenient to identify three {\it critical regions}\, where the trajectories of $\calh$ will be allowed to be deformed. These are as follows. \noindent {\bf First critical region}: The divisor $(\omega_1)_0^{\perp \fol}$. \noindent Let $C_j^0$, $j=1,\ldots ,l$, denote the irreducible components of the zero divisor $(\omega_1)_0^{\perp \fol}$ of $\omega_1$. Recalling that every $C_j^0$ is smooth and transverse to $\fol$, we can find a small ``tubular neighborhood'' $\Vv_j$ of $C_j^0$ whose boundary $\partial \Vv_j$ is still transverse to $\fol$, for every $j=1,\ldots ,l$. Besides, for $j$ fixed, we also assume that the intersection of $\Vv_j$ with $(\omega_1)_0$ is reduced to $C_j^0$ (cf. item~4 of Lemma~\ref{revision2}). Finally set $\Vv = \bigcup_{j=1}^l \Vv_j$. If $p \in \partial \Vv$, we can suppose without loss of generality that the leaf $L_p$ of $\fol$ through $p$ ``locally slices'' $\Vv$ into a connected disc. The collection of these ``discs'' forms the fibers of a differentiable submersion $\Vv \rightarrow (\omega_1)_0$. Next let $p \in \partial \Vv$ and let $D_p \subset L_p$ be the above mentioned disc, i.e. $D_p$ is the connected component containing $p$ of $L_p \cap \Vv$. The structure of the trajectories of $\calh$ on $D_p$ is described by Lemma~\ref{blm2}. Denoting by $\calh_{\vert D_p}$ the restriction of $\calh$ to $D_p$, it follows that there are $2m$ separatrices for $\calh_{\vert D_p}$ at $\textsc{Q} \simeq D_p \cap (\omega_1)_0$, $m \geq 2$. These separatrices are divided into two groups. Namely there are $m$ separatrices over which one converges to $\textsc{Q}$ by moving in the sense of their orientation (these separatrices are said {\it to approach $\textsc{Q}$}). The remaining $m$ separatrices are such that one converges to $\textsc{Q}$ by moving in the sense opposite to their orientation (these separatrices are said {\it to leave $\textsc{Q}$}). In addition, the total picture is invariant under a rotation group of order $m$.
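To illustrate the local picture just described, it may help to keep in mind the following model situation (an illustrative assumption, not the general case covered by Lemma~\ref{blm2}): suppose that, in a local coordinate $z$ on $D_p$ centered at $\textsc{Q}$, the restriction of $\omega_1$ to $D_p$ takes the normal form $z^{m-1} \, dz$, and that the trajectories of $\calh_{\vert D_p}$ are the curves along which the local primitive $z^m /m$ has constant imaginary part, oriented in the sense of increasing real part. The separatrices at $\textsc{Q}$ are then the $2m$ rays on which $z^m$ is real: on the $m$ rays where $z^m <0$ the orientation points towards $\textsc{Q}$ (separatrices approaching $\textsc{Q}$), while on the $m$ rays where $z^m >0$ it points away from $\textsc{Q}$ (separatrices leaving $\textsc{Q}$). The invariance of this configuration under the rotation $z \mapsto e^{2\pi i /m} z$ accounts for the symmetry of order $m$ mentioned above.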
In this situation, if $l$ is for example a separatrix approaching $\textsc{Q}$, we allow $l$ to be continued in the ``future'' by following one of the separatrices of $\textsc{Q}$ that leave $\textsc{Q}$. The chosen separatrix can for example be one of the two separatrices that are ``closest'' to $l$ but this is not necessary. More generally every trajectory of $\calh$ entering the neighborhood $\Vv$ is allowed to be continued by ``following'' any of the separatrices of $\calh$ leaving $\textsc{Q}$. \noindent {\bf Second critical region}: Siegel singularities of $\fol$ such that the restriction of $\omega_1$ to their local separatrices is holomorphic. Having fixed a Siegel singularity as above, let $V_0$ be a neighborhood of it similar to the one defined before the statement of Lemma~\ref{3lema3}. In particular we have sets $A = \{ (u,v) \in \C^2 \; ; \; \vert u \vert =\epsilon_1 \; \; {\rm and} \; \; \vert v \vert < \epsilon_2 \}$ and $B = \{ (u,v) \in \C^2 \; ; \; \vert v \vert =\epsilon_2' \; \; {\rm and} \; \; \vert u \vert < \epsilon_1' \}$. To fix notation, let $l$ be an oriented trajectory of $\calh$ entering this neighborhood at an intersection point of $l$ and $A$ (i.e. $l$ is ``close'' to the axis $\{ v=0\}$). Denote by $L$ the leaf of $\fol$ containing $l$. Since the restriction of $\omega_1$ to $\{v=0\}$ is holomorphic at the origin, the trajectory $l$ can be deformed {\it inside}\, $L$ to avoid crossing the set $A$ (i.e. this trajectory can be deformed so as to stay away from the singularity itself). This deformation is similar to the deformations performed in the case of saddle singularities of $\calh$ that appear in connection with the divisor $(\omega_1)_0^{\perp \fol}$. In particular the continuation of $l$ will stay ``close to $\{v=0\}$'' during the procedure.
In fact, this trajectory will leave the singularity by ``following'' one of the separatrices of $\calh_{\vert \{v=0\}}$ that leave the singularity in question (where $\calh_{\vert \{v=0\}}$ stands for the restriction of $\calh$ to $\{v=0\}$). Another possibility to define continuations for $l$ is to let $l$ enter the neighborhood of the mentioned singularity and then use the corresponding Dulac transform to continue $l$ as a trajectory of $\calh$ that is now ``close to $\{u=0\}$''. In this case the desired continuation of $l$ will be ``close'' to one of the separatrices of $\calh_{\vert \{u=0\}}$ oriented so as to leave the mentioned singularity (where $\calh_{\vert \{u=0\}}$ stands for the restriction of $\calh$ to $\{u=0\}$). Summarizing, a trajectory $l$ of $\calh$ intersecting the set $A$ admits all the above mentioned continuations. \begin{obs} \label{localregions} {\rm In the preceding two types of critical regions the ``deformation'' of the trajectory $l$ consists of adding to it a ``small'' segment of trajectory of $\calh^{\perp}$. By construction these pieces of $\calh^{\perp}$-trajectories have length bounded by a ``small constant'' and besides they are strictly contained between two ``genuine'' segments of $\calh$-trajectories whose lengths are bounded from below by positive constants depending solely on $\fol, \, M$. As a consequence these ``deformations'' do not disrupt the global contractive nature of holonomy maps of $\fol$ defined by means of ``deformed trajectories of $\calh$''. We shall return to this point below.} \end{obs} \begin{obs} \label{localregionsandirrationalfoci} {\rm Besides singularities belonging to the Siegel domain, irrational focus singularities may also be considered. Recall that an irrational focus singularity is linearizable and hence it possesses exactly two separatrices. These separatrices are smooth and may be chosen as the coordinate axes in the linearizing coordinates.
The fact that the quotient between the eigenvalues of these singularities cannot be a rational number implies that the only way in which the restriction of $\omega_1$ to these separatrices may be holomorphic occurs when both separatrices are components with multiplicity~$1$ of $(\omega)_{\infty}$. This case will rarely occur, but if it does, the trajectories of $\calh$ will be deformed so as to avoid the singularity in the same way it may be done for analogous Siegel singularities. Since irrational foci have no associated Dulac transforms only this type of continuation will be allowed in the present case.} \end{obs} \noindent {\bf Third critical region}: The divisor $(\omega_1)_0^{\fol}$. By construction the support of the divisor $(\omega_1)_0^{\fol}$ consists of (irreducible) curves invariant by $\fol$. Let $C$ denote one of these curves. Then the restriction of $\omega_1$ to $C$ vanishes identically so that it does not define any real foliation on $C$. Nonetheless Poincar\'e Lemma can still be applied to this situation. In fact, let $c:[0,1] \rightarrow C$ be a path contained in $C$ and consider local transverse sections $\Sigma_{c(0)}, \, \Sigma_{c(1)}$ through $c(0), \, c(1)$ respectively. If the sections $\Sigma_{c(0)}, \, \Sigma_{c(1)}$ are parameterized as indicated in Section~2.3, then the holonomy map ${\rm Hol}\, (c) : \Sigma_{c(0)} \rightarrow \Sigma_{c(1)}$ obtained from $c$ and $\fol$ is such that $[ {\rm Hol}\, (c) ]'(0) =1$. In particular the usual holonomy group associated to the ``leaf'' $C$ with respect to $\fol$ is entirely constituted by local diffeomorphisms tangent to the identity. Since the foliation $\calh$ is not defined on $C$, we shall allow every (``minimizing geodesic'') path joining two points of $C$ with length less than the diameter of $C$ to be used to continue a given trajectory $l$ of $\calh$. Here both the ``length'' of the path and the ``diameter'' of $C$ arise from fixing once and for all some auxiliary Hermitian metric on $M$.
To better explain the above definition, consider two Siegel singularities $\textsc{P}, \textsc{Q}$ of $\fol$ lying in $C$. Denote by $S_{p}$ (resp. $S_{q}$) the local separatrix of $\fol$ transverse to $C$ at $\textsc{P}$ (resp. $\textsc{Q}$). Also fix neighborhoods $V_{p}, \, V_{q}$ of $\textsc{P}, \textsc{Q}$ as in the case discussed in the second critical region. Since $\omega_1$ vanishes identically on $C$, it follows that the restriction of $\omega_1$ to $S_{p}$ (resp. $S_{q}$) is holomorphic. If $l \subset S_{p}$ is a trajectory of $\calh$ that enters $V_{p}$, then $l$ can be continued as a trajectory $l'$ of $\calh$ {\it contained in $S_{q}$ and oriented so as to leave the neighborhood $V_{q}$}. A similar convention applies to trajectories $l$ of $\calh$ that are not contained in $S_{p}$ but that still enter the neighborhood $V_{p}$ of $\textsc{P}$. A continuation $l'$ for the trajectory $l$ will be such that $l, l'$ are contained in the same global leaf $L$ of $\fol$ and $l'$ leaves the neighborhood $V_{q}$ of $\textsc{Q}$. In other words the continuation of these trajectories can be pictured as if the curve $C$ were collapsed into a single point (heuristically imagined as a Siegel singularity whose separatrices would be $S_{p}, \, S_{q}$). Then the mentioned continuation would be defined as in the case of the second critical region discussed above. \begin{obs} \label{localregionsandirrationalfociPRIME} {\rm In line with Remark~\ref{localregionsandirrationalfoci}, the use of Dulac transforms to follow a component $C$ of $(\omega_1)_0^{\fol}$ as above is only possible at a Siegel singularity, i.e. no irrational focus singularity lying in $C$ will be associated with the continuation of trajectories by means of Dulac transforms.} \end{obs} We are now ready to define {\it global deformed trajectories of $\calh$}.
Away from a fixed neighborhood of the three critical regions previously discussed, a deformed trajectory must agree with an ordinary trajectory of $\calh$. However if a trajectory $l$ of $\calh$ enters a critical region, then it possesses all the corresponding continuations mentioned above. As a consequence, every possible continuation of $l$ will eventually leave the critical region in question and become again an ordinary trajectory of $\calh$. In particular, given $p \in M$, the deformed trajectory of $\calh$ through $p$ is in general not uniquely determined: it should be thought of not as a ``single path'' but rather as a collection of paths (or branches) that is allowed to ramify whenever one of its branches enters a critical region. Similar definitions apply if we decide to follow a (deformed) trajectory $l$ of $\calh$ in the direction opposite to its orientation (i.e. when the ``past'' of $l$ is considered). More generally for $\theta \in (-\pi/2 ,\pi/2)$ fixed, the deformed trajectories of $\calh^{\theta}$ are analogously defined and the same remark concerning orientation applies to define their ``continuations in the past''. To fully define what will be understood by a deformed trajectory of $\calh$ (or $\calh^{\theta}$) we still need to clarify what to do when an ordinary trajectory becomes close to the remaining ``singularities'' of $\calh$ (or of $\calh^{\theta}$). However before doing this, it is important to point out that deformed trajectories as considered above are such that the corresponding holonomy maps of $\fol$ still keep the contractive behavior characteristic of ordinary trajectories of $\calh$ (or $\calh^{\theta}$ for $\theta \in (-\pi/2 ,\pi/2)$).
As in Section~2.3, recall that we have fixed an auxiliary Hermitian metric on $M$ so that it is possible to consider the length of paths contained in $M$. For those paths whose images are contained in leaves of $\fol$, their resulting lengths are also comparable with the sum of the lengths of their representatives in a fixed foliated atlas of $M$, cf. Section~2.3. Next let $\theta \in (-\pi/2 ,\pi/2)$ be fixed and consider a path $c:[0,1] \rightarrow L \subset M$ parameterizing a segment of deformed trajectory of $\calh$ (resp. $\calh^{\theta}$) as above. Thus $c$ can be viewed as a concatenation of paths $c^i$ that either are contained in a critical region or are segments of (ordinary) $\calh$-trajectories (resp. $\calh^{\theta}$-trajectories) away from the critical regions and from the remaining singularities of $\calh, \, \calh^{\theta}$. In the latter case, the length of $c^i$ is bounded from below by a positive constant. On the other hand a path $c^i$ whose image is contained in a critical region is such that $c^{i-1}$ and $c^{i+1}$ parameterize an ordinary segment of $\calh, \calh^{\theta}$ (lying in a compact part of the complement of the critical regions and of the remaining singularities of $\calh, \, \calh^{\theta}$). Furthermore these paths $c^i$ are such that their length is uniformly bounded and, besides, the holonomy map of $\fol$ obtained by means of $c^i$ (with respect to suitable transverse sections parameterized as indicated in Section~2.3) is a holomorphic diffeomorphism whose linear part has modulus equal to~$1$. The preceding discussion can then be summarized by Proposition~\ref{betterthanThmblm} below, which ensures that the norm of the derivative at $c(0)$ decays exponentially with the length of $c$. \begin{prop} \label{betterthanThmblm} Consider a path $c: [0,1] \rightarrow L$ that parametrizes a segment of deformed trajectory of $\calh$ or, more generally, of $\calh^{\theta}$ ($-\pi/2 < \theta < \pi/2$).
Then there are constants $C, k$ depending solely on $\theta$ (for $M, \fol, \omega$ and the auxiliary Hermitian metric fixed) such that the estimate below holds \begin{equation} \vert ({\rm Hol}\, (c))' (0) \vert \leq C \exp \, (-k \, {\rm length}\, (c) /2) \; , \label{4eq1} \end{equation} where ${\rm Hol}\, (c)$ stands for the holonomy map of $\fol$ induced by $c$.\qed \end{prop} Let us now complete the definition of deformed trajectories for $\calh$. First note that the ``remaining singularities'' of $\calh, \calh^{\theta}$ are provided either by singular points of $\fol$ (other than Siegel singularities, since these were already taken into account) or by the divisors $(\omega)_0^{\perp \fol}$ and $(\omega)_{\infty}^{\perp \fol}$. For the purposes of this paper however, Proposition~\ref{4.5prop1} allows us to rule out hyperbolic singularities as well as saddle-node singularities from the discussion below. Unless otherwise stated, in what follows we shall simply say {\it trajectory of $\calh$ (resp. $\calh^{\theta}$)}\, instead of ``deformed trajectory of $\calh$ (resp. $\calh^{\theta}$)''. Hence for $p \in \calk$, let $l_p$ denote the trajectory of $\calh$ through $p$ (in precise words, this means a deformed trajectory of $\calh$ through $p$) and consider the leaf $L$ of $\fol$ containing $l_p$. Let us first introduce the notion of {\it endpoint}\, for $l_p$. The trajectory $l_p$ is said to have an {\it endpoint}\, at a point $q \in M$ if one of the following possibilities holds: \begin{itemize} \item $q \in (\omega)_0^{\perp \fol}$ is a sink of $\calh_{\vert L}$ and $\overline{l}_p^+ = q$. \item $q \in (\omega)_{\infty}^{\perp \fol}$ is a source of $\calh_{\vert L}$ and $\overline{l}_p^- = q$. \item $q$ is a sink-irrational focus (resp. source-irrational focus) singularity of $\fol$ to which $l_p$ converges (resp. from which $l_p$ emanates, cf. Lemma~\ref{4.5lema3} and Remark~\ref{4.5obs1}).
\end{itemize} Similar definitions apply to the case of $\calh^{\theta}$-trajectories, $\theta \in (-\pi/2 ,\pi/2)$. Sometimes we shall use the expressions {\it future end}\, (resp. {\it past end}) to refer to the cases above concerned with a sink-like (resp. source-like) endpoint of $l_p$. To define the trajectory $l_p$ of $\calh$ through $p$ we start with the ordinary trajectory of $\calh$ through $p$. Whenever this trajectory enters one of the above described critical regions, all its resulting ramifications are considered together as its continuations. Thus it is perhaps more convenient to speak about {\it branches} of $l_p$. In this case a branch of $l_p$ has a future endpoint if it converges to a point $q$ of $M$ that behaves locally as a sink for $\calh$. In view of the preceding, $q$ either belongs to $(\omega)_0^{\perp \fol}$ or is a sink-irrational focus singularity of $\fol$. Naturally we can also follow the trajectory $l_p$ in the sense opposite to its standard orientation. In this case, we shall denote the resulting semi-trajectory by $l_p^-$ (in certain cases where the context might be unclear, the semi-trajectories through $p$ with the usual orientation will also be denoted by $l_p^+$). Past endpoints for branches of $l_p^-$ are then analogously defined and so are future and past endpoints for branches of the trajectories $l_p^{\theta} = l_p^{\theta, +}$ and $l_p^{\theta, -}$ of $\calh^{\theta}$ through $p$. The length of a branch of the semi-trajectory $l_p^+$ is defined in natural differential geometric terms for the auxiliary Hermitian metric fixed from the beginning provided that the branch is finite. Otherwise the branch is said to be of infinite length. Now the semi-trajectory $l_p^+$ of $\calh$ (resp. $\calh^{\theta}$) through $p$ is said to be {\it finite}\, if and only if the supremum of the lengths of all its branches is finite. In this case the number of branches of $l_p^+$ is itself finite so that the supremum is also attained.
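In symbols, denoting by $\mathcal{B} (l_p^+)$ the set of branches of the semi-trajectory $l_p^+$ (auxiliary notation introduced only for this formula), the definition just given reads
$$
{\rm length}\, (l_p^+) \, = \, \sup_{\beta \in \mathcal{B} (l_p^+)} {\rm length}\, (\beta) \, ,
$$
and $l_p^+$ is finite if and only if this supremum is finite; as observed above, in this case $\mathcal{B} (l_p^+)$ is itself finite and the supremum is attained.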
Once again the definition of length for the semi-trajectory $l_p^-$ can analogously be given. Finally the deformed trajectory $l_p$ of $\calh$ (resp. $\calh^{\theta}$) through $p$ will be called finite if both semi-trajectories $l_p^+$ and $l_p^-$ are so. The length of $l_p$ will then be the maximum of the lengths of all branches contained in $l_p$. \begin{obs} {\rm If a branch of an $\calh$-trajectory $l_p$ (resp. $\calh^{\theta}$-trajectory $l_p^{\theta}$) consists of a loop, possibly passing through critical regions, then this branch contains neither a future nor a past endpoint. It then follows that its length is infinite: one is allowed to go around the loop infinitely many times, which explains why the length of the loop must be considered as infinite. This definition is also very natural since the holonomy of $\fol$ associated to one of these trajectories is clearly hyperbolic. In fact, with our terminology, the length of $l_p$ is finite if and only if all branches of $l_p$ possess both future and past ends and, in addition, the supremum of the lengths of these branches is finite. It is also clear that the above definition is invariant by blow-ups/blow-downs. Therefore the length of the trajectories of $\calh$ (resp. $\calh^{\theta}$) can be considered whether or not the foliation $\fol$ has reduced singularities. More generally, this definition makes sense whether or not the normalizing conditions of Lemma~\ref{revision2} are satisfied.} \end{obs} With the above terminology, the contents of Proposition~\ref{betterthanThmblm} can be complemented by the following simple generalization of Theorem~\ref{blm} that is better adapted to our needs. Let $K$ be a compact part of the complement of the singular set of $\fol$ and consider a path $c: [0,1]\rightarrow K$ parameterizing a segment of deformed $\calh$-trajectory (resp. $\calh^{\theta}$-trajectory, $\theta \in (-\pi/2 ,\pi/2)$).
Finally let ${\rm Hol}\, (c)$ denote the holonomy map of $\fol$ induced by $c$ and recall that these maps are identified with local diffeomorphisms of $\C$ by means of transverse sections $\Sigma_{c(0)}, \, \Sigma_{c(1)}$ parameterized by $\omega$ (cf. Section~2.3). Then we have: \begin{teo} \label{Totalversionofblm} With the preceding notations there is $\delta >0$ (depending only on $K$) and constants $C, k > 0$ such that the following holds: \begin{enumerate} \item ${\rm Hol}\, (c)$ is defined on the transverse disc $B_{c(0)} (\delta )$ of radius $\delta > 0$ about $c(0)$. \item The image $( {\rm Hol}\, (c)) (B_{c(0)} (\delta ))$ of $B_{c(0)} (\delta )$ by ${\rm Hol}\, (c)$ is contained in a transverse disc $B_{c(1)} (r)$ of radius $r$ about $c(1)$ where $$ r \leq C \exp \, (-k \, {\rm length}\, (c) /2) \, . $$ \end{enumerate} More generally if $c$ parameterizes a segment of deformed trajectory of $\calh^{\theta}$, $\theta \in (-\pi/2 ,\pi/2)$, then the statement still holds, only the values of the constants $\delta, C, k$ will depend further on $\theta$. \end{teo} \noindent {\it Proof}\,: Fix a finite covering of $K$ by foliated coordinates of $\fol$ along with transverse sections parameterized by $\omega$ as indicated in Section~2.3. According to Proposition~\ref{betterthanThmblm}, there are constants $C_1, k_1$ such that the absolute value of the derivative of ${\rm Hol}\, (c)$ at $c(0)$ satisfies the estimate $$ \vert ({\rm Hol}\, (c))' (0) \vert \leq C_1 \exp \, (-k_1 \, {\rm length}\, (c) /2) \, . $$ As in \cite{blm}, this estimate allows us to show that ${\rm Hol}\, (c)$ is defined on a uniform domain $B_{c(0)} (\delta )$. To check the rest of the statement, note that ${\rm Hol}\, (c)$ is univalent on its domain of definition. Thus modulo reducing this domain, K\"oebe's theorem (cf. 
\cite{unival}) can be applied to ensure that ${\rm Hol}\, (c)$ has ``bounded distortion'' so that the diameter of its image can be estimated from the value of its derivative at $c(0)$. This completes the proof of the theorem for $\theta=0$. The general case is however totally analogous.\qed From now on, for the rest of the paper, we fix a closed set $\calk \subseteq \supT$ that is minimal for $\fol$. Denote by $\calh_{\calk}$ (resp. $\calh^{\theta}_{\calk}$) the restriction of $\calh$ (resp. $\calh^{\theta}$) to $\calk$. As is usually the case, in the sequel the word ``trajectory'' actually means ``deformed trajectory''. Let us close this section with the following proposition: \begin{prop} \label{4prop1} The following alternative holds: \begin{itemize} \item There is a uniform constant $C$ (resp. $C^{\theta}$) such that the length of every trajectory of $\calh_{\calk}$ (resp. $\calh^{\theta}_{\calk}$) is less than $C$ (resp. $C^{\theta}$). \item There is a non-empty compact set $\calk^0 \subseteq \calk$, invariant by $\calh$ (resp. $\calh^{\theta}$), where all the corresponding trajectories of (the restriction of) $\calh$ (resp. $\calh^{\theta}$) have infinite length. \end{itemize} \end{prop} \begin{obs} \label{calhinvariant} {\rm The invariance of $\calk^0$ by $\calh$ (resp. $\calh^{\theta}$) means that through each point of $\calk^0$ there passes {\it a branch of trajectory of $\calh$ (resp. $\calh^{\theta}$)}\, which is entirely contained in $\calk^0$. Since, in general, the trajectory of $\calh$ (resp. $\calh^{\theta}$) through $p$ is constituted by several branches, it may well happen that some of them are not fully contained in $\calk^0$.} \end{obs} \vspace{0.2cm} \noindent {\it Proof of Proposition~(\ref{4prop1})}\,: In the sequel we suppose that the conditions of Lemma~\ref{revision2} are satisfied. It suffices to check the statement for the foliation $\calh=\calh^0$. Denote by ${\rm Irr}_+ (\fol)$ (resp.
${\rm Irr}_- (\fol)$) the irrational focus singularities of $\fol$ in $\calk$ that behave as a sink (resp. source) for $\calh$ in the sense of Lemma~(\ref{4.5lema3}). Given one such singularity $p$, let $B_p (\epsilon)$ be the real $3$-dimensional ball of radius $\epsilon>0$ about $p$. It is easy to see that $\fol$ is transverse to this ball for $\epsilon$ sufficiently small. In fact, the foliation $\calh$ is also transverse to this ball, as follows easily from Proposition~(\ref{4.5lema3}). Let then ${\rm Irr}_+^{\epsilon} (\fol)$ be the union of these balls about the points in ${\rm Irr}_+ (\fol)$. The set ${\rm Irr}_-^{\epsilon} (\fol)$ is analogously defined. Suppose that all the (deformed) $\calh$-trajectories contained in $\calk$ are of finite length. We are going to show the existence of a uniform bound for all the corresponding lengths. If this bound did not exist, then there would be a sequence $\{ l_i\}_{i \in \N}$ of branches of $\calh$-trajectories contained in $\calk$ such that the sequence formed by their corresponding lengths goes off to infinity. For each $i$, let $c_i : (a_i , b_i) \subset \R \rightarrow M$ be a parametrization of $l_i$. Naturally $c_i (a_i)$ belongs to $(\omega)_{\infty} \cup {\rm Irr}_- (\fol)$ whereas $c_i (b_i)$ belongs to $(\omega)_0 \cup {\rm Irr}_+ (\fol)$. Thus we conclude that, in fact, $c_i (a_i) \in (\omega)_{\infty}^{\perp \fol} \cup {\rm Irr}_- (\fol)$ and $c_i (b_i) \in (\omega)_0^{\perp \fol} \cup {\rm Irr}_+ (\fol)$. Modulo passing to a subsequence, we can suppose that the $c_i (a_i)$ (resp. $c_i (b_i)$) converge to a point $a \in \calk$ (resp. $b \in \calk$). One of the following two possibilities must occur: \noindent 1. $a$ (resp. $b$) belongs to $D_{\infty} \subseteq (\omega)_{\infty}$ (resp. $D_{0} \subseteq (\omega)_0$) where $D_{\infty}$ (resp. $D_0$) stands for an irreducible component of $(\omega)_{\infty}$ (resp. $(\omega)_0$). \noindent 2. $a$ (resp. $b$) belongs to ${\rm Irr}_- (\fol)$ (resp.
${\rm Irr}_+ (\fol)$). In this case, modulo shortening the length of $c_i$ by a uniform small constant (cf. Lemma~\ref{4.5lema3}) we can replace $a_i$ (resp. $b_i$) by $a_i'$ (resp. $b_i'$) such that $c_i (a_i') \in {\rm Irr}_-^{\epsilon} (\fol)$ (resp. $c_i (b_i') \in {\rm Irr}_+^{\epsilon} (\fol)$). Therefore we can consider without loss of generality that $a \in {\rm Irr}_-^{\epsilon} (\fol) \cup {\rm Irr}_+^{\epsilon} (\fol)$. Consider the leaf $L_a$ of $\fol$ through $a$ and note that the restriction of $\calh$ to $L_a$ is well-defined (it is not fully constituted by a critical region of third type). In particular, there is a trajectory $l_a^+$ of $\calh$ emanating from $a$. Although this trajectory possibly consists of several branches, due to ramification at critical regions, it contains one special branch defined as follows: whenever the (branch of the) trajectory in question enters a critical region, its continuation is dictated by the continuations of the $l_i$'s (for $i$ large enough). The resulting branch $l_a^+$ clearly has infinite length. Otherwise, trajectories emanating from points sufficiently close to $a$, with appropriate ramifications chosen at critical regions, would have bounded length, contradicting our assumption. The preceding discussion also shows the existence of semi-trajectories of infinite length provided that the first case in the statement of the proposition does not occur. Let then $l^+$ denote a branch of infinite length contained in some deformed trajectory in $\calk$. Since $\calk$ is compact, the closure $\overline{l}^+ \subset \calk$ of $l^+$ is not empty. Besides, every semi-trajectory through a point of $\overline{l}^+$ is infinite or, in more accurate terms, it contains a branch of infinite length.
In fact, if all branches of a deformed trajectory through a point $p \in \overline{l}^+ \subset \calk$ were of finite length, then the above argument would imply that $\overline{l}^+$ intersects $(\omega)_0^{\perp \fol} \cup {\rm Irr}_+^{\epsilon} (\fol)$. This is however impossible since it contradicts the infinite length of $l^+$. In other words, $\calk^0 = \overline{l}^+$ satisfies the condition in the second alternative of our statement. The proposition is proved.\qed Since $\calk^0=\overline{l}^+$ and $(\omega)_0^{\perp \fol} \cup {\rm Irr}_+^{\epsilon} (\fol)$ are compact and disjoint, there is a positive distance between them. Also, by construction, $l^+$ cannot accumulate (in the future) on $(\omega)_{\infty}^{\perp \fol} \cup {\rm Irr}_-^{\epsilon} (\fol)$ thanks to Lemma~(\ref{blm1}) and Lemma~(\ref{4.5lema3}). Thus we obtain: \begin{coro} \label{4coro1} Suppose that the first alternative in Proposition~(\ref{4prop1}) is not verified. Then there is a small open neighborhood $V$ of $(\omega)_0^{\perp \fol} \cup (\omega)_{\infty}^{\perp \fol} \cup {\rm Irr}_-^{\epsilon} (\fol) \cup {\rm Irr}_+^{\epsilon} (\fol)$ such that $\calk^0 \cap V =\emptyset$. In particular all the singularities of $\fol$ lying in $\calk^0$ are in the Siegel domain unless they are irrational foci as in Remark~\ref{localregionsandirrationalfoci}. \end{coro} \section{Invariant currents vs. infinite trajectories of $\calh$} The remaining two sections are devoted to proving the theorems stated in the Introduction. We keep the context and the notations of Section~5. Recalling that $\calk$ stands for a minimal set of $\fol$ contained in the support of $T$, the restriction of $\calh$ to $\calk$ is going to be denoted by $\calh_{\calk}$. Let us begin by rephrasing Theorem~A: \begin{teo} \label{fim1} Let $\fol$ and $T$ be as above. If $\calk$ does not contain a compact leaf of $\fol$, then all deformed $\calh$-trajectories (resp.
$\calh^{\theta}$-trajectories with fixed $\theta \in (-\pi/2, \pi/2)$) in $\calk$ have length smaller than some positive constant ${\rm Const}$. \end{teo} It suffices to prove the statement for $\calh =\calh^0$ since the generalization to $\calh^{\theta}$ is straightforward. Thus let us suppose that the lengths of the $\calh$-trajectories in $\calk$ are not uniformly bounded. Our aim will then be to ensure the existence of an algebraic curve contained in $\calk$. Since the lengths of the $\calh$-trajectories in $\calk$ are not uniformly bounded, we can consider a compact set $\calk^0 \subset \calk$ satisfying the conclusions of Proposition~\ref{4prop1} and Corollary~\ref{4coro1}. Moreover, by applying Zorn's Lemma, we can assume without loss of generality that $\calk^0$ is {\it minimal}\, for $\calh$, i.e. through every point in $\calk^0$ there passes a branch of (deformed) trajectory of $\calh$ which is dense in $\calk^0$ (the branch being obviously contained in $\calk^0$). Here it is worth pointing out that the assumption that $\calk^0$ is minimal is not indispensable for our discussion (and so the use of Zorn's Lemma can also be avoided). In fact, it would be enough to consider an accumulation point of a $\calh$-trajectory $l^+ \subset \calk$ of infinite length that happens to be regular for $\calh$. By Theorem~\ref{Totalversionofblm}, the holonomy maps of $\fol$ induced by (segments of) $l^+$ are defined on a uniform domain. Thus if $l^+$ accumulates on a regular point of $\calh$ (and of $\fol$), this trajectory will be captured by the holonomy maps to which it gives rise (modulo a slight local deformation of $l^+$). The latter statement would be sufficient for our purposes. Yet it is simpler to assume that $\calk^0$ is minimal so that a self-accumulating $\calh$-trajectory $l^+$ can be selected. Next let us perform on $\fol$ the normalizations described in Lemma~\ref{revision2}.
Since these transformations include the blowing-up of points, they give rise to {\it compact leaves}\, contained in the closure of the transform of $\calk$. The curves obtained in this way however {\it do not form loops}\, and this will be exploited in the sequel, cf. below. More generally, let $A_{\calk}$ denote the union of all algebraic curves contained in the transform of $\calk$. Actually, by an abuse of notation, the closure of the proper transform of $\calk$ will still be denoted by $\calk$. In this sense, $\calk$ is no longer minimal for $\fol$ but it satisfies the following condition: every leaf of $\fol$ in $\calk$ that is not dense in $\calk \setminus A_{\calk}$ is necessarily contained in $A_{\calk}$ itself. This condition is going to be used in the sequel. To give more accurate statements, consider all irreducible compact leaves of $\fol$ contained in $\calk$. Obviously we can assume there are finitely many $D_1, \ldots ,D_r$ of those. Besides, $A_{\calk} = D_1 \cup \cdots \cup D_r$. We shall say that these curves {\it contain a loop}\, if there are pairwise distinct points $p_{i_1}, \ldots , p_{i_s}$ with $p_{i_j} \in D_{i_j} \cap D_{i_{j+1}}$ for $1\leq j <s$ and $p_{i_s} \in D_{i_s} \cap D_{i_1}$. We can now state a sharper form of Theorem~(\ref{fim1}). \begin{teo} \label{fim2} Let $\fol$ and $T$ be as in Theorem~\ref{fim1}. Suppose that $\calk$ contains only finitely many irreducible compact curves $D_1, \ldots ,D_r$ invariant by $\fol$ and that these curves do not contain loops. Suppose also that the remaining leaves of $\fol$ are dense in $\calk \setminus A_{\calk}$. Then all (deformed) $\calh$-trajectories in $\calk$ have length smaller than some positive constant ${\rm Const}$. An analogous statement is valid for the $\calh^{\theta}$-trajectories, $\theta \in (-\pi/2, \pi/2)$. \end{teo} Theorem~\ref{fim1} is an immediate consequence of Theorem~\ref{fim2}.
In fact, in the context of Theorem~\ref{fim1} (before performing the normalizations associated with Lemma~\ref{revision2}), we assume that the lengths of all deformed $\calh$-trajectories contained in $\calk$ are not uniformly bounded. In particular there is a compact set $\calk^0 \subset \calk$ and a self-accumulating infinite branch $l^+$ of a deformed $\calh$-trajectory that is contained in $\calk^0$. To pass from this situation to the context of Theorem~\ref{fim2}, let us now perform the normalizations of Lemma~\ref{revision2}. In view of Theorem~\ref{fim2}, the irreducible components of $A_{\calk}$ must form a loop. Since the components of $A_{\calk}$ introduced in the course of the normalization procedure in question are rational curves contained in pairwise disjoint tree-like arrangements, the only way for the components of $A_{\calk}$ to form a loop arises from the existence of algebraic curves in the initial minimal set $\calk$. Thus Theorem~\ref{fim1} follows. The rest of this paper is ultimately devoted to the proof of Theorem~\ref{fim2} for the case $\calh=\calh^0$. The extension to $\calh^{\theta}$ for $\theta \in (-\pi/2, \pi/2)$ will be left to the reader. For this we assume that $\fol$, $\omega$, $\omega_1$ and so on, satisfy all the conditions in the statement of Lemma~\ref{revision2}. In particular the so-called ``local invariance condition'' of Section~3 concerning singularities of $\fol$ that belong to the Siegel domain is verified. Also, modulo fixing a neighborhood $\Ww$ of the singular set of $\fol$, holonomy maps of $\fol$ obtained by means of (segments of) deformed $\calh$-trajectories contained in the complement of $\Ww$ must satisfy the conclusions of Theorem~\ref{Totalversionofblm}. Besides, $l^+$ and $\calk^0$ will always be as indicated above. Now we have: \begin{lema} \label{6lema2} We have $\calk^0 \cap {\rm Sing}\, (\fol) \neq \emptyset$. \end{lema} \noindent {\it Proof}\,: It is a simple application of Theorem~\ref{Totalversionofblm}.
Suppose that the statement is false. Thus, modulo reducing $\Ww$, we can assume that $\calk^0$ lies entirely in the complement of $\Ww$. Now consider a parametrization $c$ for a trajectory of $\calh$ such that $c(0) =p \in \calk^0$ (where $p$ does not belong to any critical region). Fix a local transverse section $\Sigma_p$ and a disc $B_p (r) \subset \Sigma_p$ as in item~1 of Theorem~\ref{Totalversionofblm}. Finally, for a fixed $t_0 \in \R_+$, let ${\rm Hol}\, (c_{t_0})$ be the holonomy map associated to the restriction of $c$ to $[0,t_0]$. By construction, ${\rm Hol} \, (c_{t_0})$ is defined on $B_p (r)$ for every $t_0$. On the other hand, since the trajectory of $\calh$ through $p$ is dense in $\calk^0$, there is a sequence of times $t_0^1 , t_0^2 ,\ldots$ going to infinity and such that $\{ c (t_0^i) \}$ converges to $p$ when $i \rightarrow \infty$. Because $\Sigma_p$ is not a transverse section for $\calh$ at $p$, we cannot ensure {\it a priori}\, that $c(t_0^i)$ can be chosen in $\Sigma_p$ for every $i \in \N$. Nonetheless, modulo performing a slight modification of the trajectories of $\calh$ on a neighborhood of $p$ (similar in spirit to the ``deformed'' trajectories arising from the first critical regions) this assumption can be made without loss of generality. Now, thanks to the second part of the statement of Theorem~\ref{Totalversionofblm}, it follows that the image of $B_p (r)$ under ${\rm Hol} (c_{t_0^i})$ is contained in a disc of radius $r/10$ about $c(t_0^i) \in \Sigma_p$ provided that $i$ is large enough. In other words, for $i$ very large ${\rm Hol} \, (c_{t_0^i})$ takes the disc $B_p (r)$ inside itself. This actually implies the existence of a loop with hyperbolic holonomy for $\fol$. As already seen, this gives a contradiction in the present case since $T$ is diffuse, cf.
Lemma~\ref{atomicmass}.\qed Recall that, in principle, singularities of $\fol$ lying in $\calk^0$ are either Siegel singularities (possibly associated to critical regions of second type) or irrational foci as in Remark~\ref{localregionsandirrationalfoci}. The latter singularities are however necessarily avoided by the trajectories of $\calh$, as discussed in Section~5. Therefore irrational focus singularities can be ignored in our context and we can suppose that all singularities of $\fol$ lying in $\calk^0$ are, in fact, Siegel singularities. Let us now state a proposition that plays a key role in the proof of Theorem~\ref{fim2}. \begin{prop} \label{l+closed} The above mentioned trajectory $l^+$ and set $\calk^0$ can be chosen so that $l^+ \subset \calk^0$ is closed. \end{prop} In view of the assumption on the minimality of $\calk^0$ with respect to $\calh$, the preceding proposition actually says that $\calk^0$ is reduced, in a suitable sense, to a closed trajectory $l^+$. On the other hand, recall that our definition of ``closed trajectory'' allows $l^+$ to pass through singularities of $\fol$ lying in $\calk^0$ (which are necessarily Siegel singularities as already pointed out). Indeed, a closed trajectory must necessarily go through singularities of $\fol$ since otherwise it gives rise to a holonomy map of $\fol$ possessing a hyperbolic fixed point. As already seen, this forces the current $T$ to be concentrated over an algebraic curve, contradicting its diffuse nature. On the other hand, the fact that a closed trajectory goes through a singularity of $\fol$ implies the existence of at least one saddle connection for $\fol$. In the rest of this section Proposition~\ref{l+closed} is going to be proved. In the next section we shall use it to derive the proof of Theorem~\ref{fim2} and of Theorem~B in the Introduction.
As in Section~4, we consider the connected components $E=E_1, E_2, \ldots$ of the compact curves invariant by $\fol$ and contained in $\calk$. Hence each $E_i$ consists of a number of irreducible curves $D_{i_k} \in \{ D_1, \ldots ,D_r\}$. Note that our terminology allows $E_i$ to contain no curve, i.e. to be reduced to a Siegel singularity that does not belong to any compact curve invariant by $\fol$. To make the subsequent discussion more transparent we shall first consider the following special situation: \noindent {\bf First Case}: there is a unique connected component $E$. The general case can easily be deduced from our discussion as it will be shown at the end of the section. Let us then fix a singularity $p \in E \cap \calk^0$ (lying away from the critical regions). The next lemma allows us to suppose in addition that a segment of $l^+$ is contained in a local separatrix of $\fol$ at $p$. \begin{lema} \label{6lema3} There is a (deformed) semi-trajectory of $\calh_{\calk^0}$ contained in a separatrix of $\fol$ and having $p$ as an accumulation point (where $\calh_{\calk^0}$ stands for the restriction of $\calh$ to $\calk^0$). This trajectory will still be denoted by $l^+$. \end{lema} \noindent {\it Proof}\,: Fix local coordinates $(u,v)$ around $p \simeq (0,0)$ in which the $1$-form $\omega$ defining $\fol$ satisfies Equation~(\ref{siegel2}). We choose $u,v$ so that the trajectories of $\calh$ in $\{ v=0 \}$ converge to $p \simeq (0,0)$. Next let $\Sigma_{\theta}$ be a local transverse section passing through the point $( e^{2\pi i \theta} ,0)$, $\theta \in [0, 2\pi)$. Since $p \in \calk^0$, Proposition~(\ref{3prop1}) implies the existence of a sequence of points $(\theta_i , v_i)$ such that $(e^{2\pi i \theta_i} , v_i) \in \calk^0$ and $\vert v_i \vert \rightarrow 0$. Since $\calk^0$ is closed, it follows that there is $\theta_{\infty}$ such that $(e^{2\pi i \theta_{\infty}} ,0) \in \calk^0$.
The trajectory of $\calh$ through this point then satisfies the required conditions.\qed \begin{obs} \label{vamos} {\rm Without loss of generality we can suppose that $\theta_{\infty} =0$ so that $(1,0) \in \calk^0$. We also set $\Sigma =\Sigma_0$ and denote by $l$ the trajectory of $\calh$ through $(1,0)$ which is obviously contained in $\calk^0$.} \end{obs} \begin{lema} \label{6lema3.5} $l^+$ is not entirely contained in $E$. \end{lema} \noindent {\it Proof}\,: Suppose for a contradiction that $l^+$ is entirely contained in $E$. Suppose that $D_1$ is the irreducible component of $E$ containing $p$. We can assume without loss of generality that $D_1$ contains the whole of $l^+$. Indeed, suppose that by following $l^+$ one passes from $D_1$ to another irreducible component $D_2$. This passage is then made through a singularity $q_{1,2}$, with $\{ q_{1,2} \} = D_1 \cap D_2$, which belongs to the Siegel domain. By assumption the orientation of the trajectories of $\calh$ around $q_{1,2}$ (always given by Lemma~\ref{3lema1}) is such that they go from the separatrix contained in $D_1$ to the separatrix contained in $D_2$. Hence the trajectory $l^+$ cannot return to $D_1$ through $q_{1,2}$. Because $D_1 \cap D_2 = \{ q_{1,2} \}$, this trajectory cannot return to $D_1$ through any point in $D_2$. Since the ``graph of irreducible components'' associated to $E$ contains no loop, we conclude that $l^+$ will never return to $D_1$. In other words, if $l^+ \subset E$, then $l^+$ will eventually be ``captured'' by an irreducible component of $E$ that can be supposed to be $D_1$. On the other hand, the trajectory $l^+$ cannot approach any singularity $q \in D_1$ where the leaves of $\calh$ restricted to $D_1$ approach $q$. Otherwise $l^+$ would leave $D_1$ by means of the separatrix of $\fol$ at $q$ which is transverse to $D_1$. This, in fact, implies that the complement of a compact part of $l^+$ does not accumulate on any singularity.
Since $D_1$ is compact and $l^+$ is of infinite length, Theorem~\ref{Totalversionofblm} can be employed to ensure that the holonomy group of $D_1 \setminus {\rm Sing}\, (\fol)$, w.r.t. the foliation $\fol$, contains a hyperbolic element. This is however impossible since it would force $T$ to be concentrated over $D_1$.\qed Recalling Remark~(\ref{vamos}), we can suppose that $l^+$ arrives at $E$ through $p$. In other words, around $p$ there is a separatrix of $\fol$ transverse to $E$ (given by $\{ v=0\}$ and denoted by $S_p$) in the local coordinates $(u,v)$ used in the proof of Lemma~(\ref{6lema3}), with $\{ u=0\} \subset E$. Since $E$ is a connected component of the set of all compact curves invariant by $\fol$ that are contained in $\calk$, it follows that this separatrix is contained neither in the divisor of zeros and poles of $\omega$ nor in $(\omega_1)_0$. Similarly, letting $q$ denote the singularity of $\fol$ through which $l^+$ leaves $E$, in coordinates $(w,t)$ around $q$, with $\{ w=0\} \subset E$, there is a separatrix of $\fol$ at $q$ which is transverse to $E$ (given by $\{ t=0\}$ and denoted by $S_q$). Again this separatrix is not contained in $(\omega)_0 \cup (\omega)_{\infty} \cup (\omega_1)_0$. To prove Proposition~\ref{l+closed} let us suppose for a contradiction the existence of a $\calh$-trajectory $l^+$ in $\calk^0$ satisfying the above conditions but which is not a closed trajectory passing through singular points of $\fol$. Recall that $l^+$ accumulates on itself, i.e. it has non-trivial recurrence. By using the local coordinates $(u,v)$, $(w,t)$ introduced above, the non-trivial recurrence of $l^+$ implies the existence of points $(u_n ,v_n) = (e^{2\pi i \theta_n} ,v_n)$ satisfying the following: \begin{enumerate} \item $(u_n ,v_n) = (e^{2\pi i \theta_n} ,v_n)$ belongs to $l^+$ for every $n \in \N$. \item Both sequences $\{ \theta_n \}, \, \{ v_n \}$ converge to zero. \end{enumerate} Actually we can be more precise. Let $U_E$ be a small ``tubular'' neighborhood of $E$.
Let us consider the ``full'' sequence of ``first returns'' of $l^+$ to $U_E$ which will still be denoted by $(e^{2\pi i \theta_n} ,v_n)$. If $U_E$ is appropriately chosen, then the local connected component of $l^+$ through $(e^{2\pi i \theta_n} ,v_n)$ satisfies the conclusions of Proposition~(\ref{3prop1}). We have: \noindent {\bf Claim}: We can assume that $\theta_n =0$ for every $n$. The above assumption is not really needed from a strict point of view. It simply allows us to shorten our discussion, which applies equally well to the general case. It can also be formalized by again locally deforming the leaves of $\calh$ on a neighborhood of the circle $(e^{2\pi i \theta} ,0)$, $\theta \in [0,2\pi)$. This deformation is essentially given by the local holonomy of $\{ v=0\}$ and affects neither the global dynamics of $l^+$ nor the estimates involved in the holonomy of (compact pieces of) $l^+$. Summarizing, in what follows we assume that $l^+$ enters the ``tubular neighborhood'' $U_E$ by means of a sequence of points having the form $P_n= (1, v_n)$. Besides, the sequence $\{ \vert v_n \vert \}$ converges to $0 \in \C$. In particular, these points belong to $\Sigma_{\rm in}$, a local transverse section through the point $(1,0)$ in $(u,v)$-coordinates. In the sequel we sometimes identify the point $(1,v) \in \Sigma_{\rm in}$ with the point $v \in \C$, thus identifying $\Sigma_{\rm in}$ itself with a neighborhood of $0 \in \C$. Let us briefly review the nature of the holonomy map of $\fol$ associated to a trajectory of $\calh$ as above. We begin with a construction on $U_E$ involving the generalized Dulac transform introduced in Section~4. It is clear that the segment of $l^+$ delimited by the points $p,q$ above verifies the condition discussed in Section~4 in connection with the generalized Dulac transform.
Let then $\Sigma_{\rm out}$ be a transverse section through the point $(1,0)$ in $(w,t)$-coordinates (on a neighborhood of $q$) so that the corresponding generalized Dulac transform is well-defined. The last statement can be made precise as follows: Let $V_0 \subset \Sigma_{\rm in}$ be a simply connected domain containing a point $(1,z) \in \calk^0 \cap \Sigma_{\rm in}$. According to Sections~3,~4, the oriented trajectory $l_{(1,z)}$ of $\calh$ through $(1,z)$ intersects $\Sigma_{\rm out}$ at a point $(1, z') \in \calk^0 \cap \Sigma_{\rm out}$. Parameterizing by $c: [0,1] \rightarrow l_{(1,z)}$ the segment of $l_{(1,z)}$ delimited by $(1,z)$ and $(1, z')$, we ask the generalized Dulac transform ${\rm GDul} : V_0 \rightarrow \Sigma_{\rm out}$ to be well-defined w.r.t. the path $c$ (in the sense of Sections~3, 4). Obviously ${\rm GDul} (1,z) = (1, z')$. Modulo reducing $U_E$, we can suppose that $U_E$ is the saturation of $\Sigma_{\rm out}$ by $\fol$. Finally, let $\lambda$ denote the exponent associated with this generalized Dulac transform as in the context of Proposition~\ref{4.5prop2}. Now consider the compact set $\calk^0 \setminus U_E$ where the foliation $\fol$ is regular. Here the holonomy associated to $\fol$ and to a segment of the trajectory $l^+$ contained in $\calk^0 \setminus U_E$ has a clear meaning. Besides, if $c: [0,1] \rightarrow \calk^0 \setminus U_E$ is a path parameterizing a segment of $l^+$, then ${\rm Hol}\, (c)$ satisfies the conclusions of Theorem~\ref{Totalversionofblm}; in particular ${\rm Hol}\, (c)$ is defined on a transverse disc of uniform radius $\delta >0$ (regardless of the length of $c$). Resuming the notations of Proposition~(\ref{4.5prop2}), there are two cases to be considered according to whether or not the value of $\lambda$ is rational. \vspace{0.1cm} \noindent $\bullet$ Let us first suppose that $\lambda \in \R \setminus \Q$. Recall that $\lambda$ is less than or equal to~$1$. Let $l^+_{p,q}$ denote the segment of $l^+$ delimited by $p,q$.
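To fix ideas, the role of the exponent $\lambda$ can be illustrated in the model case of a single linearizable Siegel singularity; this computation is only meant as an illustration, since the actual generalized Dulac transform of Proposition~\ref{4.5prop2} is obtained by composing finitely many such local maps with regular holonomies along $E$. In the linear model given by the vector field $u \, \partial /\partial u - \lambda \, v\, \partial /\partial v$, with $\{ u=0\} \subset E$ and transverse separatrix $\{ v=0\}$, the leaves satisfy $v\, u^{\lambda} = {\rm const}$ (for a fixed branch of $u^{\lambda}$). Following the leaf through the point $(1,z)$ of the section $\{ u=1\}$ until it reaches the section $\{ v=1\}$ yields $$ z \longmapsto z^{1/\lambda} \, , \qquad \vert z^{1/\lambda} \vert = \vert z \vert^{1/\lambda} \, . $$ Since $0 < \lambda \leq 1$, a sector of radius $2r_0$ is thus taken inside a disc of radius comparable to $r_0^{1/\lambda} \leq r_0$, which is the mechanism behind the estimate $r_0' \simeq r_0^{1/\lambda}$ used below.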
By choosing a point in this segment, we can consider the holonomy group of $\fol$ generated by the local holonomy maps associated to the separatrices of the singularities of $\fol$ lying in $l^+_{p,q}$ (in particular $p,q$). This group is Abelian. Otherwise, a non-trivial commutator in this group would be ``parabolic'' in the sense that it is tangent to the identity. As in Lemma~(\ref{4.5lema2}), the existence of this local diffeomorphism would yield a contradiction since $l^+ \subset \calk$ and $T$ is diffuse with $\calk$ contained in its support. Clearly the local holonomy maps associated to the above mentioned singularities can be identified with elements of ${\rm Diff}\, (\C ,0)$ by appropriately choosing local transverse sections and (ordinary) Dulac transforms. The fact that $\lambda \in \R \setminus \Q$ then implies the existence of a holonomy map $h$ in this group that, under the above identification, is an element of ${\rm Diff}\, (\C ,0)$ whose linear part is an irrational rotation. Then the following elementary statement holds: \noindent \textsc{Fact 1}: Given an arbitrary $l \in \N^{\ast}$, there is a constant $C = C(l ,2\pi \lambda)$ such that, for every $r \in \R_+$ sufficiently small and $z \in \C$ with $\vert z \vert = r$, the sets $B_z (C .r), h(B_z (C.r)), \ldots , h^{l-1} (B_z (C.r))$ are pairwise disjoint (where $B_z (C.r)$ stands for the ball about $z$ of radius $C.r$). Now consider a strictly monotone sequence $\{ r_j \} \subset \R_+$ converging to {\it zero}. For each $j$, denote by $B (r_j) \subset \Sigma_{\rm in}$ the ball of radius $r_j$ about $0 \simeq (1,0) \in \Sigma_{\rm in}$. Next recall that $(1, v_n) \in \Sigma_{\rm in}$ is the sequence of the ``first returns'' of $l^+$ to $U_E$. \vspace{0.2cm} \noindent {\it Proof of Proposition~\ref{l+closed} when $\lambda \in \R \setminus \Q$}\,: We begin by fixing $l \in \N$ larger than $2\pi/ \lambda$. Next we consider a constant $C = C(l ,2\pi \lambda)$ as in Fact~1.
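Although Fact~1 is standard, let us sketch the idea, under the normalization $h(z) = e^{2\pi i \lambda} z + O(z^2)$. Set $$ d = d(l, \lambda) = \min_{1 \leq m \leq l-1} \vert e^{2\pi i m \lambda} - 1 \vert \, , $$ which is strictly positive precisely because $\lambda$ is irrational. For $\vert z \vert = r$ small, each $h^k (z)$, $0 \leq k \leq l-1$, differs from $e^{2\pi i k \lambda} z$ by $O(r^2)$, so that any two of these points lie at distance at least $d\, r - O(r^2)$ from each other. Since, in addition, each $h^k$ distorts distances on $B_z (C.r)$ by a factor $1 + O(r)$ only, any choice of constant $C < d/4$ makes the sets $B_z (C.r), h(B_z (C.r)), \ldots , h^{l-1} (B_z (C.r))$ pairwise disjoint once $r$ is small enough.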
Finally, let us denote by $\mu ,\nu$ suitable measures representing the current $T$ on the transverse sections $\Sigma_{\rm in}, \Sigma_{\rm out}$ respectively. Consider a ball $B(r) \subset \Sigma_{\rm in}$ with $r$ very small. Let $V_{\alpha} (r) \subset B(r) \subset \Sigma_{\rm in}$ be a sector of angle $\alpha < 2\pi \lambda$ and radius $r$. The invariance of $\mu$ under the local diffeomorphism $h$ obtained as the holonomy map of $\{ v=0\}$ implies that, for every $\epsilon >0$ fixed, one has \begin{equation} \frac{\mu (V_{\alpha} (r))}{\mu (B(r))} \geq (1-\epsilon) \lambda \label{lafoi1} \end{equation} provided that $r$ is very small and that $\alpha$ is sufficiently close to $2\pi \lambda$. Now let us choose $j_0 \in \N$ very large and denote by $n_0$ the smallest positive integer $n$ such that $v_n \in B(r_{j_0})$. Set $r_0 = \vert v_{n_0} \vert$ and denote by $B(r_0)$ (resp. $B(2r_0)$) the ball of radius $r_0$ (resp. $2r_0$) about $0 \simeq (1,0) \in \Sigma_{\rm in}$. For $\alpha$ very close to $2\pi \lambda$, let us denote by $V_{\alpha} (2r_0)$ the sector of angle $\alpha$ and radius $2r_0$ which is divided into two equal parts by the semi-line joining $0$ to $v_{n_0}$. Modulo taking $r_0$ sufficiently small (i.e. $j_0$ large enough), we can consider a generalized Dulac transform ${\rm GDul}$ which is well-defined on $V_{\alpha} (2r_0)$ (w.r.t. some path $c$ fixed once and for all). The image $W(2r_0)$ of $V_{\alpha} (2r_0)$ under ${\rm GDul}$ is contained in a disc $B' (r_0') \subset \Sigma_{\rm out}$ of radius $r_0' \simeq r_0^{1/\lambda}$ (cf. Proposition~\ref{4.5prop2}). Again by choosing $r_0$ small enough, this estimate yields $r_0' < 2C r_0$ where $C$ is the constant fixed above. Also we know that $\nu(W(2r_0)) > (1- \epsilon) \mu (V_{\alpha} (2r_0))$. Finally let $c: [0,1] \rightarrow \calk^0$ be a parametrization of the segment of trajectory $l^+$ going from the first point in which $l^+$ intersects $\Sigma_{\rm out}$ to the point $(1,v_{n_0}) \in \Sigma_{\rm in}$.
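Before carrying out the remaining estimates, it may help to record the chain of inequalities from which the contradiction will be derived in this case. The $l$ iterated images of ${\rm Hol}\, (c) (W(2r_0))$ under $h$ will turn out to be pairwise disjoint and contained in $B(2r_0)$, while $\mu$ is invariant under both $h$ and the holonomy of $\fol$. Hence $$ \mu (B(2r_0)) \, \geq \, \sum_{k=0}^{l-1} \mu \left[ h^k \big( {\rm Hol}\, (c) (W(2r_0)) \big) \right] \, = \, l \, \mu \left[ {\rm Hol}\, (c) (W(2r_0)) \right] \, \geq \, l (1-\epsilon) \lambda \, \mu (B(2r_0)) \, . $$ Since $l > 2\pi /\lambda > 1/\lambda$, we have $l (1-\epsilon) \lambda >1$ for $\epsilon$ small, which is impossible as soon as $\mu (B(2r_0)) >0$; the latter holds since $(1,0) \in \calk^0$ lies in the support of $T$.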
Note that $c$ remains away from the neighborhood $U_E$. In fact, its distance to $U_E$ is bounded below by a uniform constant times $r_0$. In particular, for $r_0$ small enough, this implies that the holonomy map ${\rm Hol}\, (c(t))$ is well-defined on $W(2r_0)$ for every $t \in [0,1]$. Besides, Theorem~\ref{Totalversionofblm} also ensures that ${\rm Hol}\, (c) (W(2r_0))$ is contained in a disc of radius less than $2r_0'$ about $v_{n_0} \simeq (1,v_{n_0}) \in \Sigma_{\rm in}$. Since $\vert v_{n_0} \vert =r_0$, this ensures that ${\rm Hol}\, (c) (W(2r_0)) \subset B(2r_0)$. Furthermore we have $\mu [{\rm Hol}\, (c) (W(2r_0))] = \nu (W(2r_0))$ since $\mu, \, \nu$ are local transverse measures representing the invariant current $T$. Thanks to Estimate~(\ref{lafoi1}), it follows that \begin{equation} \mu [{\rm Hol}\, (c) (W(2r_0))] \geq (1-\epsilon) \lambda \mu (B(2r_0)) \, . \label{lafoi2} \end{equation} Finally, since $r_0' < 2C r_0$, the set ${\rm Hol}\, (c) (W(2r_0))$ has $l$ pairwise disjoint images under the iterates of the holonomy map $h$. Since all these images have the same $\mu$-measure, the total measure of their union is $l \, \mu [{\rm Hol}\, (c) (W(2r_0))] > \mu (B (2r_0))$ in view of the choice of $l$ and for $\epsilon$ very small. This yields the desired contradiction since the union of these images is contained in $B (2r_0)$. Therefore our statement is proved in the present case.\qed \vspace{0.2cm} \noindent $\bullet$ Let us now suppose that $\lambda \in \Q$. This case is somewhat similar to the preceding one. The main difference is that Fact~1 no longer holds. On the other hand, we know that $\fol$ is linearizable around every singularity in $l^+_{p,q}$ (Proposition~\ref{4.5prop1}). Yet the local holonomy maps associated to these singularities generate an Abelian group which, containing no parabolic element (cf. the preceding case) and having linear parts that are roots of unity, is finite and hence conjugate to a (finite) group of rotations.
This will allow us to make precise asymptotic calculations so as to dispense with the ``$\epsilon$ margin'' involved in the preceding discussion. In fact, standard arguments involving this group and the nature of the singularities of $\fol$ contained in $E$ show that the restriction of $\fol$ to $U_E$ admits a non-constant holomorphic first integral (see \cite{mamo}, \cite{paul}). The existence of this integral however will not be necessary in what follows. Let us resume the notations of the case where $\lambda$ was irrational. Since the above mentioned holonomy group associated to the segment $l^+_{p,q}$ is conjugate to a finite group of rotations, we can find a local diffeomorphism $h$ in this group which is itself a rotation of angle $2\pi \lambda$. Let $V_{2\pi \lambda} (r) \subset \Sigma_{\rm in}$ be a sector of angle $2\pi \lambda$ and radius $r$. If $B (r) \subset \Sigma_{\rm in}$ is the corresponding ball of radius $r$, we obviously have $\mu [V_{2\pi \lambda} (r) ] = \lambda \mu (B(r))$. \vspace{0.2cm} \noindent {\it Proof of Proposition~\ref{l+closed} when $\lambda \in \Q$}\,: Again choose $j_0 \in \N$ very large and denote by $n_0$ the smallest positive integer $n$ such that $v_n \in B(r_{j_0})$. Set $r_0 = \vert v_{n_0} \vert$. Next let $V_{2\pi \lambda} (2r_0)$ be the sector of angle $2\pi \lambda$ and radius $2r_0$ which is divided into two equal parts by the semi-line joining $0$ to $v_{n_0}$. Modulo taking $r_0$ sufficiently small (i.e. $j_0$ large enough), we can consider a generalized Dulac transform ${\rm GDul}$ which is well-defined on $V_{2\pi \lambda} (2r_0)$ (w.r.t. some path $c$ fixed once and for all). The image $W(2r_0)$ of $V_{2\pi \lambda} (2r_0)$ under ${\rm GDul}$ is contained in a disc $B' (r_0') \subset \Sigma_{\rm out}$ of radius $r_0' \simeq r_0^{1/\lambda}$ (cf. Proposition~\ref{4.5prop2}). We note that the possibility of having $\lambda =1$ is not {\it a priori} excluded, so we first suppose $\lambda < 1$.
As in the proof of the case ``$\lambda$ irrational'', the above mentioned disc is taken by the holonomy of $l^+$ to a set ${\rm Hol}\, (c) (W(2r_0)) \subset B(2r_0) \subset \Sigma_{\rm in}$. Furthermore we have $$ \lambda \mu (B(2r_0)) = \mu [V_{2\pi \lambda} (2r_0)] = \nu [W(2r_0)] = \mu [{\rm Hol}\, (c) (W(2r_0))] \, . $$ However the set ${\rm Hol}\, (c) (W(2r_0))$ has as many pairwise disjoint images under the iterations of the rational rotation $h$ of angle $2\pi \lambda$ as the denominator of $\lambda \in \Q$. Since they are all contained in $B(2r_0)$, it follows that $\lambda$ is the inverse of an integer. In this case $\mu [\bigcup_{i=1}^{1/\lambda} h^i ({\rm Hol}\, (c) (W(2r_0)))] = \mu (B (2r_0))$. Finally, because $0 \simeq (1,0) \in \Sigma_{\rm in}$ is not contained in the closure of the set $\bigcup_{i=1}^{1/\lambda} h^i ({\rm Hol}\, (c) (W(2r_0)))$, it follows that a sufficiently small neighborhood of $0 \in \Sigma_{\rm in}$ has $\mu$-measure {\it zero}. Since $\calk^0$ is contained in the support of $T$, it cannot intersect the neighborhood in question. This yields the desired contradiction since $(1,0) \simeq 0$ belongs to $\calk^0$. Finally, if $\lambda=1$, then ${\rm GDul}$ yields an identification of neighborhoods of the origin in $\Sigma_{\rm in}, \,\Sigma_{\rm out}$ which takes $\mu$ to $\nu$. In other words, the segments of $l^+$ passing through $U_E$ behave as if they remained ``away from the singular set of $\fol$''. In this case the conclusion follows simply from the contractive behavior of the holonomy of $\fol$ defined with the help of the segments of $l^+$ contained in the complement of $U_E$.
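For clarity, the arithmetic behind the preceding argument can be spelled out as follows. Write $\lambda = a/b \in \Q$ in lowest terms, with $0 < \lambda < 1$. The $b$ pairwise disjoint images of ${\rm Hol}\, (c) (W(2r_0))$ under the iterates of $h$ are all contained in $B(2r_0)$ and each of them has $\mu$-measure equal to $\lambda \, \mu (B(2r_0))$. Therefore $$ a \, \mu (B(2r_0)) \, = \, b \cdot \frac{a}{b} \, \mu (B(2r_0)) \, \leq \, \mu (B(2r_0)) \, , $$ so that $a =1$ provided that $\mu (B(2r_0)) >0$. In other words $\lambda = 1/b$ is indeed the inverse of an integer, as asserted above.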
The proof of Proposition~\ref{l+closed} in the case where $E$ is connected is now complete.\qed \vspace{0.1cm} \noindent {\it Proof of Proposition~\ref{l+closed} in the general case}\,: To finish this section, let us now show how the previous arguments can naturally be adapted to yield the proof of Proposition~\ref{l+closed} when $E$ contains more than one connected component. Suppose first that, instead of a single connected component $E$, there are two connected components $E_1, E_2$. We consider a trajectory $l^+$ of $\calh$ arriving at $E_1$ through a singularity $p_1$ and leaving $E_1$ through another singularity $q_1$ (as in Lemmas~\ref{6lema3} and~\ref{6lema3.5}). The first possibility that may occur is a ``saddle-connection'' between $q_1$ and a singularity $p_2 \in E_2$. More precisely, it may happen that the separatrix of $\fol$ at $q_1$ which is transverse to $E_1$ coincides with a separatrix of another singularity $p_2 \in E_2$ of $\fol$. Then $l^+$ will arrive at $E_2$ through $p_2$. The solution of this first difficulty is provided by the proof of Proposition~\ref{4.5prop2}. In fact, by using the leaf of $\fol$ joining $q_1$ to $p_2$, we can define a new ``generalized Dulac transform'' encompassing both $E_1, E_2$ as if they were a single connected component. This definition is straightforward and the proof of Proposition~\ref{4.5prop2} shows that the resulting ``Dulac transform'' still satisfies the conditions given in the statement in question. Suppose now that there is no ``saddle-connection'' in the sense described above between $E_1 ,E_2$. The difficulty here arises from the fact that $l^+$ may accumulate on $E_2$ before returning to $E_1$. Again we suppose that $l^+$ arrives at $E_1$ (resp. $E_2$) through a singularity $p_1$ (resp. $p_2$) and leaves it through a singularity $q_1$ (resp. $q_2$). Let $\textsc{S}_{p_2}$ denote the separatrix of $\fol$ at $p_2$ which is transverse to $E_2$.
Again we keep the notations used in the course of this section. Let us then consider the image $W(2r_0)$ of $V_{\alpha} (2r_0)$ under the generalized Dulac transform associated to $E_1$, ${\rm GDul}_1$. As $l^+$ continues from $q_1$ to $p_2$, we let ${\rm Hol} \, (W(2r_0))$ denote the image of $W(2r_0)$ by the corresponding holonomy map. The special difficulty here is that, when approaching $p_2$, $l^+$ may become ``very close'' to $\textsc{S}_{p_2}$. In particular, with the obvious identifications, it may happen that ${\rm Hol} \, (W(2r_0))$ contains $\textsc{S}_{p_2}$. This situation prevents us from considering the ``Dulac transform'' associated to $E_2$, ${\rm GDul}_2$, as being defined on all of ${\rm Hol} \, (W(2r_0))$. To deal with this case, we proceed as follows. First we note that, in the difficult case, this phenomenon must occur for ``every sequence of returns'' of $l^+$ to the transverse section in which ${\rm GDul}_1$ is defined. Under this assumption, we substitute $l^+$ by a trajectory $l^+_2$ of $\calh$ which has properties analogous to those of $l^+$ and, furthermore, is contained in $\textsc{S}_{p_2}$. The existence of this trajectory is clear since $l^+$ accumulates on $\textsc{S}_{p_2}$. We then start our argument with $l^+_2$ so that the Dulac transform associated to $E_2$ is automatically well-defined. We claim that, for $l^+_2$, the Dulac transform associated to $E_1$ will also be well-defined on the appropriate domain. It is clear that the desired statement results easily from this claim. To check the claim, we note that $l^+$ is close to $l^+_2$ to the order of the diameter of $W(2r_0)$ near $p_2$. In turn, this diameter is small when compared to $r_0$ (cf. Proposition~\ref{4.5prop2}). If $l^+, l^+_2$ remain close to each other for all time, then $l^+_2$ reaches the domain of definition of ${\rm GDul}_1$ at a point ``very close'' to a point of return of $l^+$ to this domain (i.e.
the distance between these two points is small to an order higher than the distance of the return point to $E_1$). Since the diameter of the corresponding $W(2r_0), \, {\rm Hol} \, (W(2r_0))$ is also small, the claim follows at once. This therefore completes the argument modulo the assumption that $l^+, l^+_2$ remain close to each other for all time. This assumption, however, can always be made. In fact, we first observe that the leaf $L_2$ of $\fol$ containing $l_2^+$ approaches the trajectory $l^+$ due to the contracting behavior of the holonomy along $l^+$. By a simple argument of continuous dependence for solutions of differential equations, $l^+, \, l_2^+$ remain close for an {\it a priori}\, fixed period of time. But at the end of this period of time, we can modify $l^+_2$ inside $L_2$ by adding a short line (transverse to $\calh$) so as to bring the modified trajectory $l_2^+$ close again to $l^+$. This new trajectory $l_2^+$ satisfies all the previous requirements and establishes the claim. Now it is clear that the existence of several connected components $E_1, E_2, \ldots$ does not pose any new intrinsic difficulty. The proof of Proposition~\ref{l+closed} is finally completed.\qed \section{Proofs for the main results} To be able to prove the theorems stated in the Introduction, we shall need to consider the closed trajectory $l^+$ whose existence is ensured by Proposition~\ref{l+closed}. This trajectory will be referred to as a {\it singular}\, closed trajectory since it passes through the singularities of $\fol$. In what follows, we shall keep the notations and the terminology of the preceding section. Denote by $L_0, L_1, \ldots, L_m$ the leaves of $\fol$ that contain a non-trivial segment of $l^+$. For $i=0,\ldots, m-1$, $L_i$ intersects $L_{i+1}$ at a singularity of $\fol$ belonging to the Siegel domain. Also $L_m$ intersects $L_0$ at a Siegel singularity so that the leaves $L_0, L_1, \ldots, L_m$ form a loop by means of their saddle-connections.
Fix a base point $p \in l^+ \cap L_0$ and consider a local transverse section $\Sigma$ at $p$. To prove our main results we are going to consider the pseudogroup $\Gamma$ of transformations of $\Sigma$ obtained by the collection of first return maps over paths contained in the leaves $L_0, L_1, \ldots, L_m$. More generally denote by $\mathcal{L}$ the union of the (finitely many) leaves of $\fol$ defined by the following rules: \begin{itemize} \item $L_0, L_1, \ldots, L_m$ belong to $\mathcal{L}$. \item If $L$ belongs to $\mathcal{L}$ and $L$ (locally) defines a separatrix for a Siegel singularity $p$ of $\fol$, then the global leaf obtained from the other separatrix of $\fol$ at $p$ must also belong to $\mathcal{L}$. \end{itemize} \noindent The pseudogroup $\Gamma$ is then obtained by means of first return maps defined over all paths contained in $\mathcal{L}$. A first element $f$ of $\Gamma$ corresponds, of course, to the singular loop $l^+$. As already seen, we can suppose that $f$ is a (ramified) map of the form $f(z) = z^{\lambda} (1 + u(z))$ where $\lambda > 1$ and $u(0)=0$ ($u$ being defined on a neighborhood of $0 \in \C$). Note that $\lambda$ need not be an integer so that $f$ should be thought of as a ``ramified'' map. Yet, in sectors of angle slightly smaller than $2\pi /\lambda$, $f$ is well-defined and one-to-one onto its image. Another element of $\Gamma$, denoted by $g$, corresponds to the local holonomy map arising from the singularities of $L_0, \ldots, L_m$. Observe that at least one of the Siegel singularities of $\fol$ lying in $l^+$ must have eigenvalues different from $1,-1$. In fact, otherwise $f$ would have no ramification and thus it would consist of a hyperbolic contraction defined on a neighborhood of $p \in \Sigma$. Therefore every invariant measure on $\Sigma$ would automatically be concentrated at $p$, which is impossible.
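The last assertion can be checked in a toy numerical experiment (the contraction below is our own illustrative choice): under a contraction all orbits converge to the fixed point, so an invariant probability measure without a Dirac component at the fixed point cannot exist.

```python
# Toy hyperbolic contraction h(z) = 0.5 z + 0.1 z^2 (illustrative choice):
# every orbit in a small disc converges to the fixed point 0, so any
# invariant probability measure must be concentrated at 0.
def contraction(z):
    return 0.5 * z + 0.1 * z * z

points = [0.3 + 0.2j, -0.4j, 0.25 - 0.1j]
orbits = []
for z in points:
    for _ in range(60):
        z = contraction(z)
    orbits.append(abs(z))

print(max(orbits))  # essentially zero
```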
Summarizing, we conclude that $\Gamma$ contains an element $g$ defined about $p \simeq 0 \in \C$ and having the form $$ g(z) = e^{2\pi i \alpha} z + {\rm h.o.t.} $$ with $\alpha \in (0,1)$. This leads us to study the dynamics of a pseudogroup $\Gamma$ containing elements $f,g$ as above. Recall that $\lambda >1$. If $\lambda$ were an integer, then the classical theorem of B\"ottcher would provide coordinates where $f(z) =z^{\lambda}$. However, in general, the map $f$ is ramified so that it is well-defined only on suitable sectors. The ``positions'' of the sectors where we want to define a particular determination of $f$ are naturally permuted by means of the local holonomy maps associated to the Siegel singularities of $\fol$ lying in $l^+$. Indeed, the ambiguity in the definition of $f$ as compositions of ordinary holonomy maps and suitable Dulac transforms is precisely codified by the local holonomy maps arising from the mentioned singularities. In particular two different determinations of $f$ commute in the obvious sense with the corresponding local holonomy maps. These elementary facts will freely be used in what follows. Now, even though $\lambda$ is not an integer, the method of B\"ottcher still provides a conjugacy between $f$ and $z \mapsto z^{\lambda}$ over appropriate sectors. Whereas the conjugacy map is clearly not defined about $0 \in \C$, it has all the natural asymptotic properties at $0 \in \C$. By using one of these coordinates, we can suppose that $\Gamma$ contains the map $f(z) =z^{\lambda}$ along with at least one map $g$ of the form $g(z) = e^{2\pi i \alpha} z +r(z)$ where $\alpha \in (0,1)$ and $\Vert r(z) \Vert \leq C \Vert z \Vert^2$ for some constant $C$. Furthermore different determinations of $f$ are naturally permuted by $g$. Before continuing let us make some elementary remarks about the function ``$k^{\rm th}$-root''. More precisely let $k \in \R$, $k > 1$, be fixed.
Consider the map $z \longmapsto (1 + z)^{1/k}$ which is well-defined for $\Vert z \Vert < 1/2$. The corresponding derivative is simply $(1+z)^{(1-k)/k} /k$. In particular, for $\Vert z \Vert < 1/2$, the norm of its derivative is uniformly bounded by \begin{equation} \frac{1}{k} 2^{(k -1)/k} \leq \frac{2}{k} \, . \label{elementarybound} \end{equation} Next consider the element $h_1$ of $\Gamma$ defined by $h_1 (z) = f^{-1} \circ g \circ f(z)$ and note that $h_1$ is well-defined on a uniform sector (slightly smaller than the sector in which $f$ was defined). More generally different determinations of $h_1$ are naturally permuted by $g$ since so are the determinations of $f$. Similarly we define $$ h_n (z) = f^{-n} \circ g \circ f^n(z) \; . $$ Our first task is to show that the elements $h_n$ are defined on a uniform domain and that they converge to the identity on this domain. For this let us set $g(z) = e^{2\pi i \alpha} z + c_2 z^2 + c_3 z^3 + \cdots$. Now note that $h_1$ admits the form $$ h_1 (z) = e^{2\pi i \alpha/\lambda} z \left( 1 + e^{-2\pi i \alpha} (c_2z^{\lambda} + c_3 z^{2\lambda} + \cdots) \right)^{1/\lambda} \; . $$ The expression $e^{-2\pi i \alpha} (c_2z^{\lambda} + c_3 z^{2\lambda} + \cdots) = e^{-2\pi i \alpha} r(z^{\lambda})/z^{\lambda}$ clearly has norm less than $1/2$ for $\Vert z \Vert$ sufficiently small. In particular $h_1 (z)$ is actually holomorphic on a neighborhood of $0 \in \C$. In addition, Estimate~(\ref{elementarybound}) yields \begin{equation} \Vert h_1 (z) - e^{2\pi i \alpha/\lambda} z \Vert \leq \frac{2}{\lambda} C \Vert z \Vert^{\lambda} \label{elementarybound2} \end{equation} on the same domain.
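Estimate~(\ref{elementarybound2}) can be probed numerically. In the sketch below (all parameter values are our own illustrative choices) we take $\lambda = 3/2$, $\alpha = 0.3$ and $g(w) = e^{2\pi i \alpha} w + c_2 w^2$ with $c_2 = 0.2$, so that $C = 0.2$, and evaluate $h_1 = f^{-1} \circ g \circ f$ at small real points with the principal branch:

```python
import cmath
import math

LAM, ALPHA, C2 = 1.5, 0.3, 0.2  # illustrative parameters

def f(z):
    return cmath.exp(LAM * cmath.log(z))       # z**LAM, principal branch

def f_inv(w):
    return cmath.exp(cmath.log(w) / LAM)       # w**(1/LAM), principal branch

def g(w):
    return cmath.exp(2j * math.pi * ALPHA) * w + C2 * w * w

rot = cmath.exp(2j * math.pi * ALPHA / LAM)    # linear part of h_1

ok = True
for x in (0.05, 0.1, 0.2):
    z = complex(x, 0.0)
    h1 = f_inv(g(f(z)))
    lhs = abs(h1 - rot * z)
    rhs = (2.0 / LAM) * C2 * abs(z) ** LAM     # bound (2C/lambda)|z|^lambda
    ok = ok and (lhs <= rhs)
print(ok)
```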
A direct inspection shows that $h_n (z)$ is holomorphic on the same neighborhood of $0 \in \C$ and that it satisfies the following estimate \begin{equation} \Vert h_n (z) - e^{2\pi i \alpha/\lambda^n} z \Vert \leq \frac{2}{\lambda^n} C \Vert z \Vert^{\lambda^{2n-1}} \; .\label{elementarybound3} \end{equation} Because $\lambda >1$, we obtain \begin{lema} \label{lemma7.A} For $\tau >0$ sufficiently small, all the $h_n$ are holomorphic and well-defined on the disc $B_{\tau} (0)$ of radius $\tau$ about $0 \in \C$. Furthermore these diffeomorphisms converge uniformly to the identity on $B_{\tau} (0)$.\qed \end{lema} Recall that a vector field $Y$ defined on a neighborhood $U$ of $0 \in \C$ is said to {\it belong to the closure of $\Gamma$ (relative to $U$)}\, if for every $V \subset U \subset \C$ and $t_0 \in \R_+$ such that $\phi_Y^{t} (V)$ is well-defined for all $0 \leq t \leq t_0$, the map $\phi_Y^{t_0} : V \rightarrow \phi_Y^{t_0} (V) \subset U$ is a uniform limit of elements of $\Gamma$ defined on $V$. Here $\phi_Y^{t}$ stands for the local flow generated by $Y$. From the definition it follows that $\phi_Y^{t_0}$ is holomorphic as a uniform limit of holomorphic maps (contained in $\Gamma$). Next we have: \begin{prop} \label{prop7.A} The vector field whose local flow consists of rotations about $0 \in \C$ belongs to the closure of $\Gamma$ (relative to the disc $B_{\tau/2} (0)$). In other words, every rotation $R_{\beta} : z \mapsto e^{2\pi i \beta} z$ is a uniform limit on $B_{\tau/2} (0)$ of actual elements of $\Gamma$. \end{prop} \noindent {\it Proof}. Fix a rotation $R_{\beta} (z) = e^{2\pi i \beta} z$. We need to find a sequence of elements in $\Gamma$ that approximate $R_{\beta}$ on $B_{\tau/2} (0)$. This sequence can explicitly be obtained as follows. For $n$ large enough let $k_n$ be the integral part of $\beta \lambda^n/\alpha$.
Clearly the linear part of $h_n^{k_n}$ at $0 \in \C$ is a rotation of angle $[\beta \lambda^n/\alpha]\alpha/\lambda^n = k_n \alpha/\lambda^n$. In particular the difference $\vert \beta - k_n \alpha/\lambda^n \vert$ is bounded by $\alpha/\lambda^n$ which, in turn, tends to zero when $n \rightarrow \infty$ (since $\lambda >1$). Therefore to establish the proposition it suffices to check that the sequence $\{ h_n^{k_n}\}_{n \in \N} \subset \Gamma$ satisfies the two conditions below. \begin{enumerate} \item For $n$ very large, $h_n^{k_n}$ is well-defined on $B_{\tau/2} (0)$. \item On $B_{\tau/2} (0)$, $h_n^{k_n}$ converges uniformly towards its own linear part at $0 \in \C$. \end{enumerate} These conditions will simultaneously be verified as consequences of Estimate~(\ref{elementarybound3}). To abridge notations, denote by $R_n$ the rotation of angle $\alpha/\lambda^n$ about $0 \in \C$. The linear character of $R_n$ gives $D_zR_n =R_n$ for every $z \in \C$. In particular the norm $\Vert D_zR_n \Vert$ is constantly equal to~$1$. Next observe that, for $\Vert z \Vert$ sufficiently small, Estimate~(\ref{elementarybound3}) yields \begin{eqnarray*} \Vert h_n^2 (z) - R_n^2 (z) \Vert & = & \Vert h_n^2 (z) - R_n \circ h_n (z) + R_n \circ h_n (z) - R_n^2 (z) \Vert \\ & \leq & \Vert (h_n - R_n) \circ h_n (z) \Vert + \Vert R_n (h_n (z)) -R_n (R_n (z)) \Vert \\ & \leq & \frac{2C}{\lambda^n} \Vert h_n (z)\Vert^{\lambda^{2n-1}} + \sup_{B_{\tau/2} (0)} \Vert DR_n\Vert \cdot \Vert h_n (z) - R_n (z) \Vert \\ & \leq & \frac{2C}{\lambda^n} \Vert h_n (z)\Vert^{\lambda^{2n-1}} + \frac{2C}{\lambda^n} \Vert z \Vert^{\lambda^{2n-1}} \\ & = & \frac{2C}{\lambda^n} (\Vert h_n (z)\Vert^{\lambda^{2n-1}} + \Vert z \Vert^{\lambda^{2n-1}}) \, .
\end{eqnarray*} If we write $\Vert h_n^3 (z) -R_n^3 (z) \Vert \leq \Vert h_n (h_n^2(z)) - R_n (h_n^2 (z)) \Vert + \Vert R_n (h_n^2 (z)) -R_n (R_n^2(z)) \Vert$ and repeat the above procedure, it follows that $$ \Vert h_n^3 (z) -R_n^3 (z) \Vert \leq \frac{2C}{\lambda^n} \Vert h_n^2 (z) \Vert^{\lambda^{2n-1}} + \frac{2C}{\lambda^n} (\Vert h_n (z)\Vert^{\lambda^{2n-1}} + \Vert z \Vert^{\lambda^{2n-1}}) \; . $$ By induction, if $l$ is such that all the iterates $h_n (z), h_n^2 (z), \ldots , h_n^{l-1} (z)$ remain in the disc of radius $\tau$ for every $z$ with $\Vert z \Vert < \tau/2$, we derive the following estimate \begin{eqnarray} \Vert h_n^l (z) -R_n^l (z) \Vert & \leq & \frac{2C}{\lambda^n} \left( \Vert z \Vert^{\lambda^{2n-1}} + \Vert h_n (z)\Vert^{\lambda^{2n-1}} + \cdots + \Vert h_n^{l-1} (z)\Vert^{\lambda^{2n-1}} \right) \label{elementarybound4} \\ & \leq & \frac{2lC}{\lambda^n} \tau^{\lambda^{2n-1}} \; . \label{elementarybound5} \end{eqnarray} For $\tau> 0$ small and fixed, we can take $l > k_n$. In fact, since $k_n < \beta \lambda^n/\alpha$, we obtain for $n$ sufficiently large $$ \Vert h_n^{k_n} (z) -R_n^{k_n} (z) \Vert \leq \frac{2\beta C}{\alpha} \tau^{\lambda^{2n-1}} \leq \frac{\tau}{2} \; . $$ Furthermore $\Vert R_n^{k_n} (z) \Vert < \tau/2$ provided that $\Vert z \Vert < \tau/2$. This remark combines with the preceding estimate to guarantee that $h_n^{k_n}$ is well-defined on $B_{\tau/2} (0)$ for $n$ large as above. Since the right hand side in~(\ref{elementarybound5}) tends to zero as $n \rightarrow \infty$ ($\lambda > 1$), we conclude that $h_n^{k_n} - R_n^{k_n}$ converges uniformly to zero on $B_{\tau/2} (0)$; since the rotation angles $k_n \alpha/\lambda^n$ converge to $\beta$, the sequence $\{ h_n^{k_n} \}$ converges uniformly towards $R_{\beta}$ on $B_{\tau/2} (0)$. This finishes the proof of the proposition.\qed Let us denote by $\overline{\Gamma}$ the closure of $\Gamma$ (relative to $B_{\tau/2} (0)$). Naturally the content of Proposition~\ref{prop7.A} is that all the rotations $R_{\beta} : z \mapsto e^{2\pi i \beta} z$ belong to $\overline{\Gamma}$.
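The telescoping mechanism behind Estimates~(\ref{elementarybound4}) and~(\ref{elementarybound5}) is elementary and easy to test numerically: if $h$ differs from an isometry $R$ by at most $\delta$ on a disc, then $h^l$ differs from $R^l$ by at most $l\delta$ as long as the orbit stays in the disc. A toy check (all constants below are our own illustrative choices):

```python
import cmath

THETA, EPS, TAU = 0.1, 1e-3, 0.5  # illustrative angle, perturbation, radius

def R(z):
    return cmath.exp(1j * THETA) * z          # exact rotation (an isometry)

def h(z):
    return R(z) + EPS * z * z                 # perturbed rotation: |h-R| <= EPS*|z|^2

delta = EPS * TAU ** 2                        # uniform bound for |h(z)-R(z)| on |z| <= TAU

z0 = 0.2 + 0.1j
zh, zr = z0, z0
steps = 50
for _ in range(steps):
    zh, zr = h(zh), R(zr)
    assert abs(zh) <= TAU                     # the orbit stays in the disc

print(abs(zh - zr) <= steps * delta)          # telescoped bound l*delta
```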
With this information in hand, let us go back to our original setting where $\Gamma$ is supposed to preserve a measure $\mu$ on $B_{\tau/2} (0)$ which, in addition, has no Dirac components. It is immediate to check that $\mu$ must also be preserved by all elements lying in $\overline{\Gamma}$. In particular $\mu$ is preserved by the group of rotations $z \mapsto e^{2\pi i \beta} z$. Consider polar coordinates $r,\theta$ for $B_{\tau/2} (0)$. Since the only measures on the circle that are preserved by the group of rotations are the constant multiples of the Haar measure, Fubini's theorem provides: \begin{lema} \label{lemma7.B} The measure $\mu$ is given in polar coordinates by $T(r) \, dr \, d\theta$ where $T$ is naturally identified with a $1$-dimensional distribution.\qed \end{lema} Clearly all measures $\mu$ having the form indicated in the preceding lemma are automatically invariant by the group of rotations. The fact that $\mu$ is also preserved by $f(z) =z^{\lambda}$ can then be translated into the functional equation \begin{equation} T (r) = \lambda^2 r^{\lambda -1} T(r^{\lambda}) \; . \label{elementarybound6} \end{equation} For instance, one checks at once that $T(r) = 1/(r \log^2 (1/r))$ satisfies~(\ref{elementarybound6}) for every $\lambda > 1$, since $\log (1/r^{\lambda}) = \lambda \log (1/r)$. We are now able to prove Theorem~A. \vspace{0.2cm} \noindent {\it Proof of Theorem~A}. Recall that $l^+$ is contained in a closed set $\calk$ that is minimal for $\fol$. Let us point out that the condition of having $\calk$ minimal has not been used so far. This condition, however, is going to play a role in the sequel. To prove the theorem we are going to show that $\calk$ is itself an algebraic curve. To do this consider the collection $\mathcal{L}$ of leaves of $\fol$ as defined in the beginning of this section. Clearly we have $l^+ \subset \mathcal{L}$. Next denote by $\overline{\mathcal{L}}$ the closure of $\mathcal{L}$ and consider the dimension of the set $\overline{\mathcal{L}} \setminus \mathcal{L}$ of the proper accumulation points of $\mathcal{L}$.
According to the classical Remmert-Stein theorem, if the codimension of $\overline{\mathcal{L}} \setminus \mathcal{L}$ is at least two, then $\overline{\mathcal{L}}$ is itself an analytic set so that the statement follows at once. Thus we can suppose that the codimension of $\overline{\mathcal{L}} \setminus \mathcal{L}$ is strictly less than two. In particular $\overline{\mathcal{L}} \setminus \mathcal{L}$ cannot be contained in the singular set of $\fol$. Thus we can consider a point $p \in \overline{\mathcal{L}} \setminus \mathcal{L}$ that is regular for $\fol$. By considering a plaque of $\fol$ containing $p$, we see that $\mathcal{L}$ must non-trivially accumulate on this plaque. Since $\calk$ is minimal, it then follows that $\mathcal{L}$ has non-trivial recurrence. In other words, on $\Sigma$ (identified with the disc $B_{\tau} (0) \subset \C$), we can consider a point $q \in B_{\tau /2} (0)$, $q \neq 0$, belonging to $\mathcal{L}$. To establish the statement we shall derive a contradiction between the preceding and the fact that $q$ belongs to the support of the (transverse) invariant measure $\mu$. To do this, consider the pseudogroup $\Gamma'$ of first return maps defined over paths in $\mathcal{L}$ but based at $q$. The pseudogroup $\Gamma'$ is conjugate to the pseudogroup $\Gamma$. The desired contradiction arises as follows. Recall that the structure of $\mu$ on $B_{\tau /2} (0)$ was already clarified by Lemma~\ref{lemma7.B} and Equation~(\ref{elementarybound6}). Because $\Gamma'$ is conjugate to $\Gamma$, the analogous conclusions have to apply to a neighborhood of $q$ as well. In particular $\mu$ is ``constant'' over suitable closed loops about $q$. Since $\mu$ is also ``constant'' over the initial circles about $0 \in B_{\tau/2} (0) \subset \C$, it follows that $\mu$ should be ``constant'' on a neighborhood of $q$, i.e. on a neighborhood of $q$ the measure $\mu$ must be a constant multiple of the Lebesgue measure.
This, however, contradicts the analogue of Equation~(\ref{elementarybound6}) corresponding to the point $q$. The theorem is proved.\qed Let us close this paper with the proof of Theorem~B. The method employed here is to a large extent borrowed from \cite{paul} to which we refer for further details. The prototype of an equation admitting a Liouvillean first integral (integrable in the sense of Liouville) is the $1$-dimensional equation $y' = a(x) y + b(x)$, for which the explicit solution $y(x) = e^{\int a(x)\, dx} \left( C + \int b(x) e^{-\int a(x)\, dx} \, dx \right)$ involving two integrals can be obtained. In the complex domain, these integrals are in general multivalued so that, loosely speaking, we can say that the equation admits a first integral that is ``twice multivalued''. Let us make this notion precise. Keeping the preceding notations, let us denote by $\calk$ the algebraic curve obtained from Theorem~A. In particular $\calk$ coincides with $\overline{\mathcal{L}}$ where $\mathcal{L}$ was defined in the beginning of the section. In the sequel consider meromorphic $1$-forms $\eta$ inducing $\fol$ but defined only on a neighborhood of $\calk$ in $M$. So, unlike the previously used form $\omega$, $\eta$ need not be globally defined. Consider also a collection of local representatives $\{ (U_a, \eta_a) \}$ for $\eta$ on a neighborhood of $\calk$. The compatibility condition among the local representatives $\eta_a$ is given by $\eta_a = u_{ab} \eta_b$, where $u_{ab} \in \mathcal{O}^{\ast} (U_a \cap U_b)$. A {\it holomorphic integrating factor}\, for $\fol$ on the mentioned neighborhood consists of a collection $\{ g_a\}$ of holomorphic functions, $g_a =u_{ab} g_b$, vanishing on $\calk$ and verifying $$ d\left( \frac{\eta_a}{g_a} \right) =0 \; . $$ The conditions above ensure that the local forms $\eta_a/g_a$ can be glued together to yield a closed meromorphic form defining $\fol$ on a neighborhood of $\calk$.
Therefore every primitive $H$ of the latter global form produces a multivalued first integral for $\fol$; in fact, one has $\eta_a \wedge dH =0$ for all $a$. A Liouvillean first integral for $\fol$ on a neighborhood of $\calk$ as above consists of going one step further into the preceding discussion. A natural definition taken from \cite{paul} is as follows. Consider the universal covering $\Pi: \mathcal{U} \rightarrow M \setminus \calk$ of $M \setminus \calk$. The sheaf $\mathcal{O}_{\mathcal{U}}$ induces a sheaf over $M$ corresponding to its direct image by $\Pi$ and by the natural inclusion $M \setminus \calk \hookrightarrow M$. The restriction of the latter sheaf to $\calk$ is going to be denoted by $\widetilde{\mathcal{O}}$. By construction an element belonging to the fiber of $\widetilde{\mathcal{O}}$ over a point $q \in \calk$ is represented by a holomorphic function on $\Pi^{-1} (V \setminus V \cap \calk)$ where $V$ stands for a neighborhood of $q \in M$. The property of unique lift of functions through $\Pi$ allows us to identify $\mathcal{O}_{\calk}$ with a subsheaf of $\widetilde{\mathcal{O}}$ whose elements are, in addition, invariant by the local covering automorphisms. With analogous constructions, we also define over $\calk$ the sheaves of (germs of) multivalued vector fields/holomorphic forms. Clearly the exterior differential $d$ can naturally be lifted to all above mentioned sheaves. An element $H$ of $\widetilde{\mathcal{O}} (V)$ is said to be a {\it primitive}\, if $dH$ is a $1$-form invariant by the local covering automorphisms and admitting a meromorphic extension to $\calk$. Similarly $H \in \widetilde{\mathcal{O}}$ is said to be an {\it exponential of primitive}\, if $dH/H$ is a $1$-form invariant by the local covering automorphisms and admitting a meromorphic extension to $\calk$. Let $\mathcal{S}^+ (V)$ (resp. $\mathcal{S}^{\times} (V)$) be the additive (resp. multiplicative) subgroup of primitives (resp.
exponentials of primitives) of $\widetilde{\mathcal{O}} (V)$. The {\it first Liouvillean extension}\, of $\mathcal{O} (V)$, denoted by $\mathcal{S}^1 (V)$, is the subring of $\widetilde{\mathcal{O}} (V)$ generated by $\mathcal{S}^+ (V), \, \mathcal{S}^{\times} (V)$. The resulting presheaf turns out to be a sheaf over $\calk$. This construction can be continued by induction to yield higher order Liouvillean extensions of $\mathcal{O} (V)$ but we shall not need those here (see \cite{paul}). A Liouvillean (or $1$-Liouvillean) integrating factor for $\fol$ on a neighborhood of $\calk$ consists of a collection $\{ g_a\}$ of elements in $\mathcal{S}^1 (U_a)$ such that $$ g_a = u_{ab} g_b \; \; \; \, {\rm and} \; \; \; \, d \left( \frac{\eta_a}{g_a} \right) = 0 \; . $$ A Liouvillean integrating factor is called {\it distinguished}\, if $g_a$ belongs to $\mathcal{S}^{\times} (V)$, i.e. if $dg_a /g_a$ is a meromorphic closed form. By using this terminology we can state a slightly more accurate version of Theorem~B. \begin{teo} \label{strengthenedTheoremB} Under the assumptions of Theorem~B the foliation $\fol$ admits a distinguished integrating factor. More precisely $\fol$ is given by a (Liouvillean) meromorphic closed form of type $dg_a/g_a$ where $g_a \in \mathcal{S}^{\times} (V)$. \end{teo} To prove Theorem~\ref{strengthenedTheoremB} consider the local transverse section $\Sigma$ through $p \in l^+$ along with the pseudogroup $\Gamma$ of first return maps over paths contained in $\mathcal{L}$ (recalling that $\overline{\mathcal{L}} = \calk$). Recall that $\Sigma$ is endowed with a coordinate $z$ in which the first return $f$ over $l^+$ becomes $f(z) =z^{\lambda}$ on suitable sectors. The new ingredient leading to the proof of the mentioned theorem is the following proposition. \begin{prop} \label{projectiveinvariance} The vector field $\mathcal{X} = z \partial/\partial z$ is projectively invariant by $\Gamma$.
In other words, if $h \in \Gamma$ then $$ h^{\ast} \mathcal{X} = c_h \mathcal{X} $$ for a constant $c_h$ and whenever both sides are defined. \end{prop} So as not to interrupt the discussion, we shall prove this proposition later. In the sequel we shall derive Theorem~\ref{strengthenedTheoremB}. To make the argument more transparent, suppose first that $\mathcal{Y}$ were a vector field on $\Sigma$ fully invariant by $\Gamma$, i.e. satisfying $h^{\ast} \mathcal{Y} = \mathcal{Y}$ for every $h \in \Gamma$. If this vector field exists, then it induces a vector field (or rather a $1$-parameter subgroup of automorphisms) on the leaf space of $\fol$ (restricted to a neighborhood of $\calk$, as will always be the case in what follows). More precisely, on a neighborhood of $\calk$ consider the sheaf $\Theta_{M \fol}$ consisting of germs of holomorphic vector fields tangent to $\calk$ and preserving the foliation $\fol$. If $\mathcal{Z}(V)$ is an element of $\Theta_{M \fol}$ then it verifies $L_{\mathcal{Z}(V)} \eta_a \wedge \eta_a=0$, where $L_{\mathcal{Z}(V)}$ denotes the Lie derivative, as a consequence of the fact that $\mathcal{Z}(V)$ preserves $\fol$. Similarly we denote by ${\rm Tang}_{M \fol}$ the subsheaf of $\Theta_{M \fol}$ constituted by those germs of vector fields that are tangent to $\fol$. The sheaf ${\rm Symm}_{M \fol}$ of ``symmetries'' of $\fol$ is then defined by means of the following exact sequence $$ 0 \longrightarrow {\rm Tang}_{M \fol} \longrightarrow \Theta_{M \fol} \longrightarrow {\rm Symm}_{M \fol} \longrightarrow 0 \, . $$ With this terminology, it is clear that the vector field $\mathcal{Y}$ defined on $\Sigma$ and invariant by the holonomy of $\fol$ naturally induces a section of ${\rm Symm}_{M \fol}$ which is still denoted by $\mathcal{Y}$. Next let a function $g_a$ be defined on each open set $U_a$ by the equation $g_a = \eta_a (\mathcal{Y})$.
Because $\mathcal{Y}$ is identified with a (global) section of ${\rm Symm}_{M \fol}$, it follows that $g_a =u_{ab} g_b$ so that the collection $\{ g_a\}$ forms a holomorphic integrating factor for $\fol$. In fact, the condition $L_{\mathcal{Y}} \eta_a \wedge \eta_a=0$ combines with the Cartan formula $L_{\mathcal{Y}} =di_{\mathcal{Y}} + i_{\mathcal{Y}} d$ to yield $$ d \left( \frac{\eta_a}{g_a} \right) = 0 \, . $$ Indeed, since $\eta_a \wedge d\eta_a$ vanishes identically, we can locally write $d\eta_a = \theta \wedge \eta_a$ for a suitable $1$-form $\theta$; the Cartan formula then gives $g_a \, d\eta_a - dg_a \wedge \eta_a = -(L_{\mathcal{Y}} \eta_a) \wedge \eta_a = 0$, which is equivalent to the vanishing of $d (\eta_a/g_a)$. In other words, the collection of $1$-forms $\{ \eta_a/g_a\}$ defines a closed meromorphic form $\eta$ defined on a neighborhood of the {\it regular part} of $\calk$ in $M$. However, the extension of this form to the singular points of $\fol$ lying in $\calk$ poses no difficulties since all these singularities must have two non-vanishing real eigenvalues as a consequence of the discussion in Section~4 (cf. also \cite{paul}). Finally a multivalued meromorphic first integral for $\fol$ can be obtained by means of the (multivalued) integral $$ \int \eta \; . $$ In particular it follows that the ambiguity in the definition of the mentioned first integral is precisely determined by the group of periods of $\eta$. \vspace{0.2cm} \noindent {\it Proof of Theorem~\ref{strengthenedTheoremB}}. Since a vector field $\mathcal{Y}$ on $\Sigma$ that is invariant by the pseudogroup $\Gamma$ need not exist, we shall generalize the preceding discussion to the projectively invariant vector field $\mathcal{X}$ whose existence is ensured by Proposition~\ref{projectiveinvariance}. Unlike the previous discussion, we are now going to exploit the fact that $\fol$ is given by a globally defined meromorphic form $\omega$. Thus, if we consider the collection of local forms $\{ \eta_a\}$ obtained from the restrictions of $\omega$, we conclude that the transition functions $u_{ab}$ are all identically equal to~$1$. Now on each open set $U_a$ we consider a projectively invariant vector field $\mathcal{X}_a$.
Thanks to Proposition~\ref{projectiveinvariance} this collection of vector fields can be chosen so that $\mathcal{X}_a =c_{ab} \mathcal{X}_b$ where all the (transition) functions $c_{ab}$ are constant. Again on the collection of open sets $\{ U_a\}$, we define the local functions $g_a = \eta_a (\mathcal{X}_a )$. Thus $g_a =c_{ab} g_b$. Now on each $U_a$ consider the (local) meromorphic $1$-form $\Omega_{1a} = dg_a/g_a$ (with simple poles over $\calk$). Since $g_a =c_{ab} g_b$ we conclude that $\Omega_{1a} = \Omega_{1b}$ so that these local forms glue together into a meromorphic form $\Omega_1$ defined on a neighborhood of $\calk$. It is straightforward to check that the following relations hold: $$ \eta_a \wedge \Omega_{1a} = d\eta_a \; \; \; \, {\rm and} \; \; \; \, d\Omega_{1a} = 0 \, . $$ In turn these relations are equivalent to the fact that the local functions $\{ g_a\}$ provide a distinguished Liouvillean integrating factor for $\fol$ on a neighborhood of $\calk$, since $u_{ab} =1$ and $\Omega_{1a} = \Omega_{1b}$, cf. \cite{paul}. For the same reason mentioned above the extension of these factors to the singularities of $\fol$ lying in $\calk$ poses no additional difficulty. The theorem is proved.\qed The reader will note that the condition $d\omega = \omega \wedge \Omega_1$ means that the restriction of $\Omega_1$ to the leaves of $\fol$ actually coincides with the foliated form $\omega_1$. As before we can use the collection $\{ g_a\}$ to obtain a multivalued meromorphic first integral for $\fol$. For this we consider the (multivalued) primitives $h_a = \int (\eta_a /g_a)$. Since the monodromy of the collection $\{ (U_a, g_a)\}$ amounts to multiplication by a constant ($g_a =c_{ab} g_b$), the corresponding monodromy acts on the collection $\{ h_a\}$ by affine transformations. Using these multivalued functions $\{ h_a \}$ we can define a (further multivalued) first integral analogous to the one found in the first case discussed above.
In slightly vague terms, the resulting first integral is given by $$ \int \frac{\omega}{\int \Omega_1} \, . $$ In particular, the ambiguity in the definition of the first integral above lies in the period groups of $\omega$ and $\Omega_1$. Let us now prove Proposition~\ref{projectiveinvariance}. \vspace{0.2cm} \noindent {\it Proof of Proposition~\ref{projectiveinvariance}}. Denote by $\phi_{\mathcal{X}}$ (resp. $\phi_{i\mathcal{X}}$) the real (resp. purely imaginary) flow generated by $\mathcal{X}$. Note that $\phi_{i\mathcal{X}}$ is the real $1$-parameter group consisting of rotations about $0 \in \C$. Proposition~\ref{prop7.A} then says that $\phi_{i\mathcal{X}}$ is contained in $\overline{\Gamma}$. Next suppose for a contradiction that $h \in \Gamma$ does not preserve $\mathcal{X}$ up to a multiplicative constant. By construction $\Gamma$ admits a finite generating set whose elements either are defined on a neighborhood of $0 \in \C$ or are defined on an appropriate sector $W$ (with vertex at $0 \in \C$). In the latter case, the element in question is holomorphic on $W$ and of the form $z \mapsto z^a (1 + u(z))$ where $a \in \C$ and $u$ is defined on a neighborhood of $0 \in \C$ with $u(0) =0$. In fact, the generators of $\Gamma$ that are not defined on a neighborhood of $0 \in \C$ are obtained by means of singular loops passing through Siegel singularities of $\fol$ in $\calk$, which leads to the general form mentioned above. Among these elements we have $f(z) = z^{\lambda}$ with $\lambda > 1$. In view of the preceding, we can suppose without loss of generality that either $h$ is defined on a neighborhood of $0 \in \C$ or is of the form $h(z) = z^a (1 + u(z))$. \vspace{0.1cm} \noindent {\it Claim 1}. $h$ does not preserve the orbits of $\phi_{i\mathcal{X}}$. \noindent {\it Proof of Claim 1}.
Since $h$ does not preserve $\mathcal{X}$ up to a constant factor, it follows that $$ h^{\ast} \mathcal{X} = c_h (z) \mathcal{X} $$ where $c_h (z)$ is a non-constant holomorphic function on its domain. By construction the purely imaginary flow associated to $h^{\ast} \mathcal{X}$ is contained in $\overline{\Gamma}$ since so is $\phi_{i\mathcal{X}}$. Denoting by $\mathcal{X}_1$ the (real) vector field associated to this (real) $1$-parameter group, we see that $\mathcal{X}_1$ is $\R$-linearly independent of $\mathcal{X}$ at points in the (non-empty) open set where $c_h$ takes values in $\C \setminus \R$. This proves the claim.\qed Next we have: \vspace{0.1cm} \noindent {\it Claim 2}. The vector fields $\mathcal{X}$ and $\mathcal{X}_1$ do not commute. Moreover, $[\mathcal{X}, \mathcal{X}_1]$ is not constant. \noindent {\it Proof of Claim 2}. Recall that $\mathcal{X} = z \partial /\partial z$ whereas $\mathcal{X}_1 = c_h (z)z \partial /\partial z$. Hence $$ [\mathcal{X}, \mathcal{X}_1] = -z^2 c_h'(z) \, \partial /\partial z \, . $$ Since we have assumed that $c_h$ is not constant, we conclude that $c_h'(z)$ is not identically zero and the claim follows.\qed Consider a (``generic'') point $p$ at which $\mathcal{X}, \; \mathcal{X}_1$ are $\R$-linearly independent and where $[\mathcal{X}, \mathcal{X}_1] (p) =(a+ib) \partial /\partial z$ with $a \neq 0$. The existence of these points follows from Claim~2. For fixed small reals $s,t$, consider the local diffeomorphism $D^{st}$ fixing $p$ that is obtained as follows: points close to $p$ are moved by following the (purely imaginary) flow of $\mathcal{X}$ during a time~$s$ and then we compose this with the purely imaginary flow of $\mathcal{X}_1$ during a time~$t$. Finally, we compose the mentioned map with the (always purely imaginary) flow of $\mathcal{X}$ (and then $\mathcal{X}_1$) during a time $s'$ close to $s$ (resp. $t'$ close to $t$) so as to have $p$ as a fixed point of the resulting map $D^{st}$.
Clearly the map $D^{st}$ belongs to $\overline{\Gamma}$ for small $s,t$. It must therefore preserve the transverse measure over $\Sigma$ identified with a neighborhood of $0\in \C$. Therefore, to derive a contradiction proving the statement, it suffices to show that $p$ is a hyperbolic fixed point provided that $s,t$ are small enough. To check this, simply note that the derivative of $D^{st}$ at $p$ is given by $$ 1 -st [\mathcal{X}, \mathcal{X}_1] (p) + o\, (s^2 +t^2) = 1 -st (a+ib) + o\, (s^2 +t^2)\, . $$ Since $\Vert 1 -st (a+ib) \Vert^2 = 1 -2sta + s^2 t^2 (a^2 +b^2)$ with $a\neq 0$, it follows that the norm of the derivative of $D^{st}$ at $p$ is different from~$1$ provided that $s,t$ are sufficiently small. This finishes the proof of the proposition.\qed \vspace{0.1cm} \centerline{\sc {\large Examples}} Let us close this paper with two classes of non-trivial examples of foliations for which Theorems~A and~B in the Introduction can immediately be applied. In the sequel consider a foliation $\fol$ on a surface $M$ along with a global meromorphic form $\omega$ defining $\fol$. Unless otherwise stated we choose $\omega$ so that its divisor of zeros and poles $(\omega)_{\infty} \cup (\omega)_0$ is disjoint from the finite set formed by the singularities of $\fol$. To further simplify the discussion, suppose also that all singularities of $\fol$ have at least one eigenvalue different from zero. In view of Seidenberg's theorem, our construction can easily be adapted to include foliations with degenerate singularities so that we shall not worry about them here. Obviously the simplest way to ensure the existence of $\calh$-trajectories with infinite length consists of eliminating the singular points of $\calh$ that give rise to either future endpoints or to past endpoints for trajectories of $\calh$. Under the above assumptions we have: \begin{itemize} \item All points giving rise to past-ends for trajectories of $\calh$ are contained in $(\omega)_{\infty}$.
\item All points giving rise to future-ends for trajectories of $\calh$ are contained in the union of $(\omega)_0$ with the singular set ${\rm Sing}\, (\fol)$ of $\fol$. \end{itemize} In addition, for a singular point of $\fol$ to yield future ends for $\calh$-trajectories, the ratio between its eigenvalues must belong to $\R_+^{\ast}$. To apply our theorems we need to have $(\omega)_{\infty} \neq \emptyset$ for otherwise $\omega$ is holomorphic and hence closed. Thus this particular choice of $\omega$ does not yield an associated foliation $\calh$. Hence a natural idea is to eliminate the possibility of future-ends for the trajectories of $\calh$. With this idea in mind we shall provide our first class of examples. \noindent {\sc Example 1}. Foliations with $(\omega)_0 = \emptyset$ and singularities whose eigenvalues have quotients in $\C \setminus \R_+^{\ast}$. We note that the class of foliations above includes those with singularities in the Siegel domain. Saddle-node singularities are also allowed for $\fol$. Suppose that $T$ is a diffuse closed current invariant by $\fol$. The first important remark to be made about $T$ concerns its cohomology class in $M$. In fact, under the conditions regarding the singularities of $\fol$ the following holds: $$ [T].[T] =0 $$ i.e. the self-intersection of $T$ vanishes. The proof of the above equation is easy and essentially amounts to the fact that Siegel singularities cannot contribute non-trivially to the self-intersection of a diffuse current, see for example \cite{marco}. Naturally, as seen in Section~4, all singularities of $\fol$ lying in the support of $T$ must belong to the Siegel domain since the support of a diffuse current as above cannot contain either hyperbolic singularities or saddle-nodes, cf. Section~4. This case is therefore of little interest for surfaces, such as $\C P (2)$, whose Picard group is cyclic.
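To see why, consider for example $M = \C P(2)$ with hyperplane class $H$. A non-trivial closed positive current satisfies $[T] = c\, [H]$ with $c = [T].[H] > 0$, since pairing $T$ with the Fubini--Study form computes the (positive) mass of $T$. Hence $$ [T].[T] = c^2\, [H].[H] = c^2 > 0 \, , $$ contradicting $[T].[T] = 0$; thus no diffuse invariant current exists in this case.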
However, for surfaces with larger Picard group, the condition about the self-intersection of $T$ conveys less information. For example, let $M$ be an affine elliptic $K3$ surface in $\C^3 \subset \C P(3)$ (i.e. the closure of $M$, still denoted by $M$, in $\C P(3)$ is a $K3$-surface). Indeed, we can choose $M$ to be the usual Fermat quartic. Next consider a polynomial vector field $X$ tangent to $M$ and having isolated singularities. The vector field $X$ induces a foliation $\fol$ over $M \subset \C P(3)$ whose singularities satisfy the above conditions modulo choosing $X$ ``generic''. By exploiting the triviality of the canonical bundle of $M$, we can easily find a $1$-form $\omega$ on $M$ defining $\fol$ and having empty divisor of zeros. Furthermore, the divisor of poles of $\omega$ coincides with the hyperplane section of $M$ and, in general, $\omega$ is not closed. Now if $T$ is a closed current invariant by $\fol$, the condition that $[T].[T] =0$ says that $[T]$ is the cohomology class of an elliptic fiber of $M$. However there is {\it a priori}\, no reason to conclude the existence of any compact leaf for $\fol$ unless Theorem~A is used. \noindent {\sc Example 2}. Foliations on $\C P(2)$ with singularities whose quotient of eigenvalues belongs to $\R_+$. A disadvantage of the construction employed in Example~1 is that, after all, it depends on the fact that only Siegel singularities can appear on the support of a diffuse (closed positive) invariant current. Similarly, it has long been known that a foliation of $\C P(2)$ all of whose singularities are hyperbolic cannot admit a diffuse current as above. To make significant progress with respect to these well-known results, it is interesting to allow the foliation to have simple singularities of type ``irrational focus''. In fact, these singularities may contribute non-trivially to the self-intersection of $T$ and they do not contradict the diffuse nature of $T$.
The example below is intended to show that our theorem can be used to provide new results in this context. To simplify, suppose that this happens at a single singularity $p$ with eigenvalues $1,1$ (i.e. in local coordinates about $p$, $\fol$ is given by the radial vector field $x\partial /\partial x + y \partial /\partial y$). The remaining singularities of $\fol$ are hyperbolic, of Siegel type, or saddle-nodes. The existence of $p$ prevents us from concluding that $[T].[T] =0$. Nonetheless, we claim the following: \noindent {\it Claim}. If $\fol$ admits a closed current $T$, then it must also possess an algebraic curve provided that the degree of $\fol$ is at least~$3$. \noindent {\it Proof}. Choose affine coordinates so that $p$ belongs to the corresponding line at infinity $\Delta$. In the affine $\C^2$ we choose a polynomial form $\omega$ representing $\fol$ and such that its components have only trivial common factors. Viewed as a meromorphic $1$-form on $\C P(2)$, the divisor of zeros of $\omega$ is empty whereas $\omega$ has poles of order $d-1$ over $\Delta$ where $d$ stands for the degree of $\fol$. Thanks to Theorem~A, to prove the claim it suffices to show that all the trajectories of the resulting foliation $\calh$ have infinite length. In turn, for this, it is enough to show that $p$ does not provide future endpoints for these trajectories. To check the latter claim, consider local coordinates $(x,y)$ about $p$, $\{ x=0\} \subset \Delta$, where $\omega = v(x,y) x^{1-d} (xdy - y dx)$ with $v(0,0) \neq 0$. If we blow up $\fol$ at $p$, the new foliation $\widetilde{\fol}$ no longer has singularities over the exceptional divisor, which, in fact, is transverse to $\widetilde{\fol}$. In standard $(x,t)$ coordinates for the blow-up, the pull-back of $\omega$ is given by $$ v(0,0) x^{3-d} dt \, . $$ In particular, the pull-back of $\omega$ does not have zeros over the exceptional divisor since $v(0,0) \neq 0$ and $d \geq 3$.
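For the reader's convenience, the coordinate computation behind the displayed pull-back goes as follows. In the blow-up chart $y = tx$ one has $$ x\, dy - y\, dx = x\, (t\, dx + x\, dt) - tx\, dx = x^2\, dt \, , $$ so that $$ \omega = v(x,y)\, x^{1-d} (x\, dy - y\, dx) \; \longmapsto \; v(x,tx)\, x^{3-d}\, dt \, , $$ which reduces to $v(0,0)\, x^{3-d} dt$ at leading order near the exceptional divisor $\{ x=0\}$.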
Therefore, the initial singularity $p$ does not provide future endpoints for the trajectories of $\calh$. The claim is proved.\qed Naturally, the eigenvalues of $\fol$ at $p$ may more generally be supposed to be of the form $1, \lambda$, where $\lambda \in \R_+^{\ast}$. Besides, if the degree of $\fol$ is at least~$4$, we can allow the existence of two (rather than one) irrational focus singularities for $\fol$. Several other combinations of these ideas can be used to provide new results on the structure of invariant curves for foliations as above.
\section{Introduction} AdS/CFT and its generalizations play a major role in recent developments of theoretical physics. Examples which are related to realistic physical objects are, however, still rare. The ultimate goal of the Kerr/CFT correspondence is to describe the black holes of our universe in terms of a dual two dimensional conformal field theory. The concrete proposal of \cite{Guica:2008mu} is that the near horizon region of a near-extremal Kerr black hole (the so-called near-NHEK geometry) is dual to a two dimensional conformal field theory. Even though we are still far from describing a real black hole, many tests supporting this conjecture have appeared in the literature so far (see \cite{Bredberg:2011hp} for a review). In particular, the scattering amplitudes for spinor fields computed in \cite{Hartman:2009nz} (see also \cite{Chen:2010ni}) were found to be in agreement with the conformal field theory result. Spinor fields in AdS/CFT and its non-relativistic generalizations are particularly interesting to study, as their correlation functions are often related to semi-realistic physical observables such as the spectral function. In this note we would like to revisit spinor fields in Kerr/CFT. We consider spinor fields in the near-NHEK geometry \begin{equation}\label{nearNHEK} ds^2=2J\Gamma\Big(-r(r+4\pi T_R)dt^2+\frac{dr^2}{r(r+4\pi T_R)}+d\theta^2+\Lambda^2\left(d\phi+(r+2\pi T_R)dt\right)^2\Big) \ , \end{equation} where \begin{equation*} \Gamma(\theta)=\frac{1+\cos^2\theta}{2} \ , \qquad \Lambda(\theta)=\frac{2\sin\theta}{1+\cos^2\theta} \ , \qquad\phi\sim\phi+2\pi,\ 0\le\theta\le\pi\ . \end{equation*} More concretely, we would like to calculate two-point correlation functions for spinor fields in this geometry. Recall that in AdS/CFT spinor field correlation functions are slightly more involved than correlation functions for scalars.
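Before recalling the relevant AdS/CFT machinery, it is convenient to record the volume factor of \eqref{nearNHEK}, which will be needed when evaluating boundary integrals. Writing $f(r) = r(r+4\pi T_R)$, a short determinant computation gives $$ -g = (2J\Gamma)^4\, \Lambda^2 \, , \qquad \sqrt{-g} = (2J\Gamma)^2 \Lambda = 2J^2 \sin\theta\, (1+\cos^2\theta) \, , $$ independently of $r$ and $T_R$: the factor $f(r)$ cancels between the $(t,\phi)$ block, whose determinant equals $-(2J\Gamma)^2\Lambda^2 f$, and $g_{rr} = 2J\Gamma/f$; in the last equality we used $\Gamma(\theta)\Lambda(\theta)=\sin\theta$.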
Let's recapitulate some highlights for spinor fields in AdS/CFT, which will come in handy later on (see \cite{Henningson:1998cd}, \cite{Mueck:1998iz}, \cite{Henneaux:1998ch}, \cite{Iqbal:2009fd} for more details). The key assumption of the correspondence is the equivalence between the partition functions of a CFT in $d$ dimensions and a bulk gravitational theory in $(d+1)$ dimensions \begin{equation}\label{eins} \langle \exp\left( \int d^d x\, \left(\bar\chi_0{\cal O}+\bar{\cal O}\chi_0\right)\right)\rangle_{{\rm QFT}}=e^{-S_{\rm{grav}}(\chi_0,\bar \chi_0)}. \end{equation} In this formula $\chi_0$ is the asymptotic value of the $(d+1)$-dimensional bulk spinor $\psi$ \begin{equation}\label{zwei} \lim_{r\rightarrow\infty}\, \psi\sim\chi_0 \, , \end{equation} that couples to the conformal field theory operator ${\cal{O}}$. The above formula tells us that to calculate correlation functions of the CFT operator ${\cal O}$, one needs to evaluate the gravitational action $S_{\rm{grav}}$ for solutions to the Dirac equation with proper boundary conditions. The gravitational action functional contains a bulk term described by the standard Dirac action that vanishes for solutions to the equations of motion. In addition there is a boundary term \cite{Henningson:1998cd}, \cite{Mueck:1998iz},\cite{Iqbal:2009fd} which is non-vanishing for solutions to the equations of motion. Correlation functions of the CFT operator $\cal{O}$ are determined by this boundary term. For example, the two-point function of two conformal field theory operators ${\cal O},\bar {\cal {O}}$ is given by the functional derivative of the boundary term $S_{\rm {bdry}}$ \begin{equation}\label{2ptfct} \langle {\cal O}\bar{\cal O}\rangle = \frac{\delta^2S_{\rm {bdry} }}{ \delta \bar \chi_0\delta {\chi}_0}. \end{equation} As nicely shown in \cite{Henneaux:1998ch}, the form of the gravitational boundary term is dictated by the variational principle.
More recently, it was shown that different boundary terms all satisfying the variational principle can be added to the bulk action \cite{Laia:2011zn}. Different boundary terms lead to different conformal field theories. Having the explicit form of the boundary term, the main challenge is to find solutions to the Dirac equation with proper boundary conditions. The situation here is a bit more involved than for a simple scalar, because bulk and boundary spinors live in different dimensions and thus have different numbers of components in the minimal representation. A formula like \eqref{zwei} needs to be interpreted with more care. Since $\psi$ is a $(d+1)$-dimensional spinor it contains twice as many degrees of freedom as $\chi_0$, which lives in $d$ dimensions. Only half of the components of $\psi$ can be fixed by $\chi_0$. The other half is determined in terms of the first by the Dirac equation. Therefore, $\psi$ is decomposed into two eigenstates of a projection operator \begin{equation} \psi=\psi_++\psi_-\ , \qquad \psi_\pm=\Gamma_\pm \psi, \quad {\rm with}\quad \Gamma_{\pm}=\frac{1}{2}(1\pm\Gamma^r). \end{equation} The explicit form of the projection operator depends on the dimension of the boundary. Details can be found in e.g.\ \cite{Iqbal:2009fd}. The upshot is that for generic values of the spinor mass $\mu$, the $\psi_+$ spinor is the leading component in the large $r$ expansion. This spinor corresponds to the source that is fixed by the boundary condition and which couples to the conformal field theory operator\footnote{There is a small range of values for the mass $\mu$ in which $\psi_-$ rather than $\psi_+$ is fixed by boundary conditions.} \begin{equation}\lim_{r\rightarrow \infty} r^{d/2-\mu}\psi_+=\chi_0\, . \end{equation} The spinor $\psi_-$ is determined in terms of $\psi_+$ by the Dirac equation and vanishes as it approaches the boundary.
Once the spinors solving the equations of motion with proper boundary conditions are known, the evaluation of the gravitational action functional (more precisely the boundary term) will lead to the CFT correlation functions, as previously mentioned. In a similar spirit, in this paper we show that a boundary term needs to be added to the Dirac action for spinor fields in the near-NHEK geometry for the variational principle to be satisfied. The boundary term is the key ingredient for the calculation of the fermionic correlation functions. Using the proposed boundary term, it is shown that the bulk fermionic two-point function agrees with the two-point function of a two dimensional conformal field theory. Some additional care, however, is required because we shall perform our calculation in Lorentzian signature, rather than analytically continuing to Euclidean signature. The reason is that we are not aware of a Euclidean version of the near-NHEK metric. A Lorentzian version of AdS/CFT (where the action carries an `$i$') \begin{equation}\label{minpart} \langle \exp\left( i\int d^d x\, \left(\bar\chi_0{\cal O}+\bar{\cal O}\chi_0\right)\right)\rangle_{{\rm QFT}}=e^{-iS_{\rm{grav}}(\chi_0,\bar \chi_0)}, \end{equation} leads to some additional subtleties that are well known in the context of AdS/CFT (see \cite{Marolf:2004fy} for a discussion). As first explained in \cite{Son:2002sd} (and later reformulated by \cite{Iqbal:2009fd}), having complex solutions to the equations of motion requires us to amend the Lorentzian version of AdS/CFT with some further constraints: (1) To evaluate the action functional appearing on the right hand side of \eqref{minpart} we should consider the solutions to the equations of motion with incoming boundary conditions. (2) To evaluate boundary terms of the action, we should not consider any contributions coming from the horizon.
(3) Applying the Euclidean AdS/CFT prescription to the Lorentzian theory means that the desired correlator plus its complex conjugate appear once the functional derivative of the gravitational action functional is taken. The correct result for the correlator is given by one of these contributions, while the other should be discarded. We shall see that these three constraints plus the equivalence of partition functions \eqref{minpart} provide the correct fermionic correlation function in Kerr/CFT. This paper is organized as follows. In Section 2 we consider the variational principle for spinor fields in Kerr/CFT and determine the boundary term. In Section 3 we perform the calculation of the fermionic two-point function using the proposed boundary term. In Section 4 we show how the result of Section 3 can be matched with a two dimensional relativistic conformal field theory. Our conclusions appear in Section 5. In Appendix \ref{appA} we present the features of the near-NHEK geometry we need for the calculation of correlation functions, while Appendix \ref{appB} collects our notations and conventions. \section{Variational Principle} The bulk action for fermions with mass $\mu$ in the near-NHEK geometry is the standard Dirac action \begin{equation}\label{bulk} S_{\rm {bulk}}=i\int d^4 x \sqrt{-g}\bar \psi \left( {\slashed D} -\mu\right)\psi, \end{equation} where we have dropped an overall normalization factor. We use the representation of the four dimensional bulk gamma matrices as given in Appendix \ref{appB}. To determine the boundary conditions on the spinor we calculate the variation of the action, which is given by \footnote{The boundary of the near-NHEK geometry is described by large but finite $r$, $r_B\gg 1$, such that $r_+-r_-\ll \lambda r_B\ll 1$, where $\lambda$ goes to zero.
$r_+, r_-$ are the positions of the outer and inner horizons of the Kerr black hole.} \begin{equation}\label{nine} \delta S_{\rm{bulk}}=i\int_{r=r_B}\, d^3x \sqrt{-g_B} \bar \psi \Gamma^r \delta \psi+\ldots, \end{equation} where the dots denote terms that vanish by the equations of motion. Here $g_B=g g^{rr}$ describes the induced boundary metric and $r_B$ is the cutoff describing the boundary of the near-NHEK geometry. The gamma matrix $\Gamma^r$ in the near-NHEK geometry takes the form \begin{equation} \Gamma^r=-\frac{r(r+4\pi T_R)}{8J\Gamma}(\Gamma^0+\Gamma^3)+\frac{1}{2}(\Gamma^0-\Gamma^3). \end{equation} Boundary conditions need to be imposed so that the variation of the gravitational action vanishes. To do so, we take a closer look at the spinor solving the Dirac equation \begin{equation} ({\slashed D} -\mu)\psi=0\, . \end{equation} The solution to this equation in the Kerr geometry was worked out by Chandrasekhar in the late seventies \cite{Chandrasekhar:1976ap}. Using the Newman-Penrose formalism, he showed that the Dirac equation can be separated into a radial and an angular equation. Finding an analytical expression for the solution proved, nevertheless, to be very difficult. For a long time only numerical solutions were available. More than thirty years later, an analytic expression for the solution to the Dirac equation in the near-NHEK limit was obtained in \cite{Hartman:2009nz}. In this limit (described in the necessary detail in Appendix \ref{appA}) the spinor computed in \cite{Hartman:2009nz} takes the form \begin{equation}\label{spinorsolution} \psi=e^{-i n_R t+i n_L \phi} \left(\begin{matrix} -R_{1/2}S_{1/2}\\ \\ \frac{R_{-1/2}S_{-1/2}}{\sqrt{2}M(1-i\cos(\theta))}\\\\ -\frac{R_{-1/2}S_{1/2}}{\sqrt{2}M(1+i\cos(\theta))}\\\\ R_{1/2}S_{-1/2}\\ \end{matrix}\right)\, , \end{equation} where $R_{\pm 1/2}= R_{\pm 1/2}(r)$ describes the radial dependence and $S_{\pm 1/2}=S_{\pm 1/2}(\theta)$.
Even though the radial part $R_{\pm 1/2}$ of the solution is in general a hypergeometric function, we only need its asymptotic expression (for large but finite $\lambda r$). The solution with infalling boundary conditions is \begin{eqnarray}\label{R} R_{1/2}(r)&=&N_{1/2}T_R^{-in_R/2-1/2}\left(A_{1/2}\left(\frac{ r}{T_R}\right)^{-1+\beta}+B_{1/2}\left(\frac{ r}{T_R}\right)^{-1-\beta}\right)+\ldots\\ R_{-1/2}(r)&=&N_{-1/2}T_R^{-in_R/2+1/2}\left(A_{-1/2}\left(\frac{ r}{T_R}\right)^{\beta}+B_{-1/2}\left(\frac{r}{T_R}\right)^{-\beta}\right)+\ldots \end{eqnarray} The coefficients appearing in these expressions are defined in terms of gamma functions \begin{eqnarray}\label{AB} A_s&=&\frac{\Gamma(1-i(n_R+n_L)-s)\Gamma(2\beta)}{\Gamma(\frac{1}{2}+\beta-in_R)\Gamma(\frac{1}{2}+\beta-in_L-s)}\ , \nonumber\\ B_s&=&\frac{\Gamma(1-i(n_R+n_L)-s)\Gamma(-2\beta)}{\Gamma(\frac{1}{2}-\beta-in_R)\Gamma(\frac{1}{2}-\beta-in_L-s)} \ . \end{eqnarray} The $N$'s describe normalization factors \begin{equation}\label{beta} \frac{N_{1/2}}{N_{-1/2}}=\frac{1/2-i(n_R+n_L)}{M(\Lambda_\ell+i\mu M)}\, ,\qquad \beta^2+n_L^2=\Lambda_\ell^2+\mu^2 M^2. \end{equation} In this paper we restrict to real values of $\beta$ for simplicity. As in AdS/CFT, only half of the components of $\psi$ can be fixed at the boundary (the other half is related to the first half by the Dirac equation and will vanish at the boundary). To decide which components of $\psi$ we would like to fix, it is convenient to introduce projection operators \begin{equation} P_\pm=\frac{1}{2}\left(1\pm\Gamma^0\Gamma^3\right)\, , \end{equation} which satisfy $P_+^2=P_+$, $P_-^2=P_-$ and \begin{equation} \Gamma^0\pm \Gamma^3=\Gamma^0 P_\pm=P_\mp \Gamma^0. \end{equation} These operators allow us to write the bulk spinor in terms of projector eigenstates as \begin{equation} \psi=\psi_++\psi_- \, .
\end{equation} Here $\psi_+$ satisfies \begin{equation} P_+\psi=\psi_+=e^{-in_R t+i n_L\phi}R_{1/2} \left(\begin{matrix} -S_{1/2} \\ 0\\ 0\\ S_{-1/2}\\ \end{matrix}\right)\, , \end{equation} while $\psi_-$ obeys \begin{equation} P_-\psi=\psi_-=e^{-in_R t+i n_L\phi}\frac{R_{-1/2}}{\sqrt{2}M} \left(\begin{matrix} 0\\ \frac{S_{-1/2}}{1-i\cos \theta} \\ -\frac{S_{1/2}}{1+i\cos \theta}\\ 0 \end{matrix}\right), \end{equation} and conjugate spinors satisfy \begin{equation} \bar\psi P_\pm=\bar\psi_\mp. \end{equation} To decide whether $\psi_+$ or $\psi_-$ is the source (which gets fixed at the boundary), we notice that there is a relation between both boundary spinors\footnote{The precise relation is given in the next section.} \begin{equation} \psi^{B}_+\sim { R_{1/2}^B \over R_{-1/2}^B}\psi^{B}_-, \end{equation} where the index $B$ denotes boundary quantities. Taking into account \eqref{AB}, this relation tells us that for real $\beta$ we should treat $\psi^B_-$ as the source, while $\psi^B_+$ vanishes at the boundary. We can now proceed to evaluate the boundary term. To do so it is convenient to write the $\Gamma^r$ matrix in terms of the projection operators \begin{equation}\label{ten} \Gamma^r=\frac{1}{8J\Gamma}r(r+4\pi T_R)P_-\Gamma^0 P_+-\frac{1}{2}P_+\Gamma^0 P_-. \end{equation} It is easy to see that the boundary term \eqref{nine} becomes \begin{equation}\label{eleven} \delta S_{\rm {bulk}}=i\int_{r=r_B}\, d^3x\, \sqrt{-g_B}\left( \frac{1}{8J\Gamma}\, r_B(r_B+4\pi T_R)\, \psi^\dagger_+\delta\psi_+-\frac{1}{2}\psi_-^\dagger\delta\psi_-\right). \end{equation} We had seen that $\psi_-$ is the source, so this spinor and its conjugate are fixed at the boundary \begin{equation}\label{cond} \delta\psi_-\Big|_{r_B}=0 \, ,\quad \delta\bar \psi_-\Big|_{r_B}=0 \, . 
\end{equation} To cancel the contribution proportional to $\delta \psi_+$ we need to add a boundary term \begin{equation}\label{bndry_term} S_{\rm{bdry}} =-\frac{r_B(r_B+4\pi T_R)}{8J\Gamma}\, i\int_{r=r_B} d^3 x \, \sqrt{-g_B}\, \psi_+^\dagger\psi_+. \end{equation} This guarantees that the variation of the total action vanishes\footnote{Here we used $\delta\psi^\dagger_+\Big|_{r_B}=0$, since $\psi_+^\dagger\sim \bar \psi_-$.} \begin{equation}\label{twelve} \delta S_{\rm {total}}=\delta S_{\rm {bulk}} + \delta S_{\rm {bdry}}=0. \end{equation} It is interesting to observe that the boundary term \eqref{bndry_term} looks similar to the non-relativistic boundary terms recently considered in \cite{Laia:2011zn}. There it was argued that non-relativistic conformal field theories can be generated through Lorentz-violating boundary terms, even though the underlying bulk theory is Lorentz invariant. One may wonder if the conformal field theory dual to the near-NHEK geometry could be non-relativistic. Some discussion of the possible connection between Kerr/CFT and non-relativistic conformal field theories has recently appeared in the literature \cite{ElShowk:2011cm}. A more extensive analysis is needed to answer this question. Notice also that the boundary term can be written as \begin{equation}\label{bound} S_{\rm {bdry}}=i\int_{r=r_B}\, d^3 x \sqrt{-g_B} \, \bar \psi \Gamma^r\psi, \end{equation} up to contact terms. This expression is familiar from the fermionic flux derived in \cite{Martellini:1977qf}, \cite{Iyer:1978du}. There it was shown that superradiance does not occur for a fermionic field in a Kerr geometry as the particle flux into the black hole is always positive. Precisely the same expression for the fermionic flux entered the scattering calculation done in \cite{Hartman:2009nz}, so the above boundary term does not come as a surprise. It is nice to see this expression emerge from the variational principle.
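For completeness, let us verify the projector algebra used in this section. Whatever the signature conventions of Appendix \ref{appB}, $\Gamma^0$ and $\Gamma^3$ anticommute and their squares have opposite signs in Lorentzian signature, so that $(\Gamma^0\Gamma^3)^2 = -(\Gamma^0)^2(\Gamma^3)^2 = 1$. Hence $$ P_\pm^2 = \frac{1}{4}\left( 1 \pm 2\, \Gamma^0\Gamma^3 + (\Gamma^0\Gamma^3)^2\right) = P_\pm \, , \qquad P_+ P_- = \frac{1}{4}\left( 1 - (\Gamma^0\Gamma^3)^2\right) = 0 \, . $$ Moreover, moving $\Gamma^0$ through $\Gamma^0\Gamma^3$ flips the sign of the product, $\Gamma^0 (\Gamma^0\Gamma^3) = -(\Gamma^0\Gamma^3)\Gamma^0$, which immediately yields $\Gamma^0 P_\pm = P_\mp \Gamma^0$.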
\section{Fermionic Two-Point Function: the Bulk} To calculate correlation functions for spinors in the near-NHEK geometry we need to evaluate the boundary term \eqref{bound} for spinors satisfying the equations of motion. We would like to express the bulk spinor \eqref{spinorsolution} in terms of its value on the boundary. To do so, it is convenient to factor off the $\theta$ dependence by introducing spinors $a_{\pm}$ \begin{equation} \psi=e^{-i n_R t+i n_L \phi}\left( R_{1/2} \underbrace{\left(\begin{matrix}-S_{1/2}\\0 \\ 0 \\S_{-1/2} \end{matrix} \right)}_{a_+} +R_{-1/2} \underbrace{\left( \begin{matrix} A & 0&0&0\\ 0&\frac{1}{\sqrt{2}M(1-i\cos \theta)}&0&0\\ 0&0&\frac{1}{\sqrt{2}M(1+i\cos \theta)}&0\\ 0 & 0&0&B \end{matrix} \right)}_{Z} \underbrace{\left(\begin{matrix}0\\S_{-1/2}\\-S_{1/2}\\0 \end{matrix} \right)}_{a_-} \right) \end{equation} $A$ and $B$ are arbitrary non-zero entries, so that $Z$ is invertible. The eigenstates $\psi_\pm$ of the projection operator can be conveniently written as \begin{eqnarray}\label{fourfour} \psi_+&=&e^{-in_R t+in_L \phi}R_{1/2} a_+,\nonumber\\ \psi_-&=&e^{-in_R t+in_L \phi}R_{-1/2}Za_-. \end{eqnarray} Since there is a relation between $a_+$ and $a_-$, $\Gamma^0 a_+=a_-$, we can write the bulk spinor in terms of $a_-$ only \begin{equation} \psi=e^{-in_R t+i n_L \phi}\left( R_{1/2}\Gamma^0+R_{-1/2}Z \right) a_-. \end{equation} Using \eqref{fourfour} we can express this spinor and its conjugate in terms of boundary data \begin{eqnarray} \psi&=&\left(R_{1/2}\Gamma^0 Z^{-1}+R_{-1/2}\right)\frac{\psi^B_-}{R_{-1/2}^B},\nonumber\\ \bar \psi& =& \frac{\bar \psi^{B}_-}{\bar R_{-1/2}^B}\left(\bar R_{1/2} \Gamma^0Z^{*-1}+\bar R_{-1/2}\right), \end{eqnarray} where the bar on $\bar R_{\pm 1/2}$ means complex conjugation. To apply the prescription \eqref{2ptfct} for computing the boundary two-point function, we would like to express the boundary term as a double integral over momenta.
This will allow us to take the functional derivative. Fourier transforming along the $t$ and $\phi$ directions, we introduce new spinors \begin{eqnarray} \psi_F (r,\theta,n_L,n_R)&=&\delta(n_L-n_L')\delta(n_R-n_R')\Big(R_{1/2} (r, n_L',n_R')\Gamma^0 Z^{-1}+R_{-1/2}(n_L',n_R')\Big)\frac{ \psi^B_-(\theta,n_L',n_R')}{R_{-1/2}^B(n_L',n_R')}\nonumber\\ {\bar \psi}_F (r,\theta, n_L,n_R)&=&\delta(n_L-n_L')\delta(n_R-n_R')\,\frac{{\bar \psi}^{B}_-(\theta,n_L',n_R')}{\bar R_{-1/2}^B(n_L',n_R')} \Big(\bar R_{1/2}(r,n_L',n_R')\Gamma^0 Z^{*-1}+\bar R_{-1/2}(r,n_L',n_R')\Big)\nonumber \end{eqnarray} where $n_L$ and $n_R$ are the momenta dual to the coordinates $t$ and $\phi$. We insert $\psi$ and $\bar\psi$ into the boundary term \begin{eqnarray} \int d\theta\, \int dt\, d\phi\, \sqrt{-g} \,\bar \psi \Gamma^r\psi\Big|_{r=r_B}&=&\int d\theta\, \sqrt{-g_B}\, \int dn_L' dn_R' \, \int dn_L'' dn_R''\, \delta(n_L'-n_L'')\, \delta(n_R'-n_R'')\times \nonumber\\ &&\times {\bar \psi}_F (r_B,\theta, n_L',n_R')\, \Gamma^r\, { \psi}_F (r_B, \theta,n_L'',n_R''), \end{eqnarray} where the determinant of the metric only depends on $\theta$ and the cutoff $r_B$ \begin{equation} \sqrt{-g_B}=(2J\Gamma(\theta))^{3/2}\Lambda(\theta)\, r_B. \end{equation} Using the explicit form of $\Gamma^r$ and the properties of the projection operator listed in Appendix \ref{appB}, we can evaluate the integrand of the boundary term \begin{eqnarray}\label{bdr} {\bar \psi}_F (r_B,\theta, n_L',n_R')&\Gamma^r&{ \psi}_F (r_B,\theta, n_L'',n_R'')=-\frac{r_B^2}{8J\Gamma}{\bar \psi}_+\Gamma^0\psi_+ +\frac{1}{2}{\bar \psi}_-\Gamma^0\psi_-\nonumber\\ &=&-\frac{r_B^2}{8J\Gamma}\frac{R^B_{1/2}\bar R^B_{1/2}}{R_{-1/2}^B \bar R_{-1/2}^B}{\bar \psi}_-^B\Gamma^0 |Z^{-1}|^2\psi_-^B+\frac{1}{2}{\bar\psi}^B_-\Gamma^0 \psi_-^B, \end{eqnarray} where we have dropped the coordinate dependence on the rhs for notational simplicity. We would like to factor out the $\theta$-dependence of the boundary term.
To do so, notice that $\psi_-^{B}$ can be split into a two-dimensional chiral spinor $\chi_0$ and a boundary spinor describing the $\theta$ dependence \begin{equation} \psi_{-}^B=\chi_{0}\otimes(S^+\oplus S^-). \end{equation} In the four-dimensional representation space we know each spinor explicitly \begin{equation} \psi^{B}_-(\theta, n_R,n_L)=\underbrace{\delta(n_L-n_L')\delta(n_R-n_R')R_{-1/2}^B(n_L',n_R')}_{\chi_0(n_L',n_R')} \left ( \underbrace{Z\left( \begin{matrix} 0\\S_{-1/2}\\0\\0 \end{matrix} \right)}_{S^+}+ \underbrace{Z\left( \begin{matrix} 0\\0\\-S_{1/2}\\0 \end{matrix} \right)}_{S^-} \right ). \end{equation} \begin{equation} \bar\psi^{B}_-(\theta, n_R,n_L)=\underbrace{\delta(n_L-n_L')\delta(n_R-n_R')\bar R_{-1/2}^B(n_L',n_R')}_{\bar \chi_0(n_L',n_R')} \left ( \underbrace{Z\left( \begin{matrix} 0\\S_{-1/2}\\0\\0 \end{matrix} \right)}_{S^+}+ \underbrace{Z\left( \begin{matrix} 0\\0\\-S_{1/2}\\0 \end{matrix} \right)}_{S^-} \right )^\dagger\, \Gamma^0. \end{equation} Inserting this into the boundary term \eqref{bdr}, we notice that the $\theta$ dependence of the relevant contribution (the first term of the expression below) can be factored out \begin{eqnarray} &&\int d\theta \sqrt{-g_B}(|S_{1/2}|^2+|S_{-{1/2}}|^2) \int dn_L' dn_R'\, \int \, dn_L'' dn_R''\, \bar \chi_0(n_L',n_R')\chi_0(n_L'',n_R'')\times\nonumber\\ &&\times\delta(n_L'-n_L'')\delta(n_R'-n_R'')\times \left(-\frac{r_B^2}{8J\Gamma}\frac{R^B_{1/2}(r_B, n_L'',n_R'')\bar R^B_{1/2}(r_B, n_L',n_R')}{R_{-1/2}^B(r_B, n_L'',n_R'')\bar R_{-1/2}^B(r_B, n_L',n_R')}\ +\frac{1}{4M^2(1+\cos ^2\theta)}\right)\, .\nonumber\\ \end{eqnarray} The second term in this expression describes a contact term that can be ignored, so we evaluate the first term. To do so we expand $R_{1/2}$ and $R_{-1/2}$ around $r_B$ using equations \eqref{R}-\eqref{beta} to evaluate the individual contributions. 
We are left with \begin{eqnarray} &&\frac{\delta^2 S}{\delta\chi_0(n_L,n_R)\,\delta\bar\chi_0(n_L,n_R)}\sim r_B^3\,\frac{R^B_{1/2}( n_L,n_R)\bar R^B_{1/2}( n_L,n_R)}{R_{-1/2}^B(n_L,n_R)\bar R_{-1/2}^B( n_L,n_R)}\nonumber\\ &=&\frac{N_{1/2}}{N_{-1/2}}\, \frac{\bar N_{1/2}}{\bar N_{-1/2}}\,\left(\frac{A_{1/2}}{A_{-1/2}}\,\frac{\bar A_{1/2}}{\bar A_{-1/2}}r_B +\frac{B_{1/2}}{A_{-1/2}}\,\frac{\bar A_{1/2}}{\bar A_{-1/2}}T_R^{2\beta}r_B^{-2\beta+1}+\frac{A_{1/2}}{A_{-1/2}}\,\frac{\bar B_{1/2}}{\bar A_{-1/2}}T_R^{2\beta}r_B^{-2\beta+1}+{\cal O}(r_B^{-4\beta+1})\right) \nonumber\\ &=&\frac{1}{M^2}r_B+\left(\mu+\frac{i\Lambda_\ell}{M}\right) \frac{\bar N_{1/2}\bar A_{1/2}} {\bar N_{-1/2}\bar A_{-1/2} }\, G_R(n_L,n_R)r_B^{-2\beta+1}+\left(\mu-i\frac{\Lambda_\ell}{M}\right) \frac{ N_{1/2} A_{1/2}} { N_{-1/2} A_{-1/2} }G^*_R(n_L,n_R)r_B^{-2\beta+1}\nonumber\\ &&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+{\cal O}\left(r_B^{-4\beta+1}\right) \end{eqnarray} with \begin{equation}\label{GR} G_R(n_L,n_R)=-\frac{i}{\beta+in_L}\frac{\Gamma(-2\beta)}{\Gamma(2\beta)}\frac{\Gamma(\beta-in_L)}{\Gamma(-\beta-in_L)}\frac{\Gamma(\frac{1}{2}+\beta-in_R)}{\Gamma(\frac{1}{2}-\beta-in_R)}T_R^{2\beta}. \end{equation} The first term above is obviously the contact term. The second and third terms are complex conjugate to each other. As for the scalar two-point function in Lorentzian AdS/CFT considered in \cite{Son:2002sd}, this means that the two-point function is real, which is not what we want. The proposal of \cite{Son:2002sd} is to drop the complex conjugate solution. The $r_B$-factor here can be absorbed into $\chi_0$ by rescaling \begin{equation} \chi_0\rightarrow r_B^{\beta-1/2}\chi_0. \end{equation} Finally, we factor out the ratio $\frac{N_{1/2}A_{1/2}}{N_{-1/2}A_{-1/2}}$, which is momentum dependent but not part of the two-point function. A similar factor emerges in AdS/CFT calculations \cite{Iqbal:2009fd}. 
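As a quick numerical sanity check of \eqref{GR}, one can evaluate $G_R$ directly with standard special-function libraries. The minimal Python sketch below (the parameter values $\beta=0.3$ and $T_R=1$ are illustrative, and we assume a real $\beta\in(0,\frac{1}{2})$ so that $\Gamma(-2\beta)$ is finite) verifies the reality structure $G_R(-n_L,-n_R)=-G_R^*(n_L,n_R)$, which follows from $\Gamma(\bar z)=\overline{\Gamma(z)}$ and reflects the fact that the two conjugate terms in the expansion above combine into a real quantity.

```python
import numpy as np
from scipy.special import gamma as Gamma  # supports complex arguments

def G_R(nL, nR, beta=0.3, T_R=1.0):
    """Evaluate the retarded Green's function of eq. (GR).

    Assumes real beta with 0 < beta < 1/2; nL, nR are real momenta.
    """
    pref = -1j / (beta + 1j * nL)
    return (pref
            * Gamma(-2 * beta) / Gamma(2 * beta)
            * Gamma(beta - 1j * nL) / Gamma(-beta - 1j * nL)
            * Gamma(0.5 + beta - 1j * nR) / Gamma(0.5 - beta - 1j * nR)
            * T_R ** (2 * beta))

# Reality structure: Gamma(conj(z)) = conj(Gamma(z)) for the gamma function
# implies G_R(-nL, -nR) = -conj(G_R(nL, nR)), i.e. Re G_R is odd and
# Im G_R is even under (nL, nR) -> (-nL, -nR).
g = G_R(0.7, 1.3)
assert np.isclose(G_R(-0.7, -1.3), -np.conj(g))
```

This is only a numerical cross-check of the analytic formula, not part of the derivation.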
The expression \eqref{GR} agrees with the proposal of \cite{Chen:2010ni}. Recall that the relation between $G_R$ and the absorption probability $\sigma$ is ${\rm{Im}}\, G_R\sim \sigma$. The Green's function we calculated precisely reproduces the absorption probability of \cite{Bredberg:2009pv}.\footnote{We thank Tom Hartman for pointing this out.} \section{Fermionic Two-Point Function: CFT Result}\label{CFTcomp} This section reviews some basics of finite temperature conformal field theory. We would like to write the finite temperature two-point function of a two-dimensional CFT in momentum space and compare it with the result of the previous section. We start with the more familiar zero temperature correlation function in coordinate space. The zero temperature two-point function of a conformal field theory operator with conformal weights \begin{equation} h_L=\frac{1}{2}\left(\Delta-\frac{1}{2}\right)\ , \qquad h_R=\frac{1}{2}\left(\Delta+\frac{1}{2}\right)\ , \end{equation} takes (up to a constant) the following form in coordinate space \begin{equation}\label{retgreens} \langle {\cal O}(\vec x)\bar {\cal O}(\vec y)\rangle \sim\frac{\gamma_i(x^i-y^i)}{|\vec x-\vec y|^{(2\Delta+1)}} \ . 
\end{equation} More explicitly, we can use the following representation of the gamma matrices \begin{equation}\label{gammas} \gamma^0=\left( \begin{array}{cc} 0&-1\\1&0 \end{array} \right)\ , \qquad \gamma^1=\left( \begin{array}{cc} 0&1\\1&0 \end{array} \right) \ , \end{equation} and the coordinates \begin{eqnarray} t^+_1&=&x^0+x^1 \ , \qquad t^-_1=x^0-x^1\ ,\nonumber\\ t^+_2&=&y^0+ y^1 \ , \qquad t^-_2=y^0-y^1 \ ,\nonumber \end{eqnarray} to rewrite (\ref{retgreens}) as \begin{equation} \langle {\cal O}(t_1^+,t_1^-) \bar{\cal O}(t_2^+,t_2^-)\rangle \sim \gamma^0 \left( \begin{array}{cc} \frac{1} { (t^+_{12})^{2h_R-1} (t^-_{12})^{2h_L+1} }&0\\ 0&\frac{1}{{(t^+_{12})}^{2h_R}{(t^-_{12})}^{2h_L}} \end{array} \right) \ , \end{equation} where we have introduced $t_{12}^+=t_1^+-t_2^+$ and similarly for $t_{12}^-$. The finite temperature correlation function is obtained by mapping the above result to a torus with circumferences $1/T_L$ and $1/ T_R$ \begin{equation}\label{resexpr} \langle {\cal O}(t_1^+,t_1^-)\bar {\cal O}(t_2^+,t_2^-)\rangle \sim\gamma^0\left( \begin{array}{cc} \left(\frac{\pi T_R}{\sinh(\pi T_R t^+_{12})}\right)^{2h_R-1} \left(\frac{\pi T_L}{\sinh(\pi T_L t^-_{12})}\right)^{2h_L+1} &0\\ 0&\left(\frac{\pi T_R}{\sinh(\pi T_R t^+_{12})}\right)^{2h_R} \left(\frac{\pi T_L}{\sinh(\pi T_L t^-_{12})}\right)^{2h_L} \end{array} \right). \end{equation} The formula (\ref{resexpr}) is the two-point function $\langle {\cal O}\bar {\cal O}\rangle$ for a non-chiral spinor operator ${\cal O}$. The AdS/CFT correspondence gives a correlator only between the chiral/antichiral parts of the operator ${\cal O}$, \begin{eqnarray}\label{chiral} {\cal O}^{\pm}=\frac{1}{2}\left(1\pm\gamma^0\gamma^1\right){\cal O} \ ,\qquad \bar{\cal O}^{\pm}=\bar {\cal O}\frac{1}{2}\left(1\mp\gamma^0\gamma^1\right) \ . 
\end{eqnarray} Inserting (\ref{chiral}) into $\langle {\cal O}\bar {\cal O}\rangle$ with ${\cal O}=\left(\begin{array}{c}{\cal O}_1\\{\cal O}_2\end{array}\right)$ we see that the non-zero elements of (\ref{resexpr}) can be identified with \begin{equation} \langle {\cal O}^+\bar {\cal O}^+\rangle=\gamma^0\left( \begin{array}{cc} \langle{\cal O}_2 {\cal O}_2\rangle & 0\\ 0&0 \end{array} \right) \ ,\qquad \langle {\cal O}^-\bar {\cal O}^-\rangle=\gamma^0\left( \begin{array}{cc} 0&0\\ 0& \langle{\cal O}_1 {\cal O}_1\rangle \end{array} \right)\, . \end{equation} After the analytic continuation $t^\pm\rightarrow it^\pm$, we Fourier transform the two-point function assuming only integer frequencies $\omega_E=2\pi k T$ by using \begin{equation} \int_{0}^{1/T}dt\, e^{i\omega_E t}\left(\frac{\pi T}{\sin(\pi Tt)}\right)^{2h}= \frac{(\pi T)^{2h-1}2^{2h}e^{i\omega_E/2T}\Gamma(1-2h)}{\Gamma\left(1-h+\frac{\omega_E}{2\pi T}\right)\Gamma\left(1-h-\frac{\omega_E}{2\pi T}\right)}\ . \end{equation} Once we identify $k_L=-in_L , k_R=-in_R$ and $h_L=\beta , h_R=\beta+\frac{1}{2}, T_L=\frac{1}{2\pi}, T_R=T_R$, the two-point function $\langle {\cal O}^-{\bar{\cal O}}^-\rangle$ on the CFT side becomes\footnote{Here we have absorbed the $m\Omega_R$ appearing in eq. (5.13) of \cite{Bredberg:2009pv} into our definition of $n_R$.} \begin{equation} \langle {{\cal O}}^-{\bar{\cal O}}^-\rangle\sim T_R^{2\beta}\frac{1}{\beta+in_L}\frac{\Gamma(-2\beta)\Gamma(\beta-in_L)\Gamma(\frac{1}{2}+\beta-in_R)}{\Gamma(2\beta)\Gamma(-\beta-in_L)\Gamma(\frac{1}{2}-\beta-in_R)}\, . \end{equation} This matches the expression computed on the bulk side. \section{Conclusions} In this note we have calculated finite temperature two-point correlation functions for fermionic fields in Kerr/CFT using the variational principle. Fermionic fields are particularly interesting because their correlation functions describe semi-realistic physical observables, such as the spectral function. 
To perform this calculation we have followed an approach well known from AdS/CFT. After analyzing the variational principle we have seen that a boundary term needs to be added to the Dirac action for the variational principle to be satisfied. This boundary term is responsible for generating non-trivial fermion correlation functions. Kerr/CFT is a duality in which a four-dimensional bulk geometry is dual to a two-dimensional conformal field theory. The fact that the conformal field theory lives in two fewer dimensions than the original bulk theory may at first sound surprising, because from AdS/CFT we are used to the conformal field theory living in one dimension fewer than the bulk rather than two. Fermions allow us to understand this aspect of Kerr/CFT very nicely because fermions, as opposed to scalars, are very sensitive to the number of space-time dimensions they live in. The boundary of the near-NHEK geometry is a three-dimensional theory described by the coordinates $t$, $\phi$ and $\theta$, while the radial coordinate approaches a large but finite cutoff $r_B$. Performing the calculation of the two-point function for two spinors living on the 3D boundary, we have seen that the $\theta$ dependence of the correlation function factors out. Therefore, the fermion correlation function effectively becomes that of a two-dimensional relativistic conformal field theory. Our calculation was performed in Lorentzian signature rather than with an analytic continuation to Euclidean signature. We are not aware of a sensible Euclidean analytic continuation of the near-NHEK metric. For this reason we needed to impose some additional constraints on the two-point function that are well known from Lorentzian approaches to AdS/CFT \cite{Son:2002sd}. 
An interesting observation is that the gravitational action functional needed the inclusion of a boundary term that breaks Lorentz invariance, and one may wonder if the boundary conformal field theory could be a non-relativistic theory once corrections to the leading terms are included. This would be similar in spirit to the recent discussion appearing in \cite{Laia:2011zn} in the context of AdS/CFT. There, a bulk theory in $AdS_4$ space-time is supplied with boundary conditions on the spinor field that break Lorentz invariance, and it is argued that the dual conformal field theory is non-relativistic. A discussion of the connection between Kerr/CFT and non-relativistic conformal field theories has recently appeared in \cite{ElShowk:2011cm}. It would be interesting to explore this connection in more detail. Finally, it would be interesting to extend our calculation to the Kerr-Newman geometry, as well as to other correlation functions involving e.g.\ fermions and gauge fields. We hope to report on this in the future. \section*{Acknowledgments} We benefited from discussions with Katrin Becker, David Chow, Sera Cremonini, Umut Gursoy, Chris Pope, Daniel Robbins and Jan Troost, as well as from correspondence with Aaron Amsel, Geoffrey Compere, Monica Guica, Tom Hartman, Gary Horowitz, Wei Song and Andrew Strominger. We would like to thank Tom Hartman and Andrew Strominger for comments on the manuscript. This work was supported by the NSF under grants PHY-0505757 and DMS-0854930, and by Texas A\&M University.
1803.03627
\section{Introduction} Data depth measures play an important role when analyzing complex data sets, such as functional or high-dimensional data. The main goal of depth measures is to provide a center-outward ordering of the data, generalizing the concept of median. Depth measures are also useful for describing different features of the underlying distribution of the data. Moreover, depth measures are powerful tools for dealing with several inference problems, such as location and symmetry tests, classification, outlier detection, etc. Nonetheless, since one of their major characteristics is that the depth values decrease along any half-line ray from the center, they are not suitable for capturing characteristics of the distribution when the data are multimodal. Hence, over the last few years, several definitions of local depth have been introduced, with the aim of revealing the local features of the underlying distribution. The basic idea is to restrict a global depth measure to a neighborhood of each point of the space. In this way, a local depth measure should behave as a global depth measure with respect to the neighborhoods of the different points. Agostinelli and Romanazzi (2011) gave the first definition of local depth for the case of multivariate data. They extended the concepts of simplicial and half-space depth so as to allow recording the local space geometry near a given point. For simplicial depth, they consider only random simplices with sizes no greater than a certain threshold, while for half-space depth, the half-spaces are replaced by infinite slabs with finite width. Both definitions strongly rely on a tuning parameter, which retains a constant-size neighborhood of every point of the space and plays a role analogous to that of the bandwidth in density estimation. Desirable theoretical statistical properties are attained in the case of univariate absolutely continuous distributions. 
Paindaveine and Van Bever (2013) introduce a general procedure for multivariate data that allows converting any global depth into a local depth. The main idea of their definition is to study local environments. This means regarding the local depth as a global depth restricted to some neighborhood of the point of interest. They obtain strong consistency results of the sample version with its population counterpart. All the proposals provide a continuum between definitions of local and global depth. More recently, for the case of functional data, Agostinelli (2016) gives a definition of local depth extending the half-region depth ideas introduced by Lopez-Pintado and Romo (2011). This definition is also suitable for large finite-dimensional datasets. Asymptotic results are obtained. Our goal is to give a general definition of local depth for random elements in a Banach space, extending the definition of global depth given by Cuevas and Fraiman (2009), where they introduce the Integrated Dual Depth (IDD). The main idea of IDD is based on combining one-dimensional projections and the notion of one-dimensional depth. Let $\Omega$ be a probability space and $\mathbb{E}$ a separable Banach space. Denote by $\mathbb{E}'$ the separable dual space. Let $X:\Omega\longrightarrow \mathbb{E}$ be a random element in $\mathbb{E}$ with distribution $P$ and $Q$ a probability measure in $\mathbb{E}'$ independent of $P.$ The IDD is defined as \begin{equation}\label{IDD} IDD(x,P) = \int D(f(x),P_f) dQ(f), \end{equation} where $D$ is a univariate depth (for instance, simplicial or Tukey depth), $f \in \mathbb{E}',$ $x \in \mathbb{E}$ and $P_f$ is the univariate distribution of $f(X).$ In the present paper we define the Integrated Dual Local Depth (IDLD). The main idea is to replace the global depth measure in Equation (\ref{IDD}) by a local one-dimensional depth measure following the definition given in Paindaveine and Van Bever (2013). 
We study how the classical properties, introduced by Zuo and Serfling (2000), should be analyzed within the framework of local depth. We prove, under mild regularity conditions, that our proposal enjoys those properties. Moreover, uniform strong consistency results are exhibited for the empirical local depth with respect to its population counterpart, and also for the local depth regions. The main advantages of our proposal are its flexibility in dealing with general data and its low computational cost, which enables it to work with high-dimensional data. As a natural application, we propose a clustering procedure based on local depths, and illustrate its performance with synthetic and real data, for different kinds of data. The remainder of the paper is organized as follows. In Section 2 we define the integrated dual local depth and study its basic properties. Section 3 is devoted to the asymptotic study of the proposed local depth measure. In Section 4 the local depth regions are defined and the consistency results are exhibited. A clustering procedure based on local depth regions is proposed in Section 5. Simulations and real data examples are given in Section 6. Some concluding remarks are given in Section 7. All the proofs appear in the Appendix. \section{General Framework and Definitions}\label{RandomLocal} In this section, we first review the concept of local depth for the univariate case. Then we define the Integrated Dual Local Depth, and we finally show that, under mild regularity assumptions, our proposal has good theoretical properties that correspond to those established in Paindaveine and Van Bever (2013). 
Let $P^{1}$ be a probability measure on $\mathbb{R}$ and $x\in{\mathbb{R}}.$ Let $LD(x,P^{1})$ be the local depth measure of $x$ with respect to $P^{1}$, for example, the univariate simplicial depth, that is, \begin{equation} LD_S^{\beta}(x,P^{1})=\frac{2}{\beta^2}\left(F^{1}(x+\lambda^{\beta}_x)-F^{1}(x) \right) \left(F^{1}(x)-F^{1}(x-\lambda^{\beta}_x)\right), \label{profsimplocalunidim} \end{equation} where $F^{1}$ is the cumulative distribution function of $P^{1}$ and $\lambda^{\beta}_x$ is the neighborhood width defined as follows. \begin{defn}\label{localityparamdef} Let $F$ be a univariate cumulative distribution function and $x \in \mathbb{R}.$ Then, for $\beta \in (0,1],$ we define the neighborhood width $\lambda^{\beta}_x$ by \begin{equation} \lambda^{\beta}_x=\inf{ \left\{\lambda>0 : F(x+\lambda)-F(x-\lambda) \geq \beta \right\}}, \label{localityparam} \end{equation} where $\beta$ is the locality level. \end{defn} \begin{remark} If $F$ is absolutely continuous, the infimum in Equation (\ref{localityparam}) is attained and hence $$ \lambda^{\beta}_x=\min{ \left\{ \lambda>0 : F(x+\lambda)-F(x-\lambda) \geq \beta\right\} }. $$ Moreover, it is clear that if $\beta_1 < \beta_2,$ then $\lambda^{\beta_1}_x < \lambda^{\beta_2}_x.$ \end{remark} The locality level $\beta$ is a tuning parameter that determines how local the notion of centrality is: if $\beta$ is high, the local depth of $x$ approaches its usual depth, whereas if $\beta$ is low, it only describes the centrality of $x$ within a small neighborhood. As $\beta$ tends to one, the local depth measure tends to the global depth measure. We can also define, in an analogous way, the Tukey univariate local depth, \begin{equation*} LD_H^{\beta}(x,P^{1})=\frac{1}{\beta}\min{ \left\{F^{1}(x+\lambda^{\beta}_x)-F^{1}(x),F^{1}(x)-F^{1}(x-\lambda^{\beta}_x)\right\} }. 
\end{equation*} In what follows, without loss of generality, we restrict our attention to the case of simplicial local depth, $LD_S^{\beta}$. \subsection{Integrated Dual Local Depth} \label{IDLD} Our aim in this section is to extend the IDD introduced by Cuevas and Fraiman (2009) to the local setting. The IDD is a depth measure defined for random elements in a general Banach space. The idea is to project the data along random directions and compute the univariate depth measure of the projected one-dimensional data. To obtain a global depth measure, these univariate depth measures are integrated. Under mild regularity conditions, the IDD satisfies the basic properties of depth measures described by Zuo and Serfling (2000), and it is strongly consistent. In addition, it is important to remark that its computational cost is low, even in high dimensions, since it is based on the repeated computation of one-dimensional projections. Let $\Omega$ be a probability space and $\mathbb{E}$ a separable Banach space, with $\mathbb{E}'$ its separable dual space. Let $X:\Omega\longrightarrow \mathbb{E}$ be a random element in $\mathbb{E}$ with distribution $P,$ $Q$ a probability measure in $\mathbb{E}'$ independent of $P$, $\beta \in (0,1],$ and $x \in \mathbb{E}$. We define the Integrated Dual Local Depth (IDLD), \begin{equation}\label{IDLD} IDLD^{\beta}(x,P) = \int LD_{S}^{\beta}(f(x),P_f) dQ(f), \end{equation} where $LD_{S}^{\beta}$ is the univariate local depth given in Equation (\ref{profsimplocalunidim}), $f \in \mathbb{E}',$ $x \in \mathbb{E}$ and $P_f$ is the univariate distribution of $f(X).$ As suggested by Cuevas and Fraiman, in the infinite-dimensional setting $Q$ may be chosen to be a non-degenerate Gaussian measure, and in the multivariate setting the uniform distribution on the unit sphere. 
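To make the definition concrete, the following is a minimal Monte Carlo sketch (in Python; the function names, sample sizes and cluster locations are ours, chosen for illustration) of the empirical IDLD in $\mathbb{R}^p$: $Q$ is taken to be the uniform distribution on the unit sphere, each projection $u^{\top}X$ is scored with the univariate local simplicial depth of Equation (\ref{profsimplocalunidim}) computed from the empirical cdf, and the scores are averaged over random directions.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_simplicial_depth(z, proj, beta):
    # Univariate local simplicial depth of z w.r.t. the empirical cdf of
    # the projected sample `proj`, with empirical neighborhood width given
    # by the k-th smallest distance, k = [n * beta].
    n = len(proj)
    k = max(int(n * beta), 1)
    lam = np.sort(np.abs(proj - z))[k - 1]
    beta_k = k / n
    F = lambda t: np.mean(proj <= t)            # empirical cdf
    return (2.0 / beta_k**2) * (F(z + lam) - F(z)) * (F(z) - F(z - lam))

def idld(x, sample, beta, n_dir=200):
    # Monte Carlo approximation of IDLD: average the projected local depths
    # over n_dir directions drawn uniformly on the unit sphere.
    dirs = rng.standard_normal((n_dir, sample.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return float(np.mean([local_simplicial_depth(u @ x, sample @ u, beta)
                          for u in dirs]))

# Two well-separated Gaussian clusters in R^2: a cluster center is locally
# deep, while a far-away outlier has local depth close to zero.
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)), rng.normal(5.0, 1.0, (200, 2))])
d_center = idld(np.zeros(2), X, beta=0.4)
d_out = idld(np.full(2, 12.0), X, beta=0.4)
assert d_center > d_out
```

The cost per evaluation point is one sort per direction, which is what keeps the procedure feasible in high dimensions.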
With a slight abuse of notation, we write $F_f = F_{f(X)}$ for the cumulative distribution function of $f(X).$ Specifically, it reduces to $$F_{f(X)}(t) = P_{f(X)} \left( (- \infty,t] \right) = P(f(X) \leq t).$$ It is clear that the IDLD is well-defined, since it is non-negative and bounded by $\frac{1}{2}$. Zuo and Serfling (2000) established the general properties that depth measures should satisfy (\textbf{P. 1} - \textbf{P. 6}). Paindaveine and Van Bever (2013) extend those properties to the local depth framework. We describe the properties satisfied by IDLD. The first property deals with the invariance of local depths. For the finite-dimensional case, IDLD is independent of the coordinate system. This property is inherited from the IDD. Since IDLD is a generalization of IDD, which is not in general affine invariant (i.e., if $A$ is a non-singular linear transformation in $\mathbb{R}^p$ and $P_{AX}$ denotes the distribution of $AX,$ then $D(Ax,P_{AX})$ is not equal to $D(x,P_{X})$), neither is IDLD. It is clear that IDLD is also invariant under translations and changes of scale. \begin{P1*} Let $\mathbb{E}$ be a finite-dimensional Banach space, $X \in \mathbb{E}$ a random vector, and $Q$ the Haar measure on the unit sphere of $\mathbb{E}'$ independent of $P_X.$ Let $A : \mathbb{E} \rightarrow \mathbb{E}$ be a linear transformation such that $|\det(A)|=1$, $b \in \mathbb{E}$ and $\beta \in (0,1].$ Then $IDLD^{\beta}(Ax+b,P_{AX+b}) = IDLD^{\beta}(x,P_X).$ \label{InvarianzaAfin} \end{P1*} The proof appears in the Appendix A. \begin{remark} It is well known that the spatial median is not affine invariant; hence, transformation and retransformation methods have been designed to construct affine equivariant multivariate medians (Chakraborty and Chaudhuri, 1996, 1998). IDLD can be modified following the ideas of Kot\'{i}k and Hlubinka (2017) to attain this property. 
\end{remark} Depth measures are powerful analytical tools, especially in cases where the random element enjoys symmetry properties. Local depths should locally (restricted to certain neighborhoods) inherit these properties. Hence, we give an appropriate definition of local symmetry. \begin{defn} \label{bsimetrica} Let $X$ be a real random variable and $\beta \in (0,1].$ Then $X$ is said to be $\beta$-symmetric about $\theta$ if the cumulative distribution function $F$ satisfies \begin{equation} F \left( \theta + \lambda_{\theta}^{\beta'} \right) - F( \theta ) = \frac{\beta'}{2}, \mbox{ for every } 0<\beta' \leq \beta. \label{bsymm} \end{equation} A random element $X$ in a Banach space $\mathbb{E}$ is $\beta$-symmetric about $\theta$ if for every $f \in \mathbb{E}',$ $f(X)$ is $\beta$-symmetric about $f(\theta)$. \end{defn} The notion of $\beta$-symmetry aims to locally capture the behavior of a unimodal random variable on a neighborhood of probability $\beta$ about $\theta,$ the locally deepest point. Figure \ref{betaSimLindo}(a) and (b) exhibit a bimodal distribution, with modes at $\theta=1$ and $\theta=4.$ In the former, both modes are local symmetry points for $\beta=0.25$, while in the latter $\theta=4$ is a local symmetry point for $\beta=0.4$ but $\theta=1$ is not: the shaded area around $\theta=1$ is non-symmetric. \begin{figure}[htbp] \centering \subfigure[$\theta=1$ and $\theta=4$ are local symmetry points with locality level $0.25$]{\includegraphics[width=50mm]{./bimodalunidimA}} \hspace{10mm} \subfigure[$\theta=4$ is a local symmetry point with locality level $0.4,$ while $\theta=1$ is not a local symmetry point at locality level $0.4$]{\includegraphics[width=50mm]{./bimodalunidimB}} \caption{Local symmetry points.} \label{betaSimLindo} \end{figure} An important property of depth measures is maximality at the center, meaning that if $P$ is symmetric about $\theta,$ then $D(x,P)$ attains its maximum value at that point. 
This property should be inherited by local depths if the distribution $P$ is unimodal and convex. Local depths are relevant for detecting local features, for instance local centers; hence our aim is to extend the property of maximality at the center to each point $\theta$ of $\beta$-symmetry. \begin{P2*} Let $X \in \mathbb{E}$ be a continuous random element $\beta$-symmetric about $\theta.$ For $\beta \in (0,1]$ we have that \begin{equation} IDLD^{\beta'}(\theta,P_{X}) = \displaystyle \max_{x \in \mathbb{E}} IDLD^{\beta'}(x,P_X), \mbox{ for every } 0<\beta' \leq \beta. \end{equation} \label{bmaximality} \end{P2*} The proof appears in the Appendix A. Proposition \ref{bcsymmetry} bridges the definition of $\beta$-symmetry with the usual definition of $C$-symmetry (see Zuo and Serfling, 2000). \begin{proposition} Let $X \in \mathbb{E}$ be a continuous random element $C$-symmetric about $\theta.$ Then $X$ is $\beta$-symmetric about $\theta$ for each $\beta \in (0,1].$ \label{bcsymmetry} \end{proposition} The proof appears in the Appendix A. Proposition \ref{x0betasim} describes the $\beta$-symmetry points of $X.$ \begin{proposition} \label{x0betasim} Let $X$ be a $\beta$-symmetric random element in $\mathbb{E}$ and $x_0 \in \mathbb{E}$ such that $LD^{\beta'}(x_0,P) = \frac{1}{2}$ for every $0< \beta' \leq \beta.$ Then $x_0$ is a $\beta$-symmetry point. \end{proposition} The proof appears in the Appendix A. \textbf{P. 3} establishes that the local simplicial depth is monotone relative to the deepest point. Several auxiliary results that appear in the Appendix A must be stated before proving this property. \begin{P3*} \label{propP3} Let $\mathbb{E}$ be a separable Banach space and $\mathbb{E}'$ the corresponding separable dual space. 
Let $X$ be a $C$-symmetric random element about $\theta$ with probability measure $P.$ Let $Q$ be a probability measure in $\mathbb{E}'$ independent of $P$ and assume that for every $f \in \mathbb{E}',$ $f(X)$ has a unimodal density function about $f(\theta)$ and fulfills \begin{equation} \label{desigualdadLema3Propiedad3} f_X(t) \geq 2 \frac{f_X(t+\lambda_{t}^{\beta})f_X(t-\lambda_{t}^{\beta})}{f_X(t+\lambda_{t}^{\beta})+f_X(t-\lambda_{t}^{\beta})} \ \forall t \in \mathbb{R}, \mbox{ } Q-a.s. \end{equation} Then, for every $x\in \mathbb{E}$ and $\beta \in (0,1],$ $$IDLD^{\beta}(x,P) \leq IDLD^{\beta}((1-t)\theta + xt,P) \ \ \ \mbox{ for every } t \in [0,1].$$ \end{P3*} The proof appears in the Appendix A. \begin{remark} It is easy to see that Inequality (\ref{desigualdadLema3Propiedad3}) holds for the standard normal distribution. Hence, the projections of a Gaussian process fulfill \textbf{P. 3.} \end{remark} In what follows, we show that IDLD vanishes at infinity, under mild regularity conditions. \begin{P4*} \label{vanishinf} Assume that $$\sup_{\|u\|=1} Q\left(\left\{ f: |f(u)| \leq \epsilon \right\}\right)=O(\epsilon),$$ where $O(\epsilon)$ is a function such that $\lim_{\epsilon\rightarrow0}O(\epsilon)=0.$ Then $$ \displaystyle \lim_{||x|| \to + \infty} IDLD^{\beta}(x,P) = 0.$$ \end{P4*} The proof appears in the Appendix A. Proposition \textbf{P. 5} shows that $IDLD^{\beta}(x,P)$ is continuous as a function of $x.$ \begin{P5*} \label{P5} Let $X \in \mathbb{E}$ be a continuous random element and $\beta \in (0,1].$ Then $IDLD^{\beta}(\cdot,P): \mathbb{E} \rightarrow \mathbb{R}$ is continuous. \end{P5*} The proof appears in the Appendix A. Finally, we prove that $IDLD^{\beta}(x,P)$ is continuous as a functional of $P.$ \begin{P6*} For every $\beta \in (0,1]$ and $x \in \mathbb{E},$ $IDLD^{\beta}(x,\cdot)$ is continuous as a functional of $P.$ \end{P6*} The proof appears in the Appendix A. 
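The defining identity \eqref{bsymm} of Definition \ref{bsimetrica} is easy to check numerically. The sketch below (Python; the helper names `width` and `half_mass` are ours, not from the paper) verifies that for the standard normal distribution, which is $C$-symmetric about $0$, the identity $F(\theta+\lambda_{\theta}^{\beta'})-F(\theta)=\beta'/2$ holds at $\theta=0$ for any locality level, while it fails at the off-center point $\theta=1$, where the neighborhood collects more mass on the side of the mode.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def width(F, theta, beta):
    # Neighborhood width lambda^beta_theta of Definition (localityparam):
    # smallest lambda with F(theta+lambda) - F(theta-lambda) = beta.
    return brentq(lambda lam: F(theta + lam) - F(theta - lam) - beta,
                  1e-12, 50.0)

def half_mass(F, theta, beta):
    # Left-hand side of the beta-symmetry condition (bsymm); it equals
    # beta/2 exactly when theta is a beta-symmetry point.
    lam = width(F, theta, beta)
    return F(theta + lam) - F(theta)

F = norm.cdf  # N(0,1) is C-symmetric about 0, hence beta-symmetric there
for b in (0.1, 0.4, 0.8):
    assert abs(half_mass(F, 0.0, b) - b / 2) < 1e-8   # symmetry point
    assert half_mass(F, 1.0, b) < b / 2               # theta = 1 is not
```

For a bimodal mixture as in Figure \ref{betaSimLindo}, the same check reproduces the phenomenon described there: the symmetry identity holds near each mode only up to the locality level at which the neighborhood starts to feel the other mode.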
\section{Empirical Version and Asymptotic Results} In this section we introduce the empirical counterpart of the IDLD and give the main asymptotic results. First of all, recall the definition given by Paindaveine and Van Bever (2013) of the empirical local one-dimensional simplicial depth. Let $ELD_{S}^{\beta(k)} (\cdot,F_n) : \mathbb{R} \longrightarrow \left[ 0,1/2 \right].$ Then \begin{equation*} ELD_{S}^{\beta(k)} (z,F_n) = \frac{2}{\beta(k)^2} \left[F_n(z + \lambda_{z,n}^{\beta(k)} ) - F_n(z) \right] \left[F_n(z) - F_n(z-\lambda_{z,n}^{\beta(k)} )\right], \end{equation*} where \begin{equation*} \lambda_{z,n}^{\beta(k)} = \inf \left\{ \lambda > 0 : F_n(z + \lambda ) - F_n(z - \lambda ) = \beta(k) \right\}. \end{equation*} Remark \ref{propiedades chiquitas} entails the well-definedness of the empirical neighborhood width $\lambda_{z,n}^{\beta(k)}.$ \begin{remark} \label{propiedades chiquitas} Let $\beta \in (0,1]$ and $X_1, \dots, X_n$ be a random sample of iid variables with distribution $F.$ Given $z \in \mathbb{R},$ put, for each $1 \leq j \leq n,$ $d_j(z) = |X_j - z|$ and let $d^{j}(z)$ denote the $j$th order statistic of $d_1(z), \dots, d_n(z).$ Let $k = [n \beta],$ where $[\cdot]$ is the integer part function. It is clear that $ \# \left\{ X_j \ : \ X_j \in [z-d^{k}(z), z+d^{k}(z)] \right\} = k.$ Hence, $F_n(z+d^{k}(z)) - F_n(z-d^{k}(z)) = \frac{[n \beta]}{n} = \beta(k),$ and so the empirical neighborhood width is $\lambda_{z,n}^{\beta(k)} = d^{k}(z).$ \end{remark} Then the empirical counterpart of IDLD is given as follows. \begin{defn} Let $\beta \in (0,1],$ $X: \Omega \to \mathbb{E}$ be a continuous random element and $X_{1}, \dots, X_{n}$ a random sample with the same distribution as $X.$ Let $k=[n \beta].$ For each $x \in \mathbb{E}$ and $f \in \mathbb{E}',$ define \begin{equation} \label{lambdaempirico} \lambda_{f(x),n}^{\beta(k)} = \inf \left\{ \lambda > 0 : F_{f,n}(f(x) + \lambda) - F_{f,n}(f(x) - \lambda) = \frac{k}{n} \right\}. 
\end{equation} Let $\beta(k) = \frac{k}{n}.$ The empirical version of IDLD at locality level $\beta(k)$ is \begin{equation} \label{ELIDD} EIDLD^{\beta(k)}(x,P) = IDLD^{\beta(k)}(x,P_n). \end{equation} \end{defn} In order to establish the uniform strong convergence of the one-dimensional simplicial local depth, the following lemmas must be proved first. \begin{lemma} Let $X$ be an absolutely continuous random variable with distribution $F,$ and let $X_1, \dots, X_n$ be iid random variables, also with distribution $F$. Let $x_p = F^{-1}(p)$ be the $p$-quantile of $F,$ $p \in (0,1),$ and $Q_{p,n}$ the $p$-quantile of $F_n,$ the empirical cumulative distribution function of $X_1, \dots, X_n.$ Then, \begin{itemize} \item[(i)] $Q_{p,n} = X_{ \left([np] +1 \right) }.$ \item[(ii)] $| F_n(Q_{p,n}) - F(x_p) | \leq \frac{1}{n} \ \forall \ p \in (0,1). $ \item[(iii)] $ | F(Q_{p,n}) - F(x_p) | \leq ||F_n - F ||_{\infty} + \frac{1}{n}.$ \end{itemize} \end{lemma} \begin{lemma} \label{desigualdadLDS} Let $X_1, \dots, X_n$ be a real random sample with cumulative distribution function $F.$ Let $\beta \in (0,1]$ and $z \in \mathbb{R}.$ Then, \begin{equation} \left| ELD_{S}^{\beta(k)}(z,F_n) - LD_{S}^{\beta}(z,F) \right| \leq \frac{1}{2} \left( 1 - \left( \frac{\beta(k)}{\beta} \right)^2 \right) + \frac{2}{\beta^2} \left(\frac{8}{n} + 4 ||F_n - F||_{\infty} \right). \label{desguniv} \end{equation} \end{lemma} The proof appears in the Appendix B. The theorems below establish the uniform strong convergence of the empirical counterpart of the univariate simplicial local depth to its population counterpart. 
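The order-statistic characterization of Remark \ref{propiedades chiquitas} turns directly into code: the empirical neighborhood width is the $k$-th smallest distance $d^{k}(z)$ with $k=[n\beta]$, so no root finding is needed. The following Python sketch (the function name is ours) computes $ELD_S^{\beta(k)}$ for a univariate sample.

```python
import numpy as np

def eld_simplicial(z, xs, beta):
    # Empirical univariate local simplicial depth ELD_S^{beta(k)}(z, F_n).
    # By the Remark, the empirical neighborhood width equals the k-th order
    # statistic of the distances |X_j - z|, with k = [n * beta].
    xs = np.asarray(xs, dtype=float)
    n = len(xs)
    k = max(int(n * beta), 1)
    lam = np.sort(np.abs(xs - z))[k - 1]        # lambda_{z,n}^{beta(k)} = d^k(z)
    beta_k = k / n
    Fn = lambda t: np.mean(xs <= t)             # empirical cdf F_n
    return (2.0 / beta_k**2) * (Fn(z + lam) - Fn(z)) * (Fn(z) - Fn(z - lam))

rng = np.random.default_rng(1)
xs = rng.standard_normal(1000)
# Near the center of a unimodal sample the local depth is close to its
# maximum value 1/2; deep in the tail it is close to 0.
assert eld_simplicial(0.0, xs, 0.3) > 0.4
assert eld_simplicial(4.0, xs, 0.3) < 0.1
```

Each evaluation costs one sort of the $n$ distances, which is consistent with the low computational cost emphasized above.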
\begin{theorem} Let $\mathbb{E}$ be a separable Banach space with separable dual space $\mathbb{E}'.$ Let $X_1, \dots, X_n$ be a random sample of elements of $\mathbb{E}$ with probability measure $P$ and let $\beta \in (0,1].$ Then, we have \begin{enumerate}[(a)] \item \begin{equation} E \left( \sup_{x \in \mathbb{E}} \Big| ELD_S^{\beta(k)}(f(x),P_{n,f}) - LD_S^{\beta}(f(x),P_f) \Big| \right) \xrightarrow[n \to + \infty]{} 0 \ \mbox{ for every } \ f\in \mathbb{E}'. \end{equation} \item \begin{equation} E \left( \sup_{x \in \mathbb{E}} \Big| EIDLD^{\beta(k)}(x,P_n) - IDLD^{\beta}(x,P) \Big| \right) \xrightarrow[n \to + \infty]{} 0. \end{equation} \end{enumerate} \end{theorem} The proof appears in Appendix B. \begin{theorem} \label{consistenciactp} Let $X$ be a random element of a separable Banach space $\mathbb{E}$ with associated probability measure $P$ such that $E(f(X)^2) < +\infty \ \mbox{ for every } \ f \in \mathbb{E}'.$ Let $X_1, \dots, X_n$ be a random sample following the same distribution as $X$ and let $\beta \in (0,1].$ Then, \begin{equation*} P \left( \sup_{x \in \mathbb{E}} \Big| EIDLD^{\beta(k)}(x,P_n) - IDLD^{\beta}(x,P) \Big| \xrightarrow[n \to +\infty]{} 0 \right) = 1. \end{equation*} \end{theorem} The proof appears in Appendix B. \section{Local Depth Regions} In this section we define the \textit{$\alpha$ local depth inner region at locality level $\beta,$} which will be instrumental in applications of local depth functions. Ideally, these central regions should be invariant under changes of the coordinate system and nested. We also study their asymptotic behavior under mild regularity conditions. Denote by $LD^{\beta}$ a local depth measure and $ELD^{\beta}$ its empirical counterpart. In particular, one can consider the integrated dual local depth defined in Section \ref{IDLD}.
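The uniform convergence in the theorems above can be illustrated numerically in the simplest setting. For $F$ uniform on $(0,1)$ and any $z$ with $[z-\beta/2, z+\beta/2] \subset [0,1],$ the population width is $\lambda_z = \beta/2$ and hence $LD_S^{\beta}(z,F) = (2/\beta^2)(\beta/2)(\beta/2) = 1/2.$ A small Monte Carlo sketch (ours, not part of the paper's routines):

```python
import numpy as np

def eld_s(z, x, beta):
    # empirical local simplicial depth with beta(k) = k/n, k = [n*beta]
    n = x.size
    k = int(np.floor(n * beta))
    lam = np.sort(np.abs(x - z))[k - 1]
    F_n = lambda t: np.mean(x <= t)
    return (2.0 * n * n / (k * k)) * (F_n(z + lam) - F_n(z)) * (F_n(z) - F_n(z - lam))

rng = np.random.default_rng(0)
beta = 0.4
grid = np.linspace(beta / 2, 1 - beta / 2, 21)   # interior points of [0,1]
sup_err = {}
for n in (100, 10000):
    x = rng.uniform(size=n)
    sup_err[n] = max(abs(eld_s(z, x, beta) - 0.5) for z in grid)
```

The supremum error over the interior grid shrinks as $n$ grows, in line with the uniform strong consistency.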
\begin{defn} Let $\mathbb{E}$ be a separable Banach space and let $X: \Omega \rightarrow \mathbb{E}$ be a random element with associated probability measure $P.$ Fix $\beta \in (0,1],$ a locality level, and $\alpha \in [0,\frac{1}{2}].$ The \textit{local inner region at locality level} $\beta$ \textit{of level} $\alpha$ is defined to be \begin{equation} \label{ldregion} R_{\beta}^{\alpha} = \left \{ x \in \mathbb{E}: \ LD^{\beta}(x,P) \geq \alpha \right \}. \end{equation} \end{defn} Let $X_1, \dots, X_n$ be a random sample of elements of $\mathbb{E}.$ Then the empirical counterpart of $R_{\beta}^{\alpha}$ is $$ R_{n}^{\alpha} = R_{n,\beta}^{\alpha} = \left \{ x \in \mathbb{E}: \ ELD^{\beta}(x,P_n) \geq \alpha \right \}. $$ Throughout this section the locality level $\beta$ will remain fixed; hence we write $R^{\alpha}$ (respectively, $R_n^{\alpha}$) for $ R_{\beta}^{\alpha}$ (respectively, $R_{n,\beta}^{\alpha}$) when no ambiguity is possible. \begin{remark} \label{PropiedadesRegionProfundidad1} If $\mathbb{E}$ is a finite dimensional space, then $R^{\alpha}$ is invariant under orthogonal transformations. \end{remark} \begin{remark} \label{PropiedadesRegionProfundidad2} If $\alpha_1 \leq \alpha_2,$ then $R_{\beta}^{\alpha_2} \subset R_{\beta}^{\alpha_1}.$ \end{remark} Theorem \ref{consistRalfa} shows that the empirical $\alpha$ local depth inner region at locality level $\beta$ is strongly consistent with its corresponding population counterpart, under mild regularity conditions. \begin{theorem} \label{consistRalfa} Let $\mathbb{E}$ be a separable Banach space and let $X: \Omega \rightarrow \mathbb{E}$ be a random element with associated probability measure $P.$ Assume that \begin{enumerate}[a)] \item $ \displaystyle LD^{\beta}(x,P) \xrightarrow[ \| x \| \to +\infty]{} 0.$ \item $ \displaystyle \sup_{x \in \mathbb{E}} \left| ELD^{\beta}(x,P_n) - LD^{\beta}(x,P) \right| \xrightarrow[n \to +\infty]{} 0$ a.s.
\end{enumerate} Then, for every $\epsilon > 0,$ $0 < \delta < \epsilon,$ $0 < \alpha$ and every sequence $\alpha_n \rightarrow \alpha$: \begin{enumerate}[(I)] \item There exists an $n_0 \in \mathbb{N}$ such that, for all $n \geq n_0,$ $R^{\alpha + \epsilon} \subset R_{n}^{\alpha_n + \delta} \subset R_{n}^{\alpha_n} \subset R_{n}^{\alpha_{n} - \delta} \subset R^{\alpha - \epsilon}.$ \item If $P \left( \left\{ x \in \mathbb{E}: \ LD^{\beta}(x,P) = \alpha \right\} \right) = 0,$ then $R_{n}^{\alpha_n} \xrightarrow[n \to +\infty]{} R^{\alpha}$ a.s. \end{enumerate} \end{theorem} The proof appears in Appendix C. \section{A Local-Depth Based Clustering Procedure}\label{LDC} In this section we introduce a centroid-based clustering procedure based on local depths (LDC). We propose the two-stage partition method described below. The R routines needed to compute the IDLD appear in Appendix D. Let $X$ be a random element in a separable Banach space $\mathbb{E},$ with distribution $P.$ \begin{itemize} \item[Step 1:] Core clustering region. \begin{itemize} \item[a)] Consider the $\alpha$ local depth inner region at locality level $\beta,$ $R_{\beta}^{\alpha},$ defined in Equation (\ref{ldregion}). \item[b)] Consider a partition of $R_{\beta}^{\alpha}$ into $k$ clusters, $\tilde{C}_1^{\alpha}, \dots,\tilde{C}_k^{\alpha},$ such that $R_{\beta}^{\alpha}= \bigcup_{i=1}^k \tilde{C}_i^{\alpha},$ and $P(\tilde{C}_i^{\alpha} \cap \tilde{C}_j^{\alpha})=0,$ for $i \neq j.$ \end{itemize} \item[Step 2:] Final clustering allocation. Based on the initial clustering configuration for the points in $R_{\beta}^{\alpha},$ proceed to the final clustering allocation following a minimum distance rule, i.e.
$$C_i^{\alpha}=\{ x \in \mathbb{E}: d(x, \tilde{C}_i^{\alpha}) \leq d(x, \tilde{C}_j^{\alpha}) \mbox{ for every } j \neq i \},$$ where $d(x, \tilde{C}_j^{\alpha})=\inf_{y \in \tilde{C}_j^{\alpha}} d(x,y).$ \end{itemize} The main idea of the proposal is to determine the center of each cluster as a region of the space rather than a single point. It is well known that there is no ``one size fits all'' clustering procedure, and that the choice of clustering procedure relies heavily on the underlying distribution. Our aim is to have centers with a flexible shape, allowing a better capture of the cluster distribution. Typically, center-based clustering proposals have very good performance under spherical distributions. More flexibility in the shape of the central region should be reflected in a better performance at detecting the true clustering structure under a wide range of distributions, including elliptical distributions. In addition, since depth measures have a close relation with robustness, the core clustering regions are expected to be resistant to the presence of outliers. In \textbf{Step 1} part b), any clustering procedure can be considered; for the sake of simplicity, in what follows we use the classical $k$-means algorithm. If the number of clusters, $k,$ is not given beforehand, it can be estimated using any procedure existing in the literature. The empirical counterpart of the proposal is given in a straightforward way, employing a classical plug-in procedure. Let $X_1,\dots,X_n$ be iid observations in $\mathbb{E},$ a separable Banach space, with a $k$-cluster structure. Denote by $R_n^{\alpha}$ the empirical $\alpha$ local depth inner region at locality level $\beta,$ and let $ \tilde{C}_{n,1}^{\alpha}, \dots,\tilde{C}_{n,k}^{\alpha}$ denote the initial partition obtained in \textbf{Step 1} part b).
The final allocation is given by $$C_{n,i}^{\alpha}=\{x \in \mathbb{E} : d(x, \tilde{C}_{n,i}^{\alpha}) \leq d(x, \tilde{C}_{n,j}^{\alpha}) \mbox{ for every } j \neq i \},$$ where $d(x, \tilde{C}_{n,j}^{\alpha})=\min_{y \in \tilde{C}_{n,j}^{\alpha}} d(x,y).$ \begin{remark} The core observations of the clustering procedure can be selected considering any local depth, as long as the procedure is consistent. \end{remark} \section{Simulations and Real Data Examples} \label{simul} In this section we numerically analyze the performance of the clustering procedure introduced in Section \ref{LDC}. Simulations have been carried out both in the finite and infinite dimensional settings. In addition, real data examples are analyzed. The LDC procedure can be implemented using not only the IDLD but also other local depth proposals available in the literature. \subsection{Simulations: Multivariate data} The main aim of this section is to evaluate the performance of our clustering proposal under a wide range of clustering configurations. Specifically, we will analyze cases where the data present sparsity, outliers or unbalanced group sizes. To this end, we will work under fourteen different scenarios. The original data distributions were proposed by Witten and Tibshirani (2010) and extended by Kondo et al. (2016). Our proposal will be compared against several well-known clustering procedures, which are briefly described below. In all cases the data have a three-group structure, and each group has 300 observations. The data are generated as follows. Model 1: The data are spherically generated, following $N(\mu_i, \Sigma),$ for $i=1,2,3,$ with centers $(-3,-3,0),(0,0,0),(3,3,0),$ and the covariance matrix is the identity matrix. Model 2: The data are ellipsoidally generated, following $N(\mu_i, \Sigma),$ for $i=1,2,3,$ with centers $(-3,-3,0),(0,0,0),(3,3,0),$ and covariance matrix $\Sigma=diag(3,0.25,1)$.
In these two models, the first two variables are informative while the last one is noise. Models 3 and 4 are five-dimensional datasets. The first three variables have the same distribution as in Model 1 (respectively, Model 2); the remaining variables are two independent noise variables with distribution $N(0,1).$ We then consider two different contamination settings. In each of them we add five outliers by replacing a single coordinate with a value generated from the uniform distribution on the interval $[25,25.01]$. In the first setting, for Models 5-8, the contamination is done by replacing the first coordinate (which is an informative variable) of the first five observations of the first cluster, while the rest of the distribution remains as in Models 1-4. In Models 9-12, the contamination has the same distribution but is situated in the last coordinate, which is a non-informative variable. The two remaining models, 13 and 14, have clusters of unbalanced sizes. They follow the same distributions as Models 1 and 2, but instead of the clusters being of equal size, the first cluster has $60\%$ of the observations, while the two remaining clusters have $20\%$ each. The benchmark clustering procedures are: \begin{itemize} \item The $k$-means algorithm, with ten random initializations. \item The sparse $k$-means clustering procedure (SKM), introduced by Witten and Tibshirani (2010). The $L_1$-bound tuning parameter is chosen as suggested in the literature ($s=3, 7$), and five random initializations are considered. \item The robust and sparse $k$-means clustering procedure (RSKM), proposed by Kondo et al. (2016). Two tuning parameters must be set.
Both of them have been set as suggested in \cite{KSZ16}: the parameter that corresponds to the $L_1$ norm is $L_1=4$ and the trimming proportion is $0.1.$ \item The model-based clustering procedure (MCLUST) proposed by Fraley and Raftery (2002, 2009), designed to cluster mixtures of $G$ normal distributions. \end{itemize} SKM is designed to cluster observations in a high dimensional setting, with a low proportion of clustering informative variables. RSKM is a robust extension of SKM. The LDC introduced in Section \ref{LDC} has been implemented using three definitions of local depth; in every case the parameters were chosen following Hennig \cite{H07}, and the results were very stable. \begin{itemize} \item The simplicial local depth procedure (LDCS) introduced by Agostinelli and Romanazzi (2011). We used the R package \textit{localdepth}; the threshold value for the evaluation of the local depth, $\tau,$ was calculated with the \textit{quantile.localdepth} function, as suggested in the package, and the quantile order of the statistic was set to $probs=0.1.$ \item The local version of depth at locality level $\beta$ (LDCPV) according to the proposal of Paindaveine and Van Bever (2013), using the R package \textit{DepthProc}. We set $\beta=0.2.$ \item The integrated dual local depth at locality level $\beta$ (LDCI) introduced in Section \ref{IDLD}. As with LDCPV, we set $\beta=0.2$, and the number of random projections is $N=50,$ drawn from the standard normal distribution. Routines are available in Appendix E of the Supplementary Material. \end{itemize} The parameter $\alpha$ determines the proportion of the data that the core regions of the clusters will contain. If this value is very small, the procedure behaves very similarly to $k$-means and is not able to capture the shape of the clusters; if it takes high values, the core regions will include observations with only moderate local depth, which can lead to errors in the assignments.
For these reasons we suggest taking values between $0.15$ and $0.45$. To set this parameter we performed a sensitivity analysis, following the resampling ideas proposed by Hennig \cite{H07}, from which we could see that in all cases the method is stable; since in most cases $\alpha = 0.4$ showed slightly better performance, we kept this value throughout the study. We performed $M = 500$ replicates for each model. There is no commonly accepted criterion for evaluating the performance of a clustering procedure. Nonetheless, since we are dealing with synthetic datasets, we know the real label of each observation, hence in these cases we may use the Correct Classification Rate (CCR). We denote the original clusters by $k = 1, \dots , K$. Let $y_1, \dots, y_n$ be the group label of each observation, and $\widehat{y}_1, \dots, \widehat{y}_n$ the class labels assigned by the clustering algorithm. Let $\widetilde{\Sigma}$ be the set of permutations of $\{1,\dots , K\}$. Then the CCR is given by: \begin{equation} \label{CCR} CCR= \max_{\sigma \in \widetilde{\Sigma}} \frac{1}{n} \sum_{i=1}^n \mathcal{I}_{\{y_i = \sigma(\widehat{y}_i)\}}. \end{equation} The results of the simulation are exhibited in Table \ref{simulTS}. As expected, all the clustering procedures have an exceptional performance for Models 1 and 3, where all the clusters are spherical and without outliers. For Models 2 and 4, where the clusters have an elliptical distribution, MCLUST has an outstanding performance and it is clear that LDC (with any local depth measure) performs better than the other three alternatives. In Models 5 to 12, since $k$-means, SKM and MCLUST are nonrobust procedures, they fail in the classification of the observations: typically the five outliers make up one group and the cluster with mean $(0,\dots,0)$ is usually split into two clusters.
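The CCR can be computed by scanning the $K!$ relabelings of the estimated clusters. A minimal sketch (ours; the paper does not provide this routine), with labels coded $1,\dots,K$:

```python
from itertools import permutations
import numpy as np

def ccr(y_true, y_pred, K):
    """Correct Classification Rate: the proportion of correctly
    classified observations under the best relabeling of the K
    cluster labels (labels are assumed to be 1..K)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    best = 0.0
    for sigma in permutations(range(1, K + 1)):
        relabeled = np.asarray([sigma[label - 1] for label in y_pred])
        best = max(best, float(np.mean(relabeled == y_true)))
    return best
```

For instance, with true labels $(1,1,2,2,3,3)$ and estimated labels $(2,2,1,3,3,3),$ the best relabeling yields a CCR of $5/6.$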
LDC and RSKM are based on more robust clustering criteria, hence both methods have a good performance; RSKM seems to perform better under spherical distributions while LDC performs better under elliptical distributions. It is clear that LDC has a good performance for Models 1 to 12, and that the choice of the local depth is not crucial. Nonetheless, when cluster sizes are unbalanced the only criteria able to correctly detect the cluster structure are MCLUST and LDC considering the integrated dual local depth. It is clear that LDC combined with the other two proposals of local depths is not able to detect the center of the clusters. The remainder of the clustering procedures had a good performance on the spherical case but failed on the elliptical case. In summary, LDCI is the only clustering procedure versatile enough to detect clusters under adverse situations (sparse data, outliers and unbalanced cluster size). \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Mean CCR for each clustering criterion and distribution configuration} \label{simulTS} \centering \begin{tabular}{c|ccccccc} \textbf{Model} & \textbf{$k$-means} & \textbf{SKM} & \textbf{RSKM} & \textbf{MCLUST} & \textbf{LDCS} & \textbf{LDCPV} & \textbf{LDCI} \\ \hline \textbf{1}& $0.98$ & $0.98$ & $0.98$ & $0.98$ & $0.96$ & $0.95$ & $0.97$ \\ \textbf{2}& $0.87$ & $0.80$ & $0.86$ & $0.99$ & $0.91$ & $0.87$ & $0.91$ \\ \textbf{3}& $0.98$ & $0.98$ & $0.98$ & $0.98$ & $0.96$ & $0.96$ & $0.97$ \\ \textbf{4}& $0.87$ & $0.80$ & $0.85$ & $0.99$ & $0.89$ & $0.90$ & $0.90$ \\ \textbf{5}& $0.66$ & $0.70$ & $0.96$ & $0.65$ & $0.95$ & $0.92$ & $0.95$ \\ \textbf{6}& $0.65$ & $0.62$ & $0.84$ & $0.66$ & $0.90$ & $0.85$ & $0.87$ \\ \textbf{7}& $0.67$ & $0.70$ & $0.96$ & $0.65$ & $0.94$ & $0.94$ & $0.95$\\ \textbf{8}& $0.65$ & $0.62$ & $0.84$ & $0.66$ & $0.88$ & $0.89$ & $0.87$ \\ \textbf{9}& $0.65$ & $0.68$ & $0.98$ & $0.65$ & $0.95$ & $0.94$ & $0.95$ \\ \textbf{10}& $0.65$ & $0.65$ & $0.86$ & $0.66$ & $0.91$
& $0.84$ & $0.89$ \\ \textbf{11}& $0.65$ & $0.67$ & $0.98$ & $0.65$ & $0.95$ & $0.86$ & $0.96$ \\ \textbf{12}& $0.65$ & $0.66$ & $0.85$ & $0.66$ & $0.88$ & $0.95$ & $0.90$ \\ \textbf{13}& $0.97$ & $0.98$ & $0.98$ & $0.97$ & $0.54$ & $0.46$ & $0.96$ \\ \textbf{14}& $0.74$ & $0.70$ & $0.69$ & $0.98$ & $0.52$ & $0.43$ & $0.82$ \\ \end{tabular} \end{table} In what follows we compare the computational times for the three local depth measures. The simulations were based on data generated according to Model 3, but instead of having three noise variables, we added $p-2$ ($p=5,35,65$) independent normal noise variables centered at the origin with unit standard deviation. We also considered different sample sizes: $n=300, 2100, 3900$ and $5700.$ For IDLD, $50$ random directions were generated. Since the computational time grows quickly as the dimension increases, we only performed $M=50$ replicates under each scenario. \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Mean computation time for LDS, LDPV and IDLD.} \label{compmultiv} \centering \begin{tabular}{cc|cccc} \textbf{p} & & \multicolumn{4}{c}{\textbf{n}} \\ & & $300$ & $2100$ & $3900$ & $5700$ \\ \hline \textbf{5} & LDS & $0.785$ & $38.27$ & $131.65$ & $280.66$ \\ & LDPV & $4.236$ & $100.08$ & $292.67$ & $624.91$ \\ & IDLD & $0.397$ & $20.74$ & $73.74$ & $160.43 $ \\ \hline \textbf{35}& LDS & $1.770$ & $86.88$ & $299.03$ & $638.38$ \\ & LDPV & $7.840$ & $200.94$ & $629.07$ & $1363.97$ \\ & IDLD & $0.402$ & $20.68$ & $74.41$ & $160.29 $ \\ \hline \textbf{65}& LDS & $3.788$ & $184.92$ & $641.01$ & $1368.31$ \\ & LDPV & $10.934$ & $288.79$ & $982.79$ & $2094.89$ \\ & IDLD & $0.406$ & $20.66$ & $75.07$ & $164.40$ \\ \end{tabular} \end{table} From Table \ref{compmultiv} we can see that in every case IDLD is the fastest procedure; moreover, it is not affected by the dimension of the dataset, while the computational effort required by LDS and LDPV grows dramatically as $p$ increases.
LDPV is overall the slowest procedure. Even though all the procedures demand more time as the sample size grows, IDLD is the one with the least pronounced growth rate. \subsection{Simulations: Multivariate functional data} In this section we present the results of a simulation study for multivariate functional data; for such a multivariate setting, there are scarcely any clustering procedures. We will replicate the simulation done by Schmutz et al. (2017). They present three different scenarios. In every case, the data is bivariate. Model A. Three groups, each of them with $100$ observations. \begin{table*}[!ht] \centering \begin{tabular}{cc} Group 1: & $X_1(t)= \sin((10+a_1)t)+(1+a_1)+e_1(t)$ \\ & $X_2(t)= \sin((5+a_2)t)+(0.5+a_2)+e_2(t)$ \\ Group 2: & $X_1(t)= \sin((5+a_2)t)+(0.5+a_2)+e_2(t)$ \\ & $X_2(t)= \sin((15+a_1)t)+(1+a_1)+e_1(t)$ \\ Group 3: & $X_1(t)= \sin((15+a_1)t)+(1+a_1)+e_1(t)$ \\ & $X_2(t)= \sin((10+a_1)t)+(1+a_1)+e_1(t).$ \\ \end{tabular} \end{table*} Here $a_1 \sim N(0,0.2),$ $a_2 \sim N(0,0.3),$ $e_1(t)$ is white noise with variance $|\frac{a_1}{2}|,$ and $e_2(t)$ is white noise with variance $|\frac{a_2}{2}|.$ The curves are generated at $101$ equidistant points in the interval $[0,1].$ Model B. Four groups, each of them with $250$ observations. \begin{table*}[!ht] \centering \begin{tabular}{cc} Group 1: & $X_1(t)= U+(1-U)h_1(t)+e(t)$ \\ & $X_2(t)= U+(0.5-U)h_1(t)+e(t)$ \\ Group 2: & $X_1(t)= U+(1-U)h_2(t)+e(t)$ \\ & $X_2(t)= U+(0.5-U)h_2(t)+e(t)$ \\ Group 3: & $X_1(t)= U+(0.5-U)h_1(t)+e(t)$ \\ & $X_2(t)= U+(1-U)h_2(t)+e(t)$ \\ Group 4: & $X_1(t)= U+(0.5-U)h_2(t)+e(t)$ \\ & $X_2(t)= U+(1-U)h_1(t)+e(t).$ \\ \end{tabular} \end{table*} Here $t \in [1,21],$ $U \sim U(0,0.1),$ and $e(t)$ is white noise independent of $U$ with variance $0.25.$ The functions are $h_1(t)=(6-|t-7|)_+$ and $h_2(t)=(6-|t-15|)_+,$ where $(\cdot)_+$ denotes the positive part. The curves are generated at $101$ equidistant points in the interval $[1,21].$ Model C.
Four groups, each of them with $250$ observations. \begin{table*}[!ht] \centering \begin{tabular}{cc} Group 1: & $X_1(t)= U+(1-U)h_1(t)+e(t)$ \\ & $X_2(t)= U+(0.5-U)h_1(t)+e(t)$ \\ Group 2: & $X_1(t)= U+(1-U)h_2(t)+e(t)$ \\ & $X_2(t)= U+(0.5-U)h_2(t)+e(t)$ \\ Group 3: & $X_1(t)= U+(1-U)h_1(t)+e(t)$ \\ & $X_2(t)= U+(1-U)h_1(t)+e(t)$ \\ Group 4: & $X_1(t)= U+(0.5-U)h_2(t)+e(t)$ \\ & $X_2(t)= U+(0.5-U)h_1(t)+e(t).$ \\ \end{tabular} \end{table*} Here, $t \in [1,21],$ while $U, e(t), h_1$ and $h_2$ are defined as before. The curves are generated at $101$ equidistant points in the interval $[1,21].$ As in the original paper, the estimated partition will be compared with the theoretical one via the Adjusted Rand Index (ARI), computed with the function \textit{adjustedRandIndex} from the mclust R package. For each model, 50 replications were carried out. Schmutz et al. (2017) report the ARI for several settings of their proposal, and also for \textit{funclust} (2014) as well as $kmeans$-$d_1$ and $kmeans$-$d_2,$ two proposals introduced by Ieva et al. (2013). In Table \ref{simulmultiFDari} we present the maximum ARI obtained by Schmutz et al. together with those of the remaining procedures. It is clear that LDCI outperforms by far the rest of the proposals, since it does not misclassify any observation throughout the simulation study. \begin{table}[!ht] \renewcommand{\arraystretch}{1.3} \centering \caption{ARI for different clustering procedures for multivariate functional data.} \label{simulmultiFDari} \begin{tabular}{c|ccc} & \textit{Model A} & \textit{Model B} & \textit{Model C} \\ \hline LDCI & $1$ & $1$ & $1$\\ Best Schmutz & $0.96$ & $0.92$ & $0.80$\\ funclust & $0.23$ & $0.36$ & $0.45$ \\ $kmeans-d_1$ & $0.90$ & $0.37$ & $0.32$ \\ $kmeans-d_2$ & $0.90$ & $0.37$ & $0.32$ \\ \end{tabular} \end{table} Computational results for functional data, considering synthetic and real examples, appear in Appendix D.
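For concreteness, the curves of Model A can be generated as follows; a minimal sketch (ours, in Python rather than the R routines of the Supplementary Material), under the assumption that the second argument of $N(\cdot,\cdot)$ denotes a variance and that the white noise is iid Gaussian over the grid:

```python
import numpy as np

def model_a(n_per_group=100, n_grid=101, seed=0):
    """Generate the Model A bivariate curves on 101 equidistant points
    of [0, 1]; returns X1, X2 (arrays of shape (3*n_per_group, n_grid))
    and the true group labels 1, 2, 3."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, n_grid)
    X1, X2, labels = [], [], []
    for group in (1, 2, 3):
        for _ in range(n_per_group):
            a1 = rng.normal(0.0, np.sqrt(0.2))            # a1 ~ N(0, 0.2)
            a2 = rng.normal(0.0, np.sqrt(0.3))            # a2 ~ N(0, 0.3)
            e1 = rng.normal(0.0, np.sqrt(np.abs(a1) / 2.0), n_grid)
            e2 = rng.normal(0.0, np.sqrt(np.abs(a2) / 2.0), n_grid)
            f10 = np.sin((10 + a1) * t) + (1 + a1) + e1
            f5 = np.sin((5 + a2) * t) + (0.5 + a2) + e2
            f15 = np.sin((15 + a1) * t) + (1 + a1) + e1
            if group == 1:
                x1, x2 = f10, f5
            elif group == 2:
                x1, x2 = f5, f15
            else:
                x1, x2 = f15, f10
            X1.append(x1); X2.append(x2); labels.append(group)
    return np.asarray(X1), np.asarray(X2), np.asarray(labels)
```

Models B and C are generated analogously from $U,$ $e(t),$ $h_1$ and $h_2.$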
\subsection{Real data examples for mixed-type datasets} Our aim in this section is to analyze the AEMET dataset, from the R library \textit{fda.usc}. This dataset contains series of daily summaries of $73$ Spanish weather stations selected for the period 1980-2009. We will analyze the clustering structure of the dataset formed by the following variables: the mean daily wind speed between 1980 and 2009 (which is a functional variable) and the geographic information of each station: longitude, latitude and altitude, which are real variables. Analyzing these variables together is relevant given that altitude influences the intensity of the winds. Although the sensors are located at the same height above the ground, it is possible that phenomena related to the climate of the region generate deformations in the wind intensity curves. To apply the LDC clustering criterion, we must be precise about the definition of the IDLD for datasets with these characteristics. Our proposal is to project the functional variable as we have done in Section 6.2 and the multivariate variables as in Section 6.1. Then, we join those two projections with equal weight and compute the IDLD. We look for two clusters; the parameters of the clustering procedure are $\alpha=0.15$ and $\beta=0.3,$ set upon visual inspection of the dataset. After performing the clustering analysis we obtained two groups: one of them corresponds to the coastal stations (orange) while the other one corresponds to the continental stations (red), as can be seen in Figure \ref{ClusterAllSpainSvarc}. This classification corresponds to the well-known fact that the wind speed is more constant over coastal areas, a fact exploited, for example, in the siting of wind farms. \begin{figure}[h] \centering \includegraphics[width=0.6 \textwidth]{ClustersAllSpain.pdf} \caption{Geographical position of each meteorological station.
The stations that belong to the coastal group are in orange, while the ones that belong to the continental group appear in red.} \label{ClusterAllSpainSvarc} \end{figure} Finally, to understand the conformation of the groups in an integral way, it is convenient to analyze the core regions for the mean wind speed and for the height of the stations. It can be seen that the stations corresponding to the continental core region are at higher altitudes and suffer more variability in wind intensity, as shown in the left and right panels of Figure \ref{KernelRawYAlturaSvarc}. The coastal stations, which are located in lower zones, have less daily variability and apparently greater wind intensity, as can be seen in the central and right panels of Figure \ref{KernelRawYAlturaSvarc}. \begin{figure}[h] \centering \includegraphics[width=4in]{KernelRawYAltura.pdf} \caption{Left: The red curves correspond to the core observations of the mean wind speed for the coast cluster. Center: The yellow curves are the core observations of the mean wind speed for the continental cluster. Right: Group conformation for the station heights; coast cluster in red and continental cluster in yellow. } \label{KernelRawYAlturaSvarc} \end{figure} \section{Final remarks} In this paper, we introduced a local depth measure, IDLD, suitable for data in a general Banach space and with a low computational burden. It is an exploratory data analysis tool, which can be used in any statistical procedure that seeks to study local phenomena. From the theoretical perspective, local depths are expected to be generalizations of a global depth measure; our proposal has this property. Additionally, they are expected to inherit good properties from global depths, a point that has been overlooked for local depths. Strong consistency results for the local depth and the local depth regions have been proved.
From the practical point of view, we explored the use of local depth measures in cluster analysis, introducing a simple clustering procedure. The first stage splits the $\alpha$ local inner region into $k$ groups. The remaining points are then assigned to the closest group of the $\alpha$ local inner region. The flexible shape of the groups formed by the points in the $\alpha$ local inner region produces flexible shapes for the groupings of the entire space. Computational experiments reflect this fact by showing an extraordinary performance under a wide range of clustering configurations.
\def\floatpagefraction{.95} \def\topfraction{.95} \def\bottomfraction{.95} \def\textfraction{.05} \def\dblfloatpagefraction{.95} \def\dbltopfraction{.95} \newcommand{\EFigure}[2]{\begin{figure} \centering \framebox[85mm]{\epsfxsize=80mm\epsfbox{#1}} \caption{\protect\small #2}\medskip\hrule \end{figure}} \newcommand{\REFigure}[2]{\begin{figure} \centering \framebox[85mm]{\epsfysize=80mm\rotate[r]{\epsfbox{#1}}} \caption{\protect\small #2}\medskip\hrule \end{figure}} \newcommand{\WEFigure}[2]{\begin{figure*} \centering \framebox[178mm]{\epsfxsize=170mm\epsfbox{#1}} \caption{\protect\small #2}\medskip\hrule \end{figure*}} \def\Jl#1#2{#1 {\bf #2},\ } \def\ApJ#1 {\Jl{Astroph. J.}{#1}} \def\CQG#1 {\Jl{Class. Quantum Grav.}{#1}} \def\DAN#1 {\Jl{Dokl. AN SSSR}{#1}} \def\GC#1 {\Jl{Grav. Cosmol.}{#1}} \def\GRG#1 {\Jl{Gen. Rel. Grav.}{#1}} \def\JETF#1 {\Jl{Zh. Eksp. Teor. Fiz.}{#1}} \def\JETP#1 {\Jl{Sov. Phys. JETP}{#1}} \def\JHEP#1 {\Jl{JHEP}{#1}} \def\JMP#1 {\Jl{J. Math. Phys.}{#1}} \def\NPB#1 {\Jl{Nucl. Phys. B}{#1}} \def\NP#1 {\Jl{Nucl. Phys.}{#1}} \def\PLA#1 {\Jl{Phys. Lett. A}{#1}} \def\PLB#1 {\Jl{Phys. Lett. B}{#1}} \def\PRD#1 {\Jl{Phys. Rev. D}{#1}} \def\PRL#1 {\Jl{Phys. Rev.
Lett.}{#1}} \def\al{&\nhq} \def\lal{&&\nqq {}} \def\eq{Eq.\,} \def\eqs{Eqs.\,} \def\beq{\begin{equation}} \def\eeq{\end{equation}} \def\bear{\begin{eqnarray}} \def\bearr{\begin{eqnarray} \lal} \def\ear{\end{eqnarray}} \def\earn{\nonumber \end{eqnarray}} \def\nn{\nonumber\\ {}} \def\nnv{\nonumber\\[5pt] {}} \def\nnn{\nonumber\\ \lal } \def\nnnv{\nonumber\\[5pt] \lal } \def\yy{\\[5pt] {}} \def\yyy{\\[5pt] \lal } \def\eql{\al =\al} \def\eqv{\al \equiv \al} \def\sequ#1{\setcounter{equation}{#1}} \def\dst{\displaystyle} \def\tst{\textstyle} \def\fracd#1#2{{\dst\frac{#1}{#2}}} \def\fract#1#2{{\tst\frac{#1}{#2}}} \def\Half{{\fracd{1}{2}}} \def\half{{\fract{1}{2}}} \def\e{{\,\rm e}} \def\d{\partial} \def\re{\mathop{\rm Re}\nolimits} \def\im{\mathop{\rm Im}\nolimits} \def\arg{\mathop{\rm arg}\nolimits} \def\tr{\mathop{\rm tr}\nolimits} \def\sign{\mathop{\rm sign}\nolimits} \def\diag{\mathop{\rm diag}\nolimits} \def\dim{\mathop{\rm dim}\nolimits} \def\const{{\rm const}} \def\eps{\varepsilon} \def\ep{\epsilon} \def\then{\ \Rightarrow\ } \newcommand{\toas}{\mathop {\ \longrightarrow\ }\limits } \newcommand{\aver}[1]{\langle \, #1 \, \rangle \mathstrut} \newcommand{\vars}[1]{\left\{\begin{array}{ll}#1\end{array}\right.} \def\suml{\sum\limits} \def\intl{\int\limits} \begin{document} \twocolumn[ \jnumber{1}{2011} \Title{Exact solution of the relativistic magnetohydrodynamic equations \yy in the background of a plane gravitational wave\yy with combined polarization\foom 1} \Author{A. A. Agathonov and Yu. G. Ignatyev} {Kazan State Pedagogical University, Mezhlauk str. 1, Kazan 420021, Russia} \Abstract {We obtain an exact solution of the self-consistent relativistic magnetohydrodynamic equations for an anisotropic magnetoactive plasma in the background of a plane gravitational wave metric (PGW) with an arbitrary polarization. 
It is shown that, in the linear approximation in the gravitational wave amplitude, only the $\mathbf{e_+}$ polarization of the PGW interacts with a magnetoactive plasma.} ] \Talk \section {Introduction} In a series of previous articles by one of the authors (see, e.g., [1--3]) a theory of {\it gravimagnetic shock waves} in a homogeneous magnetoactive plasma has been developed. The essence of this phenomenon is that a magnetized plasma in anomalously strong magnetic fields drifts under the action of gravitational waves (GWs) in the GW propagation direction, provided that the wave amplitude is large enough, and, on a certain wave front, the plasma velocity tends to the speed of light. Its energy density and the intensity of the frozen-in magnetic field then tend to infinity. In subsequent papers this effect was proved on the basis of the kinetic theory, and the possibility of using this mechanism as an effective tool for detecting GWs from astrophysical sources was also shown. However, in all the cited papers, a monopolarized gravitational wave was considered. In the present paper we consider the action of a GW with combined polarization on a magnetoactive plasma. \section{Self-consistent RMHD equations in a gravitational field} In [1], under the assumption that the dynamic velocity of the plasma ($v^i$) is equal to that of the electromagnetic field\footnote {The index ``$p$'' refers to the plasma, the index ``$f$'' to the field, and the comma denotes a covariant derivative. The dynamic velocity of any kind of matter is, by definition, a timelike unit eigenvector of the energy-momentum tensor of this matter \cite{ig-Synge}.} \beq \label{ig-eq_vel} \stackrel{p}{T}_{ij}v^j=\eps_p v_i;\quad \stackrel{f}{T} _{ij}v^j=\eps_f v_i, \quad (v,v)=1, \eeq a full self-consistent set of relativistic magneto\-hy\-dro\-dy\-na\-mic equations for a magnetized plasma in an arbitrary gravitational field has been obtained.
It consists of the Maxwell equations of the first group \beq \label{ig-1Maxwell} \stackrel{*}{F}\ \!\!\!^{ik}_{~~,k}=0 \eeq with the necessary and sufficient condition \bear \label{ig-I_inv} {\rm Inv}_1&=&F_{ij}F^{ij}=2H^2>0, \\ \label{ig-II_inv} {\rm Inv}_2&=&\stackrel{*}{F}_{ij}F^{ij}=0, \ear the Maxwell equations of the second group\footnote {$c = G = \hbar = 1$}: \beq \label{ig-2Maxwell} F^{ik}_{~~,k} = -4\pi J^i_{\rm{dr}} \eeq with a spacelike {\it drift current} \beq \label{ig-Jdr} J^i_{\rm{dr}}=-\frac{2F^{ik} \stackrel{p}{T}\ \!\!\!^{l}_{k,l}}{F_{jm}F^{jm}},\quad (J_{\rm{dr}},J_{\rm{dr}}) < 0 \eeq and a conservation law for the total energy-momentum of the system \beq \label{ig-Tik,k} T^{ik}_{~,k}=\stackrel{p}{T}\ \!\!\!^{ik}_{~,k}+ \stackrel{f}{T}\ \!\!\!^{ik}_{~,k}=0. \eeq The energy-momentum tensor (EMT) of the electromagnetic field, in the case of a coincidence of the plasma's and the field's dynamic velocities (\ref{ig-eq_vel}), is expressed through a pair of vectors, $v$ and $H$ \cite{ig-Ign95}: \beq \label{ig-T_f_H} \stackrel{f}{T}\ \!\!\!^{i}_k = -\frac{1}{8\pi}\left[(\delta^i_k-2v^i v_k)H^2+2H^i H_k\right]. \eeq The EMT of a relativistic anisotropic magnetoactive plasma in gravitational and magnetic fields is (see, e.g., \cite{ig-IgnGor97}) \beq \label{ig-T_p} \stackrel{p}{T}\ \!\!\!^{ij}=(\eps +p_\perp)v^iv^j-p_\perp g^{ij}+(p_\parallel -p_\perp)h^ih^j, \eeq where $h^i = H^i/H$ is the spacelike unit vector of the magnetic field ($(h,h)=-1$); $p_\perp$ and $p_\parallel$ are the plasma pressures in the directions orthogonal and parallel to the magnetic field, respectively.
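As a brief orientation (an added remark, not part of the original derivation): in the isotropic limit the anisotropy term of (\ref{ig-T_p}) drops out and the familiar perfect-fluid EMT is recovered.

```latex
% Isotropic limit of the anisotropic EMT (\ref{ig-T_p}):
% for p_\parallel = p_\perp = p the h^i h^j term vanishes, and
\stackrel{p}{T}\ \!\!\!^{ij}\Big|_{p_\parallel = p_\perp = p}
  = (\eps + p)\,v^i v^j - p\,g^{ij},
% i.e., the standard perfect-fluid energy-momentum tensor.
```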
\section{Solving the RMHD equations in the PGW metric} Consider a solution of the Cauchy problem of the self-consistent RMHD equations in the background of a vacuum gravitational-wave metric (see, e.g., \cite{ig-torn})\footnote {$\beta(u)$ and $\gamma(u)$ are the amplitudes of the polarizations $\mathbf{e}_+$ and $\mathbf{e}_\times$, respectively; $u =(t-x^1)/\sqrt{2}$ is the retarded time, $v=(t + x^1)/\sqrt{2}$ is the advanced time. The PGW amplitudes are arbitrary functions of the retarded time $u$, and $L(u)$ is the background factor of the PGW.}: \bearr d s^{2} = 2 du dv - L^{2} \biggl[\cosh 2\gamma\biggl(e^{2\beta} (dx^{2})^{2} \nnn \cm \label{ig-01} + e^{-2\beta}( dx^{3})^{2}\biggr) -2\sinh 2\gamma\, dx^2 dx^3\biggr], \ear with homogeneous initial conditions on the null hypersurface $u=0$: \bearr \label{ig-03} \beta(u \leq 0)=0; \quad \gamma(u \leq 0)=0; \nnn \beta'(u\leq 0)=0;\quad \gamma'(u\leq 0)=0;\quad L(u \leq 0)=1. \ear We assume the following: \begin{itemize} \item the plasma is homogeneous and at rest: \bearr v^v(u\leq 0)= v^u(u\leq 0) = 1/\sqrt{2}; \nnn v^{2} =v^{3}=0; \qquad \eps(u \leq 0)=\stackrel{0}{\eps}; \nnn \label{ig-04a} p_\parallel(u \leq 0) = \stackrel{0}{p}_\parallel; \qquad p_\perp(u \leq 0) = \stackrel{0}{p}_\perp; \ear \item a homogeneous magnetic field is directed in the $(x^1,x^2)$ plane: \bearr H_1(u \leq 0)=\stackrel{0}{H} \cos\Omega\,; \nnn H_2(u \leq 0)=\stackrel{0}{H} \sin\Omega\,; \nnn \label{ig-05} H_3(u \leq 0) = 0, \qquad E_i(u \leq 0) = 0, \ear where $\Omega$ is the angle between the axis $0x^{1}$ (the PGW propagation direction) and the magnetic field ${\bf H}$. \end{itemize} The metric (\ref{ig-01}) admits the group of isometries $G_{5}$, associated with three linearly independent (at a point) Killing vectors \beq \label{ig-02} \mathop{\xi^{i}}\limits_{(1)} =\delta^{i}_{v}\,; \qquad \mathop{\xi^{i}}\limits_{(2)} = \delta^{i}_{2}\,; \qquad \mathop{\xi^{i}}\limits_{(3)} = \delta^{i}_{3}\,.
\eeq Owing to their existence, all geometric objects in the metric (\ref{ig-01}), including the Christoffel symbols, the Riemann tensor, the Ricci tensor and, consequently, the EMT of the magnetoactive plasma, are automatically conserved under motion along the Killing directions: \beq \label{ig-symmetric} \mathop{\mathrm{L}}\limits_{\xi_\alpha}g_{ij}=0\ \Rightarrow\ \mathop{\mathrm{L}}\limits_{\xi_\alpha}R_{ij}=0\ \Rightarrow\ \mathop{\mathrm{L}}\limits_{\xi_\alpha}T_{ij}=0, \eeq where $\mathop{\mathrm{L}}\limits_{\xi}T_{ij}$ is a Lie derivative in the direction of $\xi$. We further require that the EMTs of the plasma $\stackrel{p}{T}_{ij}$ and the electromagnetic field $\stackrel{f}{T}_{ij}$ inherit the symmetry separately. Thus all observed physical quantities $\mathbf{P}$ {\em inherit the symmetry of the metric} (\ref{ig-01}): \beq \label{ig-07} \mathop{\mathrm{L}}\limits_{\xi_\alpha} {\bf P} =0 \cm (\alpha =\overline{1,3}), \eeq i.e., taking into account the explicit form of the Killing vectors (\ref{ig-02}), \bearr \label{ig-08} p=p(u), \quad \eps=\eps(u), \quad v^{i}=v^{i}(u); \yyy \label{ig-09} F_{ik}=F_{ik}(u), \quad H_i=H_i(u), \quad h_i=h_i(u). \ear The vector potential agreeing with the initial conditions (\ref{ig-05}) is \bearr A_{v} = A_{u} = A_{2} = 0; \nnn \label{ig-06} A_3 = \stackrel{0}{H} (x^1 \sin\Omega - x^2\cos\Omega); \qquad (u\leq 0). \ear In the presence of a PGW, the vector potential becomes \bearr A_2=A_v=A_u=0; \nnn \label{ig-Ai} A_3=\stackrel{0}{H}\left(\frac{1}{\sqrt{2}}(v-\psi(u)) \sin\Omega-x^2\cos\Omega \right), \ear where $\psi(u)$ is an arbitrary function of the retarded time, satisfying the initial condition \beq \label{ig-phi0} \psi (u\leq 0) = u. \eeq Thus the magnetic field freezing-in condition in the plasma reduces to the two equalities \bearr \label{ig-vi} v^3 = 0, \nnn \frac{1}{\sqrt{2}}(v_v\psi'-v_u)\sin\Omega+v^2\cos\Omega=0.
\ear The covariant components of the magnetic field intensity vector associated with the Maxwell tensor are \bearr \label{ig-Hv} H_v=-\frac{\stackrel{0}{H}}{L^2} \left(v_v \cos\Omega+\frac{1}{\sqrt{2}} v^2 \sin\Omega \right) \\ \lal \label{ig-Hu} H_u = \frac{\stackrel{0}{H}}{L^2} \left( v_u \cos\Omega- \frac{1}{\sqrt{2}}v^2 \psi' \sin\Omega \right), \\ \lal \label{ig-H2} H_2 = -\frac{1}{\sqrt{2}} \stackrel{0}{H} \cosh2\gamma e^{2\beta} \sin\Omega (v_v \psi'+ v_u ), \\ \lal \label{ig-H3} H_3 = \frac{1}{\sqrt{2}} \stackrel{0}{H} \sinh2\gamma \sin\Omega (v_v \psi'+ v_u ). \ear The magnetic field intensity squared is \beq \label{ig-33} H^2 = \frac{\stackrel{0}{H}\ \!\!\!^{2}}{L^4} (L^2 \psi' \cosh2\gamma e^{2\beta} \sin^2\Omega + \cos^2\Omega )\,. \eeq Using (\ref{ig-Hv})--(\ref{ig-33}), the normalization relation for the velocity vector can be written in the equivalent form \bearr \label{ig-34} \left[ v_v \cos\Omega + v_2 \frac{1}{\sqrt{2}} \sin\Omega \right]^2 \nnn \cm = \frac{H^2}{\stackrel{0}{H}\ \!\!\!^{2}} v^2_v L^4 - \frac{\sin^2 \Omega}{2} L^2 \cosh2\gamma e^{2\beta}\,. \ear The components of the drift current are \beq \label{ig-curr} J^i_{\rm{dr}} = -\frac{1}{4\pi L^2}\d_u (L^2 F^{iu}). \eeq Then, \bearr \label{ig-J^v} J^v_{\rm{dr}} = J^u_{\rm{dr}} = 0, \yyy \label{ig-J^2} J^2_{\rm{dr}} = -\frac{\stackrel{0}{H}\sin\Omega}{2\sqrt{2}\pi L^2} \cosh2\gamma\cdot\gamma' , \yyy \nq \label{ig-J^3} J^3_{\rm{dr}} = -\frac{\stackrel{0}{H}\sin\Omega e^{2\beta}} {2\sqrt{2}\pi L^2}(\sinh2\gamma\cdot\gamma'+\cosh2\gamma\cdot\beta'). \ear Because of the existence of the isometries (\ref{ig-02}), we obtain the following integrals \cite{ig-Ign95}: \beq \label{ig-35} L^2 \mathop{\xi}\limits_{(\alpha)}{}^i T_{v i} = C_\alpha = \const \qquad (\alpha = \overline{1,3})\,. \eeq We consider only the case of {\it transverse PGW propagation\/} ($\Omega=\pi/2$).
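A quick consistency check (an added illustration, not in the original text): on the initial hypersurface $u\leq 0$ one has $L=1$, $\beta=\gamma=0$ and, by (\ref{ig-phi0}), $\psi'=1$, so (\ref{ig-33}) reduces to the unperturbed field strength.

```latex
% Evaluating (\ref{ig-33}) at u <= 0, where L = 1, \beta = \gamma = 0, \psi' = 1:
H^2\big|_{u \leq 0}
  = \stackrel{0}{H}\ \!\!\!^{2}\left(\sin^2\Omega + \cos^2\Omega\right)
  = \stackrel{0}{H}\ \!\!\!^{2},
% in agreement with the initial magnetic field (\ref{ig-05}).
```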
Then, substituting the expressions for the plasma and electromagnetic field EMT into the integrals (\ref{ig-35}), using the relations (\ref{ig-H3})--(\ref{ig-34}) and also the initial conditions (\ref{ig-03}), we bring the integrals of motion to the form \bearr \label{ig-C1} 2 L^2 (\eps + p_\parallel) v_v^2 - (p_\parallel - p_\perp) \frac{\stackrel{0}{H}\ \!\!\!^{2}}{H^2} \cosh2\gamma e^{2\beta} \nnn \inch = (\stackrel{0}{\eps} + \stackrel{0}{p}) \Delta(u), \yyy \label{ig-C2} L^2 (\eps + p_\parallel) v_v v_2 = 0, \yyy \label{ig-C3} L^2 (\eps + p_\parallel) v_v v_3 = 0, \ear where \beq \label{ig-38} \stackrel{0}{p} = \stackrel{0}{p}_\perp, \eeq and the so-called {\it governing function of the GMSW} is introduced: \beq \label{ig-40} \Delta(u) = 1 - \alpha^2 (\cosh2\gamma e^{2\beta} - 1)\,, \eeq with the {\it dimensionless parameter} $\alpha^2$, \beq \label{ig-alpha} \alpha^2 = \frac{\stackrel{0}{H}\ \!\!\!^{2}}{4\pi (\stackrel{0}{\eps} + \stackrel{0}{p})}\,. \eeq Solving (\ref{ig-C1}) with respect to $v_v$, we obtain expressions for the components of the velocity vector as functions of the scalars $\eps$, $p_\parallel$, $p_\perp$, $\psi'$ and explicit functions of the retarded time: \bearr \label{ig-Vv} v_v^2 =\frac{(\stackrel{0}{\eps}+\stackrel{0}{p})} {2L^2(\eps + p_\parallel)} \Delta(u) \inch \nnn \cm + \frac{(p_\parallel - p_\perp)}{(\eps + p_\parallel)} \frac{\stackrel{0}{H}\ \!\!\!^{2}}{H^2} \frac{\cosh2\gamma e^{2\beta}}{2L^2}. \ear From (\ref{ig-C2}), (\ref{ig-C3}) we get: \beq \label{ig-V2} v_2 = v_3=0 \,. \eeq We obtain the component $v_u$ from the normalization relation for the velocity vector, using (\ref{ig-Vv}) and (\ref{ig-V2}): \beq \label{ig-Vu} v_u = \frac{1}{2 v_v}\,, \eeq and from the freezing-in condition (\ref{ig-vi}) we get the value of the derivative $\psi'$ of the potential: \beq \label{ig-psi} \psi' = \frac{1}{2 v_v^2}.
\eeq Using it, the scalar $H^2$ is determined from the relation (\ref{ig-33}): \beq \label{ig-H^2(perp)} H^2 = \frac{\stackrel{0}{H}\ \!\!\!^{2}}{L^2} \frac{\cosh2\gamma e^{2\beta}}{2 v_v^2}. \eeq From the RMHD set of equations it is possible to obtain the following differential equation in the PGW metric: \bearr \label{ig-47} L^2 \eps' v_v + (\eps + p_\parallel)(L^2 v_v)' \cm\cm \nnn \cm + \frac{1}{2}L^2 (p_\parallel - p_\perp) v_v (\ln H^2)' = 0\,. \ear To solve this equation, it is necessary to impose two additional relations between the functions $\eps$, $p_\parallel$, and $p_\perp$, i.e., an equation of state: \beq \label{ig-48} p_\parallel = f(\eps)\,; \quad p_\perp = g(\eps)\,. \eeq \section{Barotropic equation of state} Consider a barotropic equation of state of the anisotropic plasma, where the relations (\ref{ig-48}) are linear: \beq \label{ig-49} p_\parallel = k_\parallel \eps \,; \quad p_\perp = k_\perp \eps\,. \eeq Equation (\ref{ig-47}) is easily integrated under the conditions (\ref{ig-49}), and we get one more integral: \beq \label{ig-50} \eps (\sqrt{2} L^2 v_v)^{(1 + k_\parallel)} H^{(k_\parallel - k_\perp)} = \stackrel{0}{\eps} \stackrel{0}{H}\ \!\!\!^{(k_\parallel - k_\perp)}\,. \eeq In the case of a barotropic equation of state under the conditions (\ref{ig-49}), substitution of (\ref{ig-H^2(perp)}) into (\ref{ig-Vv}) results in \beq \label{ig-54} v^2_v = \frac{1}{2}\frac{\stackrel{0}{\eps}}{L^2 \eps}\Delta (u)\,. 
\eeq Substituting (\ref{ig-H^2(perp)}) and (\ref{ig-54}) into (\ref{ig-50}), we obtain a closed equation with respect to the variable $\eps$, whose solution gives: \bearr \label{ig-bar_E} \eps = \stackrel{0}{\eps} \Big[ \Delta^{1+k_\perp} L^{2(1+k_\parallel)} (\cosh2\gamma e^{2\beta})^{k_\parallel-k_\perp} \Big]^{-g_\perp}, \yyy \label{ig-bar_Vv} v_v = \frac{1}{\sqrt{2}} \left[ \Delta L^{(k_\parallel+k_\perp)} (\cosh 2\gamma e^{2\beta})^{\frac{k_\parallel-k_\perp}{2}} \right]^{g_\perp}, \yyy \label{ig-bar_H} \displaystyle H = \stackrel{0}{H} \left[ \Delta L^{(1+k_\parallel)} (\cosh 2\gamma e^{2\beta})^{-\frac{1-k_\parallel}{2}} \right]^{-g_\perp}\,, \ear where \beq \label{ig-58} g_\perp = \frac{1}{1 - k_\perp} \in [1, 2]\,. \eeq In particular, for an ultrarelativistic plasma with zero parallel pressure, \beq \label{ig-59} k_\parallel \to 0\,; \quad k_\perp \to \frac{1}{2} \eeq we obtain from (\ref{ig-bar_E})--(\ref{ig-58}): \bearr \label{ig-60} v_v = \frac{1}{\sqrt{2}} L \Delta^2 (\cosh2\gamma e^{2\beta})^{-1/2}, \\ \lal \label{ig-61} \eps = \stackrel{0}{\eps} L^{-4} \Delta^{-3} (\cosh2\gamma e^{2\beta}), \yyy \label{ig-62} H = \stackrel{0}{H} L^{-2} \Delta^{-2} (\cosh2\gamma e^{2\beta})\,. \ear \section{The energy balance equation} In \cite{ig-Ign95}, it has been shown that the singular state, which exists in a magnetized plasma under the condition $2 \beta_0 \alpha^2 > 1$ on the hypersurface \beq \label{ig-63} \Delta(u_*) = 0\,, \eeq is removed by taking into account the back reaction of the magnetoactive plasma on the GW. This leads to efficient absorption of GW energy by the plasma and a restriction on the GW amplitude. A qualitative analysis of this situation can be carried out using a simple model of energy balance proposed in \cite{ig-Ign96}. The energy flow of the magnetoactive plasma is directed along the PGW propagation direction, i.e., along the $x^1$ axis. Let $\beta_*(u)$ and $\gamma_*(u)$ be the vacuum PGW amplitudes.
In the WKB approximation, \beq \label{ig-WKB} 8\pi\eps \ll \omega^2\,, \eeq where $\omega$ is the characteristic PGW frequency and $\eps$ is the matter energy density, all functions still depend on the retarded time only (see \cite{ig-IgnBal81}). Thus $\beta(u)$ and $\gamma(u)$ are the PGW amplitudes subject to absorption in the plasma. The local energy conservation law should be satisfied: \beq \label{ig-eq_T_41} T^{41}(\beta,\gamma) + \stackrel{g}{T}{}^{41}(\beta,\gamma) = \stackrel{g}{T}{}^{41}(\beta_*,\gamma_*)\, , \eeq where $\stackrel{g}{T}{}^{41}(\beta,\gamma)$ is the energy flow of a weak GW in the direction $0x^{1}$ (see \cite{ig-land}). In the case of transverse PGW propagation and with a barotropic equation of state of an anisotropic plasma, using the solutions of magnetohydrodynamics and \eqs (\ref{ig-bar_E}), (\ref{ig-bar_Vv}), (\ref{ig-bar_H}) with the dimensionless parameter $\alpha^2$ (\ref{ig-alpha}), one can obtain the energy balance equation in the form \bearr \label{ig-eq_T_temp2} \frac{\stackrel{0}{H}\ \!\!\!^{2}}{4}\left( \Delta^{-4 g_\perp} - 1\right) \left(\frac{1}{\alpha^2} + 1\right) \nnn \cm + (\gamma')^2 + (\beta')^2 = (\gamma'_*)^2 + (\beta'_*)^2. \ear Since, in the linear approximation in the small amplitudes $\beta$ and $\gamma$, the governing function (\ref{ig-40}) does not depend on the function $\gamma(u)$, \beq \label{ig-65} \Delta(u) = 1 - 2 \alpha^2 \beta +O(\beta^2,\gamma^2)\,, \eeq and the functions $\beta(u)$, $\gamma(u)$ are arbitrary and functionally independent, then, up to $\beta^2, \gamma^2$, the relation (\ref{ig-eq_T_temp2}) can be split into two independent parts: \bearr \label{ig-b} 2\stackrel{0}{H}\ \!\!\!^{2} g_\perp (1+\alpha^2)\beta+(\beta')^2 =(\beta'_*)^2, \yyy (\gamma')^2=(\gamma'_*) ^2.
\label{ig-g} \ear Here, according to the meaning of the local energy balance equation, we consider short gravitational waves (\ref{ig-WKB}), so we can neglect the squares of the PGW amplitudes as compared with the squares of their derivatives with respect to the retarded time. Thus, according to (\ref{ig-g}), \beq \label{ig-66} \gamma_*(u) = \gamma(u), \eeq i.e., in the linear approximation, a weak gravitational wave with the polarization ${\bf e}_{\times}$ does not interact with a magnetized plasma. This coincides with the conclusion of the paper \cite{ig-IgnKhu86}. Thus the energy balance equation takes the form obtained in \cite{ig-IgnGor97}: \beq \label{ig-77} \dot{\Delta}^2 + \xi^2 \Upsilon^2 \Bigl[\Delta^{- 4g_\perp} - 1 \Bigr] = \Upsilon^2 \sin^2(s), \eeq where $\xi^2$ is the so-called {\it first parameter of the GMSW} \cite{ig-Ign96}: \bearr \label{ig-71} \xi^2 = \frac{\stackrel{0}{H}\ \!\!\!^{2}}{4 \beta^2_0 \omega^2}, \yyy \label{ig-72} \Upsilon = 2\alpha^2\beta_0 \ear --- {\it the second GMSW parameter}. The dot denotes differentiation with respect to the dimensionless time variable $s$, \beq \label{ig-69} s = \sqrt{2} \omega u. \eeq \section{Conclusion} Thus we have obtained a generalization of the results of \cite{ig-Ign95}--\cite{ig-IgnGor97} to gravitational waves with two polarizations and have shown that, in the linear approximation, the polarization $\mathbf{e}_\times$ does not interact with a magnetized plasma. This justifies the applicability of the previously obtained results to arbitrarily polarized gravitational waves.
\section{Introduction} Low rank models have been applied to numerous vision applications ranging from high level shape and deformation to pixel appearance models \cite{tomasi-kanade-ijcv-1992,bregler-etal-cvpr-2000,yan-pollefeys-pami-2008,garg-etal-cvpr-2013,basri-etal-ijcv-2007,garg-etal-ijcv-2013,wang-etal-2012,canyi-etal-cvpr-2014}. When the sought rank is known, a commonly occurring formulation is the least squares minimization \begin{equation} \min_{{\mathrm{ rank}}(X)\leq r} \|\mathcal{A} X-b\|^2, \label{eq:fixedrankA} \end{equation} where $\mathcal{A}:\mathbb{R}^{m \times n} \rightarrow \mathbb{R}^p$ is a linear operator, and $\|\cdot\|$ is the standard Euclidean vector norm. In general, this is a difficult non-convex problem and some versions are even known to be NP-hard \cite{gillis-glineur-siam-2011}. In structure from motion, a popular approach \cite{buchanan-fitzgibbon-cvpr-2005} is to optimize over a bilinear factorization $X=BC^T$, where $B$ is $m \times r$ and $C$ is $n\times r$, and solve \begin{equation} \min_{B,\,C} \|\mathcal{A} BC^T-b\|^2. \label{eq:knownrankbilin} \end{equation} Since the rank is bounded by the number of columns in $B$ and $C$, this approach explicitly parametrizes the set of matrices of rank at most $r$. While bilinear approaches often perform well \cite{hong-fitzgibbon-cvpr-2015,eriksson-hengel-pami-2012}, they can have local minima~\cite{buchanan-fitzgibbon-cvpr-2005}. Recent works \cite{hong-fitzgibbon-cvpr-2015,hong-etal-eccv-2016,hong-etal-cvpr-2017,hong-zach-cvpr-2018} have, however, shown that properly implemented Levenberg-Marquardt (LM) and variable projection (VarPro) approaches are remarkably robust to local minima, achieve quadratic convergence and give impressive reconstruction results. Recently, \cite{ge-etal-nips-2016,bhohanapali-nips-2016,ge-etal-arxiv-2017} were able to give conditions which guarantee that there are no ``spurious'' local minimizers (meaning that all local minimizers are close to or identical to the global solution).
They use the notion of the restricted isometry property (RIP) \cite{recht-etal-siam-2010}, which assumes that the operator $\mathcal{A}$ fulfills \begin{equation} (1-\delta_r)\|X\|_F^2 \leq \|\mathcal{A} X\|^2 \leq (1+\delta_r)\|X\|_F^2, \label{eq:RIP} \end{equation} with $0\leq\delta_r<1$, if ${\mathrm{ rank}}(X) \leq r$. If the isometry constant $\delta_r$ is sufficiently small, \cite{ge-etal-nips-2016,ge-etal-arxiv-2017,bhohanapali-nips-2016} prove that every local minimizer is optimal (or near optimal). Similarly, for the matrix completion problem, \cite{ge-etal-arxiv-2017} showed that there are no spurious local minima under uniformly distributed missing data. While the above theoretical assumptions generally do not hold for computer vision problems such as structure from motion, these results still give some intuition as to why bilinear parameterization often works well. An alternative approach is to optimize directly over the entries of $X$ and enforce low rank using regularization terms. Applying a robust function $f$ to the singular values $\sigma_i(X)$, $i = 1,\ldots,N=\min(m,n)$, results in a low-rank inducing objective \begin{equation} \min_X \mathcal{R}(X)+\|\mathcal{A} X-b\|^2, \label{eq:generalregformulation} \end{equation} where $ \mathcal{R}(X) = \sum_{i=1}^{N} f(\sigma_i(X)). $ Besides controlling the rank of the solution, the generality of the function $f$ offers increased modeling capability compared to \eqref{eq:fixedrankA} and can for example be used to add priors on the size of the non-zero singular values. The most popular regularization approach is undoubtedly the nuclear norm, $f(\sigma_i(X))=\sigma_i(X)$, due to its convexity \cite{fazel-etal-acc-2015,recht-etal-siam-2010,oymak2011simplified,candes-etal-acm-2011,candes2009exact}. Under the RIP assumption, exact or approximate recovery with the nuclear norm can then be guaranteed \cite{recht-etal-siam-2010,candes2009exact}.
On the other hand, since it penalizes large singular values, it suffers from a shrinking bias \cite{cabral-etal-iccv-2013,canyi-etal-cvpr-2014,larsson-olsson-ijcv-2016}. Ideally, $f$ should penalize small singular values (assumed to stem from measurement noise) harder than the large ones. Therefore, penalties with non-increasing derivatives on $[0,\infty)$, i.e., concave penalties, have been shown to give stronger relaxations \cite{oymak-etal-2015,mohan2010iterative,hu-etal-pami-2013,oh-etal-pami-2016,canyi2015,toh-yun-2010,gu-2016}. These non-convex formulations usually only come with local convergence guarantees. Two exceptions are \cite{larsson-olsson-ijcv-2016,olsson-etal-iccv-2017}, which gave optimality guarantees for \eqref{eq:generalregformulation} with $f=f_\mu$ as in \eqref{eq:fmu}. The regularization term is generally not differentiable as a function of $X$. Thus, optimization methods based on local quadratic approximation become infeasible. Figure~\ref{fig:objfuns} gives a simple illustration on a 1-dimensional example of how non-differentiability occurs at the origin. In addition, it is well known that the singular values become non-differentiable functions of the matrix elements when they are not distinct. To circumvent these issues, subgradient and splitting methods are often employed \cite{canyi2015,toh-yun-2010,gu-2016,nie-2012,larsson-olsson-ijcv-2016}. It is well known from basic optimization theory (\eg{}~\cite{boyd-vandenberghe-2004}) that gradient-based methods exhibit slow convergence for ill-conditioned problems. It has also been observed (\eg{}~\cite{boyd-etal-2011}) that splitting methods rapidly reduce the objective value during the first couple of iterations, while convergence to the exact solution can be slow. In this paper we show that there are computer vision problems where these approaches make very little improvement at all, returning a solution that is far from optimal.
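To make the shrinking bias concrete, the following self-contained numpy sketch (an illustration of ours, not taken from the cited papers; all names are hypothetical) applies the proximal operator of $\mu\|\cdot\|_*$, i.e.\ singular value soft-thresholding, and checks that every surviving singular value is reduced by exactly $\mu$:

```python
import numpy as np

def svt(Y, mu):
    """Singular value thresholding: proximal operator of mu * ||.||_*.

    Returns argmin_X  mu*||X||_* + 0.5*||X - Y||_F^2, computed by
    soft-thresholding the singular values of Y.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - mu, 0.0)) @ Vt

rng = np.random.default_rng(0)
# Rank-2 "ground truth" whose singular values are well above the threshold.
X0 = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
mu = 1.0
X = svt(X0, mu)

s0 = np.linalg.svd(X0, compute_uv=False)
s = np.linalg.svd(X, compute_uv=False)
# Shrinking bias: every singular value that survives thresholding is
# reduced by exactly mu, so even the large (signal) ones are biased.
assert np.allclose(s[s > 1e-9], s0[s0 > mu] - mu)
```

This constant offset on the large singular values is precisely what concave penalties with vanishing derivative at infinity are designed to avoid.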
In contrast, bilinear formulations with either LM or VarPro can be made to yield accurate results in a few iterations \cite{hong-fitzgibbon-cvpr-2015}. An alternative approach that unifies bilinear parameterization with regularization approaches is based on the observation \cite{recht-etal-siam-2010} that the nuclear norm $\|X\|_*$ of a matrix $X$ can be expressed as $ \|X\|_* = \min_{BC^T = X} \frac{\|B\|_F^2+\|C\|_F^2}{2}. $ Thus, when $f(\sigma_i(X)) = \mu \sigma_i(X)$, where $\mu$ is a scalar controlling the strength of the regularization, optimization of \eqref{eq:generalregformulation} can be formulated as \begin{equation} \min_{B,C} \mu \frac{\|B\|_F^2+\|C\|_F^2}{2}+ \|\mathcal{A} BC^T - b\|^2. \label{eq:nuclearbilin} \end{equation} Optimizing directly over the factors has the advantage that the number of variables is much smaller, and one may add constraints if a particular factorization is sought. Surprisingly, while \eqref{eq:nuclearbilin} is non-convex, using the convexity of the underlying regularization problem \eqref{eq:generalregformulation} it can be shown that any local minimizer $B$, $C$ with ${\mathrm{ rank}}(B C^T) < k$, where $k$ is the number of columns in $B$ and $C$, is globally optimal \cite{bach-arxiv-2013,haeffele-vidal-arxiv-2017}. Additionally, the objective function is twice differentiable and second-order methods can be employed. \begin{figure*} \centering \begin{tabular}{cccccc} SCAD \cite{fan2001variable}: & Log \cite{friedman-2012}: & MCP \cite{zhang2010nearly}: & ETP \cite{gao-etal-AAAI-2011}: & Geman \cite{geman-yang-1995}: \\ \includegraphics[width=27mm]{SCAD} & \includegraphics[width=27mm]{log} & \includegraphics[width=27mm]{MCP} & \includegraphics[width=27mm]{ETP} & \includegraphics[width=27mm]{Geman} \end{tabular} \caption{A few commonly occurring robust penalties of the form $f(\sigma)$, with $\sigma \in [0,\infty)$ and $f$ differentiable everywhere (blue graph).
The green dashed graph shows how non-differentiability occurs at the origin when applying the penalty to a $1 \times 1$ matrix $x\in \mathbb{R}$. In this case $\sigma(x)=|x|$ and therefore $f(\sigma(x)) = f(|x|)$. Note also that \eqref{eq:fmu} is a special case of MCP.} \label{fig:objfuns} \end{figure*} In this paper we develop new regularizing terms that, similar to \eqref{eq:nuclearbilin}, work on the bilinear factors. However, in contrast to previous approaches we investigate formulations that exhibit less shrinking bias and go beyond convex penalties. Specifically, we prove that $\mathcal{R}(X) = \min_{X=BC^T} \tilde{\mathcal{R}}(B,C)$, where \begin{equation} \tilde{\mathcal{R}}(B,C) = \sum_{i=1}^{k} f\left(\frac{\|B_i\|^2+\|C_i\|^2}{2}\right), \end{equation} $k$ is the number of columns, and $B_i$ and $C_i$ are the $i$:th columns of $B$ and $C$, respectively. The result holds for a general class of concave penalty functions $f$, a few of which are illustrated in Figure~\ref{fig:objfuns}. In view of the above result, we propose to minimize \begin{equation} \tilde{\mathcal{R}}(B,C)+\|\mathcal{A} B C^T - b\|^2. \label{eq:generalbilin} \end{equation} Rather than resorting to splitting or subgradient methods we present an algorithm that uses a quadratic approximation of the objective. Under the assumption that $f$ is differentiable, we show that our quadratic approximation reduces to a weighted version of \eqref{eq:nuclearbilin} to which we can apply VarPro. We show on several computer vision problems that our approach outperforms state-of-the-art methods such as \cite{shang-etal-2018,canyi2015,toh-yun-2010,gu-2016,boyd-etal-2011}. While our problem is non-convex (both in the $X$ parameterization \eqref{eq:generalregformulation} and in the $B$, $C$ parameterization \eqref{eq:generalbilin}) we show that in some cases it is still possible to give global optimality guarantees. 
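As a numerical illustration of the identity $\mathcal{R}(X)=\min_{X=BC^T}\tilde{\mathcal{R}}(B,C)$ (our own sketch, not from the paper; $f(x)=\log(1+x)$ is used as one admissible concave, non-decreasing choice with $f(0)=0$), the balanced factorization $B_i=\sqrt{\sigma_i}U_i$, $C_i=\sqrt{\sigma_i}V_i$ attains equality even when $B$ and $C$ are over-parameterized:

```python
import numpy as np

def f(x):
    # One admissible penalty: concave, non-decreasing on [0, inf), f(0) = 0.
    return np.log1p(x)

def R(X):
    # Regularizer on the singular values: sum_i f(sigma_i(X)).
    return f(np.linalg.svd(X, compute_uv=False)).sum()

def R_tilde(B, C):
    # Factor-side regularizer: sum_i f((||B_i||^2 + ||C_i||^2) / 2).
    return f(0.5 * ((B ** 2).sum(axis=0) + (C ** 2).sum(axis=0))).sum()

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 6))  # rank 3
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 5  # over-parameterized: more columns than the rank of X
B = U[:, :k] * np.sqrt(s[:k])   # B_i = sqrt(sigma_i) U_i
C = Vt[:k].T * np.sqrt(s[:k])   # C_i = sqrt(sigma_i) V_i
assert np.allclose(B @ C.T, X)          # valid factorization of X
assert np.isclose(R_tilde(B, C), R(X))  # factor penalty matches R(X)
```

Note that any unbalanced factorization of the same $X$ can only increase $\tilde{\mathcal{R}}(B,C)$, which is exactly the minimization statement above.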
Building on the results of \cite{olsson-etal-iccv-2017} we characterize the local minima of the new formulation with the choice \begin{equation} f(x) = f_\mu(x) := \mu - \max(\sqrt{\mu}-x,0)^2. \label{eq:fmu} \end{equation} Specifically, for this choice, we give conditions that ensure that when a RIP constraint~\cite{recht-etal-siam-2010} holds, a local minimizer of \eqref{eq:generalbilin} is a global solution of both \begin{equation} \min_{{\mathrm{ rank}}(X)\leq r} \mathcal{R}(X) + \|\mathcal{A} X - b\|^2, \label{eq:regminprobl} \end{equation} where $\mathcal{R}(X) = \sum_i f_\mu(\sigma_i(X))$, and \begin{equation} \min_{{\mathrm{ rank}}(X)\leq r} \mu {\mathrm{ rank}}(X) + \|\mathcal{A} X - b\|^2. \label{eq:rankminprobl} \end{equation} In summary, our main contributions are: \begin{itemize} \item A new stronger non-convex regularization term for bilinear parameterizations with less/no shrinking bias. \item A new iteratively reweighted VarPro algorithm optimizing accurate quadratic approximations. \item Theoretical conditions that guarantee optimal recovery under the RIP constraint. \item An experimental evaluation that shows that our method outperforms state-of-the-art methods on several real computer vision problems. \end{itemize} \subsection{Related Work} Our work is very much inspired by a recent series of papers by Hong \etal \cite{hong-fitzgibbon-cvpr-2015,hong-etal-eccv-2016,hong-etal-cvpr-2017,hong-zach-cvpr-2018} which show that bilinear formulations can be made remarkably robust to local minima, and achieve impressive reconstruction results for uncalibrated structure from motion problems, using the so-called VarPro method. Our work represents an attempt to unify this line of work with regularization based alternatives, leveraging the benefits of them both. \iffalse These works have studied a number of optimization methods that use bilinear formulations.
The method of choice is the so called VarPro approach where one of the factors $B$ and $C$ is eliminated through marginalization and iterations are performed over the other. An explanation as to why VarPro does not exhibit the ``stalling'' typically observed in alternating approaches is given in~\cite{hong-etal-cvpr-2017}. It is also shown how to modify LM to achieve the same performance. In~\cite{hong-zach-cvpr-2018} VarPro was applied to a formulation which allows projection models with non-parallel viewing rays of the form \eqref{eq:fixedrankA}. High quality reconstruction and robustness to local minima was again illustrated starting from random initialization. Our work represents an attempt to unify this line of work with regularization based alternatives, leveraging the benefits of them both, namely efficient optimization and theoretical optimality guarantees. \fi An approach that is closely related to ours is that of \cite{cabral-etal-iccv-2013} which uses \eqref{eq:nuclearbilin} to unify the use of a regularized objective and factorization. They show that if the obtained solution has lower rank than its number of columns it is globally optimal. In practice \cite{cabral-etal-iccv-2013} observes that the shrinking bias of the nuclear norm makes it too weak to enforce a low rank when the data is noisy. Therefore, a ``continuation'' approach where the size of the factorization is gradually reduced is proposed. While this yields solutions with lower rank, the optimality guarantees no longer apply. Bach \etal \cite{bach-arxiv-2013} showed that \begin{equation} \|X\|_{s,t}:=\min_{X=BC^T} \sum_{i=1}^k\frac{\|B_i\|_s^2 + \|C_i\|_t^2}{2}, \label{eq:decomposition-norm} \end{equation} is convex for any choice of vector norms $\|\cdot\|_s$ and \mbox{$\|\cdot\|_t$}. In \cite{haeffele-vidal-arxiv-2017} it was shown that a more general class of 2-homogeneous factor penalties result in a convex regularization similar to \eqref{eq:decomposition-norm}. 
The guarantee that a local minimizer $B$, $C$ with ${\mathrm{ rank}}(B C^T) < k$ is globally optimal also extends to this case. Still, because of convexity, it is clear that these formulations will suffer from a similar shrinking bias as the nuclear norm. Shang \etal \cite{shang-etal-2018} showed that penalization with the Schatten semi-norms $\|X\|_q = \sqrt[q]{\sum_{i=1}^N \sigma_i(X)^q}$, for $q=1/2$ and $2/3$, can be achieved using a convex penalty on the factors $B$ and $C$. A generalization to general values of $q$ is given in \cite{xu-etal-AAAI-2017}. While this reduces shrinking bias to some extent, it results in a non-differentiable and non-convex formulation that is optimized with ADMM. It is important to note that many of the above methods that are considered state-of-the-art have been developed for low-level vision tasks such as image denoising, inpainting, alignment and background subtraction. The ground truth for these models is often of higher rank than models in~\eg{}~structure from motion, making it possible to obtain good results with weaker regularization. Additionally, as we will see in the experiments, more difficult data terms prevent rapid convergence of the splitting methods they often employ. \section{Non-Convex Penalties and Shrinking Bias} In this section we will show how to formulate regularization terms of the type \begin{equation} \mathcal{R} (X) = \sum_{i=1}^N f(\sigma_i(X)), \label{eq:regdef} \end{equation} by penalizing the factors of the factorization~$X=BC^T$. We assume that $B$ and $C$ have $k$ columns, making $\sigma_i(X)=0$ if $i>k$ and ${\mathrm{ rank}}(X)\leq k$. Note, however, that we are aiming to achieve a lower rank using the regularization term. In many applications, the sought rank is unknown and should be determined by the regularization. We therefore set $k$ large enough not to exclude the optimal solution. As we shall see in Section~\ref{sec:optloc}, this ability to over-parameterize can be used to ensure optimality.
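A small numerical illustration (ours, not part of the original text) of why the choice $f=f_\mu$ from \eqref{eq:fmu} avoids shrinking bias: $f_\mu$ is flat for $\sigma\geq\sqrt{\mu}$, so $\mathcal{R}(X)$ equals $\mu\,{\mathrm{rank}}(X)$ whenever all nonzero singular values exceed $\sqrt{\mu}$, i.e.\ large singular values pay a constant cost:

```python
import numpy as np

def f_mu(x, mu):
    # f_mu(x) = mu - max(sqrt(mu) - x, 0)^2: quadratic near zero, then
    # constant (= mu) for x >= sqrt(mu), so large singular values pay a
    # fixed cost instead of being shrunk.
    return mu - np.maximum(np.sqrt(mu) - x, 0.0) ** 2

def R(X, mu):
    # The regularizer applied to the singular values of X.
    return f_mu(np.linalg.svd(X, compute_uv=False), mu).sum()

mu = 1.0
X = np.zeros((6, 5))
X[0, 0], X[1, 1] = 5.0, 3.0  # rank 2; singular values 5 and 3 > sqrt(mu)

# R(X) coincides with mu * rank(X) when all nonzero singular values
# exceed sqrt(mu), and it is invariant to scaling the signal up ...
assert np.isclose(R(X, mu), mu * np.linalg.matrix_rank(X))
assert np.isclose(R(10 * X, mu), R(X, mu))
# ... whereas the nuclear norm grows with the signal magnitude.
assert np.isclose(np.linalg.svd(X, compute_uv=False).sum(), 8.0)
```

In this regime the relaxed objective and the rank objective \eqref{eq:rankminprobl} assign identical regularization costs, which is the intuition behind the optimality conditions derived next.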
\begin{theorem}\label{thm:main} If $f$ is concave, non-decreasing on $[0,\infty)$ and $f(0)=0$ then \begin{equation} \mathcal{R}(X) = \min_{BC^T = X} \sum_{i=1}^k f(\|B_i\| \|C_i\|), \label{eq:bilinear1} \end{equation} where $B_i$ and $C_i$, $i=1,\ldots,k$, are the columns of $B$ and $C$, respectively. \end{theorem} \begin{proof} The result is a consequence of the fact that $\mathcal{R}$ fulfills a triangle inequality $\mathcal{R}(X+Y)\leq \mathcal{R}(X)+\mathcal{R}(Y)$ under the assumptions on $f$. This is clear from Theorem~4.4 in~\cite{uchiyama-2005}, which shows that \begin{equation} \sum_{i=1}^N f(\sigma_i(X+Y)) \leq \sum_{i=1}^N f(\sigma_i(X))+\sum_{i=1}^N f(\sigma_i(Y)). \end{equation} Applying this to $X = BC^T = \sum_{i=1}^k B_i C_i^T$ we see that \begin{equation} \mathcal{R}(X) = \mathcal{R}(\sum_{i=1}^k B_i C_i^T) \leq \sum_{i=1}^k \mathcal{R}(B_i C_i^T). \end{equation} Since ${\mathrm{ rank}}(B_i C_i^T)=1$ we also have \begin{equation} \mathcal{R}(B_i C_i^T) = f(\sigma_1(B_i C_i^T)) = f(\|B_i C_i^T\|_F). \end{equation} Lastly, since $\|B_i C_i^T\|_F = \|B_i\| \|C_i\|$ we get \begin{equation} \mathcal{R}(X) \leq\sum_{i=1}^k f(\|B_i\| \|C_i\|). \end{equation} To see that equality can be achieved, let \mbox{$B_i = \sqrt{\sigma_i(X)}U_i$} and $C_i = \sqrt{\sigma_i(X)}V_i$, where $X = \sum_{i=1}^k \sigma_i(X) U_i V_i^T$ is the SVD of $X$. Then, $BC^T=X$ and $f(\|B_i\|\|C_i\|) = f(\sigma_i(X))$. \end{proof} While the above result allows optimization over the factors $B$ and $C$ we note that it yields an objective that is non-differentiable at $\|B_i\|\|C_i\| = 0$. Next we reformulate the objective to achieve a differentiable problem formulation. \begin{cor}\label{cor:main} Under the assumptions of Theorem~\ref{thm:main}, it follows that $\mathcal{R}(X) = \min_{X=BC^T} \tilde{\mathcal{R}}(B,C)$, where \begin{equation} \tilde{\mathcal{R}}(B,C) = \sum_{i=1}^k f\left(\frac{\|B_i\|^2+\|C_i\|^2}{2}\right).
\label{eq:Rtildedef} \end{equation} If $f$ is differentiable then $\tilde{\mathcal{R}}(B,C)$ is also differentiable. \end{cor} \begin{proof} By the inequality of arithmetic and geometric means \begin{equation} \|B_i\|\|C_i\| \leq \frac{1}{2} (\|B_i\|^2+\|C_i\|^2), \end{equation} with equality if $\|B_i\| = \|C_i\|$, which is achieved when \mbox{$B_i = \sqrt{\sigma_i(X)}U_i$} and $C_i = \sqrt{\sigma_i(X)}V_i$. Since $f$ is assumed to be non-decreasing, it follows from~\eqref{eq:bilinear1} that $\mathcal{R}(X) = \min_{X=BC^T} \tilde{\mathcal{R}}(B,C)$. The differentiability of $\tilde{\mathcal{R}}(B,C)$ is now trivially checked using the chain rule. \end{proof} \newcommand{\STAB}[1]{\begin{tabular}{@{}c@{}}#1\end{tabular}} \begin{table*}[htpb] \centering \caption{Distance to ground truth (normalized), averaged over 20 problem instances, for different percentages of missing data, missing data patterns and noise levels~$\sigma$. Best results are marked in bold.} \setlength\tabcolsep{0.1cm} {\footnotesize \rowcolors{2}{blue!15}{white!10} \begin{tabular}{r | rrrrrr | rr | rr | r} Missing\\data (\%) & PCP~\cite{candes-etal-acm-2011} & WNNM~\cite{gu-2016} & Unifying~\cite{cabral-etal-iccv-2013} & LpSq~\cite{nie-2012} & S12L12~\cite{shang-etal-2018} & S23L23~\cite{shang-etal-2018} & IRNN~\cite{canyi2015} & APGL~\cite{toh-yun-2010} & $\norm{\cdot}_*$~\cite{boyd-etal-2011} & $\mathcal{R}$~\cite{larsson-olsson-ijcv-2016} & Our\\ \toprule 0 & \textbf{0.0000} & \textbf{0.0000} & \textbf{0.0000} & \textbf{0.0000} & 0.0002 & 0.0002 & \textbf{0.0000} & \textbf{0.0000} & 0.1727 & \textbf{0.0000} & \textbf{0.0000} \\ 10 &0.0885 & 0.0028 & 0.0713 & 0.0213 & 0.0309 & 0.0071 & \textbf{0.0000} & \textbf{0.0000} & 0.1998 & \textbf{0.0000} & \textbf{0.0000} \\ 20 &0.2720 & 0.2220 & 0.1491 & 0.0170 & 0.0412 & 0.0209 & \textbf{0.0000} & \textbf{0.0000} & 0.2223 & 0.0128 & \textbf{0.0000} \\ 30 &0.7404 & 0.4787 & 0.7499 & 0.0003 & 0.0818 & 0.0895 & \textbf{0.0000} & 0.0014 & 0.2897 & 0.2346 &
\textbf{0.0000} \\ 40 &1.0000 & 0.6097 & 0.9553 & 0.1083 & 0.1666 & 0.1360 & \textbf{0.0000} & 0.0017 & 0.3374 & 0.2198 & \textbf{0.0000} \\ \multirow{-6}{*}{\STAB{\rotatebox[origin=c]{90}{\scriptsize Uniform ($\sigma=0.0$)}}\phantom{AA}} 50 &1.0000 & 0.7170 & 1.0000 & 0.0315 & 0.1376 & 0.1001 & 0.0003 & 0.0301 & 0.4266 & 0.2930 & \textbf{0.0000} \\ \midrule 0 & \textbf{0.0000} & \textbf{0.0000} & \textbf{0.0000} & \textbf{0.0000} & 0.0002 & 0.0002 & \textbf{0.0000} & \textbf{0.0000} & 0.1810 & \textbf{0.0000} & \textbf{0.0000} \\ 10 & 0.3160 & 0.2734 & 0.1534 & 0.0839 & 0.1296 & 0.1233 & 0.0772 & 0.0834 & 0.2193 & 0.0793 & \textbf{0.0658} \\ 20 & 0.4877 & 0.4499 & 0.3017 & 0.1650 & 0.2389 & 0.2456 & \textbf{0.1010} & 0.1786 & 0.3436 & 0.2494 & 0.1018 \\ 30 & 0.5821 & 0.5395 & 0.5486 & 0.2520 & 0.3289 & 0.3160 & \textbf{0.1189} & 0.2572 & 0.4299 & 0.3421 & \textbf{0.1189} \\ 40 & 0.7072 & 0.6317 & 0.7376 & 0.2853 & 0.4084 & 0.4110 & 0.1417 & 0.2913 & 0.4825 & 0.5004 & \textbf{0.1385} \\ \multirow{-6}{*}{\STAB{\rotatebox[origin=c]{90}{\scriptsize Tracking ($\sigma=0.0$)}}\phantom{AA}} 50 & 0.8125 & 0.7257 & 0.9521 & 0.4178 & 0.4267 & 0.4335 & 0.2466 & 0.4047 & 0.5754 & 0.6503 & \textbf{0.2214} \\ \midrule 0 & 0.0409 & 0.0207 & 0.0407 & 0.0450 & 0.0437 & 0.0435 & 0.0448 & 0.0191 & 0.1581 & \textbf{0.0166} & \textbf{0.0166} \\ 10 &0.3157 & 0.2734 & 0.1585 & 0.0848 & 0.0529 & 0.0518 & 0.0625 & 0.0696 & 0.2312 & 0.0488 & \textbf{0.0438} \\ 20 &0.4771 & 0.4338 & 0.3480 & 0.1394 & 0.0995 & \textbf{0.0982} & 0.1090 & 0.1188 & 0.3109 & 0.2071 & 0.0983 \\ 30 &0.5801 & 0.5225 & 0.4726 & 0.2026 & 0.2468 & 0.2592 & 0.1646 & 0.1993 & 0.3820 & 0.3465 & \textbf{0.1475} \\ 40 &0.7122 & 0.6148 & 0.8638 & 0.2225 & 0.3292 & 0.3252 & 0.1357 & 0.2110 & 0.4800 & 0.4599 & \textbf{0.1273} \\ \multirow{-6}{*}{\STAB{\rotatebox[origin=c]{90}{\scriptsize Tracking ($\sigma=0.1$)}}\phantom{AA}} 50 &0.7591 & 0.6819 & 0.9216 & 0.4105 & 0.4883 & 0.4811 & 0.3342 & 0.3639 & 0.5652 & 0.5930 & 
\textbf{0.3329} \\ \bottomrule \end{tabular} \label{tab:synth} } \end{table*} We are particularly interested in the case \eqref{eq:fmu} since, with this choice, it is known that the global minimizer of \eqref{eq:generalregformulation} is the same as that of $\mu {\mathrm{ rank}}(X) +\|\mathcal{A} X - b\|^2$ if $\|\mathcal{A}\| < 1$; see \cite{carlsson2016convexification} for a proof. Note that $f_\mu$ is a special case of the MCP class \cite{zhang2010nearly}. With this choice $\tilde{\mathcal{R}}(B,C)$ is differentiable, and its second derivatives are defined everywhere except at the transition $\frac{\|B_i\|^2+\|C_i\|^2}{2} = \sqrt{\mu}$, where the function switches from quadratic to constant. \begin{figure}[htb] \centering \resizebox{!}{50mm}{\input{bias_mod.tex}} \caption{Singular values obtained when minimizing $\|X-X_0\|_F^2$ with the four regularizers: $\mathcal{R}(X)$ with $f=f_\mu$, $\|X\|_{1/2}^{1/2}$, $\|X\|_{2/3}^{2/3}$ and $\|X\|_*$. Large singular values are left unchanged by $\mathcal{R}$.} \label{fig:bias} \end{figure} We conclude this section by comparing the shrinking bias of our approach with that of three other regularizers that can also be optimized over the factorization. Theorem~\ref{thm:main} makes it possible to compute the global optimizer of $\tilde{\mathcal{R}}(B,C)+\|BC^T-X_0\|_F^2$, since the equivalent problem $\mathcal{R}(X)+\|X-X_0\|_F^2$ has a closed-form solution in the $X$-parameterization. It is shown in~\cite{larsson-olsson-ijcv-2016} that with $f=f_\mu$ the solution is obtained by thresholding the singular values at $\sqrt{\mu}$. Similarly, closed-form solutions are also available when regularizing $\|X-X_0\|_F^2$ with $\|\cdot\|_{1/2}$, $\|\cdot\|_{2/3}$ and $\|\cdot\|_*$ \cite{shang-etal-2018}. In Figure~\ref{fig:bias} we show the singular values obtained when regularizing $\|X-X_0\|_F^2$ with these four options, and for comparison the singular values of $X_0$.
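To make the bias concrete, the closed-form rules just mentioned can be applied to an illustrative set of singular values. The sketch below assumes the hard-thresholding rule at $\sqrt{\mu}$ for $f=f_\mu$ stated above, and the standard soft-thresholding rule for the nuclear-norm proximal problem $\min_X \lambda\|X\|_* + \tfrac{1}{2}\|X-X_0\|_F^2$:

```python
import numpy as np

s0 = np.array([10.0, 8.0, 6.0, 1.0, 0.5])  # singular values of X_0 (illustrative)
mu = 4.0    # f_mu weight: hard threshold at sqrt(mu) = 2
lam = 2.0   # nuclear-norm weight: soft threshold at lam

hard = np.where(s0 >= np.sqrt(mu), s0, 0.0)  # f_mu: large values untouched
soft = np.maximum(s0 - lam, 0.0)             # nuclear norm: every value shrunk

print(hard)  # [10.  8.  6.  0.  0.]
print(soft)  # [8. 6. 4. 0. 0.]
```

Both rules suppress the two small singular values, but only the hard rule leaves the retained ones untouched; the soft rule subtracts $\lambda$ from each of them, which is the shrinking bias visible in Figure~\ref{fig:bias}.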
For all methods we have selected regularization weights as small as possible so that the five smallest singular values are completely suppressed, which minimizes the bias. While all choices, except $\mathcal{R}$, subtract a part from the singular values that should be retained, the Schatten norms reduce the bias significantly compared to the nuclear norm. For the Schatten norms the bias is larger for singular values that are close to the threshold since the derivative of $\sigma^q$, $0<q<1$, decreases with increasing $\sigma$. For problem instances where there is a clear separation in size between singular values that should be retained and those that should be suppressed, it is likely that this can be done with negligible bias. Since $f'_\mu(\sigma)=0$ when $\sigma \geq \sqrt{\mu}$ this method does not affect the first five singular values. \section{Overparameterization and Optimality}\label{sec:optloc} The results of the previous section show that a global optimizer $(B,C)$ of \eqref{eq:generalbilin} gives a solution $BC^T$ which is globally optimal in~\eqref{eq:generalregformulation}. On the other hand, optimizing \eqref{eq:generalbilin} over $B$ and $C$ introduces additional stationary points, due to the non-linear parameterization, that are not present in \eqref{eq:generalregformulation}. One such point is $(B,C)=(0,0)$ where the gradients of $\|\mathcal{A} BC^T - b\|^2$ with respect to $B$ and $C$ vanish (in contrast to the gradient w.r.t.{}~$X$). In this section we show that by overparametrizing, in the sense that we use $B$ and $C$ with more columns than the rank of the solution we seek, it is still possible to use properties of \eqref{eq:generalregformulation} to show optimality in \eqref{eq:generalbilin}. We will exclusively use $f_\mu$ from \eqref{eq:fmu}, assume that $B$ and $C$ have $2k$ columns and study locally optimal solutions with ${\mathrm{ rank}}(BC^T)<k$. 
The size of $B$ and $C$ makes it possible to parametrize line segments between such points and utilize convexity properties; see the proof of Theorem~\ref{thm:lowrank-opt}. The following result (which is proven in Appendix~\ref{sec:proofs}) gives conditions that ensure that local minimality in \eqref{eq:generalbilin} implies that \eqref{eq:generalregformulation} grows in all ``low rank'' directions. \begin{theorem}\label{thm:dirderiv} Assume that~$(\bar{B},\bar{C})\in \mathbb{R}^{m\times 2k} \times \mathbb{R}^{n\times 2k} $, where $\bar{B}=U\sqrt{\Sigma}$ and $\bar{C}=V\sqrt{\Sigma}$, and $\bar{X}=U\Sigma V^{T}$, is a local minimizer of \eqref{eq:generalbilin} with ${\mathrm{ rank}}(\bar{X})<k$ and let ${\mathcal{N}}(X) = \mathcal{R}(X)+\|\mathcal{A} X- b\|^2.$ Then $\mathcal{R}(\bar{X}) = \tilde{\mathcal{R}}(\bar{B},\bar{C})$ and the directional derivatives ${\mathcal{N}}'_{\Delta X}(\bar{X})$, where $\Delta X = \tilde{X}-\bar{X}$ and ${\mathrm{ rank}}(\tilde{X}) \leq k$, are non-negative. \end{theorem} Note that there can be local minimizers for which $\tilde{\mathcal{R}}(\bar{B},\bar{C}) > \mathcal{R}(\bar{B}\bar{C}^T)$ since $\tilde{\mathcal{R}}$ is non-convex. From an algorithmic point of view we can, however, escape such points by taking the current iterate and recomputing the factorization of $\bar{B}\bar{C}^T$ using SVD. If the SVD of $\bar{B}\bar{C}^T$ is $\sum_{i=1}^r \sigma_i U_i V_i^T$, we update $\bar{B}$ and $\bar{C}$ to $\bar{B}_i = \sqrt{\sigma_i}U_i$ and $\bar{C}_i = \sqrt{\sigma_i}V_i$, which we know reduces the energy and gives $\tilde{\mathcal{R}}(\bar{B},\bar{C}) = \mathcal{R}(\bar{B}\bar{C}^T)$. \begin{figure*}[t!]
\centering \includegraphics[width=0.495\textwidth]{pOSE_door_best_vs_second_best.jpg}% \includegraphics[width=0.495\textwidth]{pOSE_vercingetorix_best_vs_second_best_v3.jpg}\\ \includegraphics[width=0.95\textwidth]{pOSE_combo_rank_vs_datafit_all_methods.png}% \caption{Comparison of reprojection error obtained using the bilinear formulation and ADMM, for datasets \emph{Door} and \emph{Vercingetorix}~\cite{olsson-engqvist-scia-2011}. The red circles mark the feature points and the green dots the projected image points obtained from the different methods. The best rank~4 solution for the respective method was used. The control parameter $\eta=0.5$ in both experiments.} \label{fig:pose_reproj} \end{figure*} Theorem~\ref{thm:dirderiv} allows us to derive optimality conditions using the properties of \eqref{eq:generalregformulation}. As a simple example, consider the case where $\|\mathcal{A} X\|^2 \geq \|X\|^2$, which makes \eqref{eq:generalregformulation} convex \cite{carlsson2016convexification}, and let $B$ and $C$ have $2k$ columns. Suppose that we find a local minimizer $(\bar{B},\bar{C})$ fulfilling the assumptions of Theorem~\ref{thm:dirderiv}. Then the derivative along a line segment towards any other low rank matrix is non-decreasing, and therefore $\bar{B}\bar{C}^T$ is the global optimum of \eqref{eq:generalregformulation} over the set of matrices with ${\mathrm{ rank}} \leq k$ by convexity. Below we give a result that goes beyond convexity and applies to the important class \cite{recht-etal-siam-2010} of problems that obey the RIP constraint \eqref{eq:RIP}. Let $\mathcal{A}^*$ denote the adjoint operator of~$\mathcal{A}$, then: \begin{theorem}\label{thm:lowrank-opt} Assume that $(\bar{B},\bar{C})$ is a local minimizer of \eqref{eq:generalbilin}, fulfilling the assumptions of Theorem~\ref{thm:dirderiv}. 
If the singular values of $Z = (I-\mathcal{A}^* \mathcal{A})\bar{B}\bar{C}^T+\mathcal{A}^*b$ fulfill $\sigma_i(Z) \notin [(1-\delta_{2k})\sqrt{\mu}, \frac{\sqrt{\mu}}{(1-\delta_{2k})}]$ then $\bar{B} \bar{C}^T$ is the solution of \eqref{eq:regminprobl} and \eqref{eq:rankminprobl}. \end{theorem} The proof builds on the results of \cite{olsson-etal-iccv-2017} and is given in Appendix~\ref{sec:proofs}. The assumption that the singular values of $Z$ are not too close to the threshold $\sqrt{\mu}$ is a natural restriction which is valid when the noise level is not too large. In case of exact data, \ie{}~$b = \mathcal{A} X_0$, where ${\mathrm{ rank}}(X_0) = r$, it is trivially fulfilled for any choice of $\mu$ such that $\sqrt{\mu}< (1-\delta_{2k})\sigma_r(X_0)$, since we then have $Z=X_0$. For additional details on $Z$'s dependence on noise see \cite{carlsson2018unbiased}. The above result is similar in spirit to those of \cite{recht-etal-siam-2010,haeffele-vidal-arxiv-2017}, which show that, in the convex case, having $2k$ columns and rank $2k-1$ is enough to ensure that a local minimizer is global. For the proof in our non-convex case we need rank at most $k-1$. Presently, it is not clear whether our assumption can be relaxed to match that of the convex case. \section{An Iterative Reweighted VarPro Algorithm}\label{sec:implement} In this section we give a brief overview of our algorithm for minimizing \eqref{eq:generalbilin}. A more detailed description is given in Appendix~\ref{sec:implementationdetails}. Given a current iterate, $B^{(t)}$ and $C^{(t)}$, the first step of our algorithm is to replace the term $\tilde{\mathcal{R}}(B,C)$ with a quadratic function. To do this we note that, by the Taylor expansion $ f(x) \approx f(x_0)+f'(x_0)(x-x_0), $ minimizing $f(x)$ around $x_0$ is roughly the same as minimizing $f'(x_0)x$ (the remaining terms being constant).
Inserting $x_0 = \frac{\|B^{(t)}_i\|^2+\|C^{(t)}_i\|^2}{2}$ and $x = \frac{\|B_i\|^2+\|C_i\|^2}{2}$ now gives our approximation \begin{equation} \sum_{i=1}^k w^{(t)}_i(\|B_i\|^2+\|C_i\|^2)+\|\mathcal{A} B C^T - b\|^2, \label{eq:weighedapprox} \end{equation} where $w^{(t)}_i=\frac{1}{2}f'\!\left((\|B^{(t)}_i\|^2+\|C^{(t)}_i\|^2)/2\right)$. Here $B_i^{(t)}$ and $C_i^{(t)}$ are the $i$:th columns of $B^{(t)}$ and $C^{(t)}$, respectively. Minimizing \eqref{eq:weighedapprox} over $C$ is now a least squares problem with a closed-form solution. Inserting this solution into the original problem gives a nonlinear problem in $B$ alone, which is what VarPro solves. We use the so-called Ruhe and Wedin (RW2) approximation with a damping term $\lambda \|B-B^{(t)}\|_F^2$; see~\cite{hong-etal-cvpr-2017} for details. In each step of the VarPro algorithm we update the weights $w_i^{(t)}$. As previously mentioned, there can be stationary points for which $\tilde{\mathcal{R}}(B,C) > \mathcal{R}(B C^T)$. In each iteration we therefore take the current iterate and recompute the factorization of $B^{(t)}C^{(t)T}$ using SVD. If the SVD of $B^{(t)}C^{(t)T}$ is $\sum_{i=1}^r \sigma_i U_i V_i^T$, we update $B^{(t)}$ and $C^{(t)}$ to $B^{(t)}_i = \sqrt{\sigma_i}U_i$ and $C^{(t)}_i = \sqrt{\sigma_i}V_i$, which we know reduces the energy and gives $\tilde{\mathcal{R}}(B^{(t)},C^{(t)}) = \mathcal{R}(B^{(t)}C^{(t)T})$. Our approach can be seen as iteratively reweighted nuclear norm minimization~\cite{canyi2015}; however, our bilinear formulation allows us to use a quadratic approximation, thus benefiting from second-order convergence in the neighborhood of a local minimum.
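The core loop can be sketched in a few lines. The sketch below is a simplification, not the paper's implementation: it uses the plain data term $\|BC^T-M\|_F^2$ (i.e.\ $\mathcal{A}$ is the identity), replaces the damped RW2/VarPro step with alternating closed-form least squares, and assumes the MCP-type form $f_\mu(x)=\mu-\max(\sqrt{\mu}-x,0)^2$ for \eqref{eq:fmu}, which matches the quadratic-then-constant behavior described above; all dimensions and parameter values are illustrative:

```python
import numpy as np

def fmu_prime(x, mu):
    # Derivative of the assumed f_mu(x) = mu - max(sqrt(mu) - x, 0)^2:
    # quadratic region below sqrt(mu), identically zero above it.
    return 2.0 * np.maximum(np.sqrt(mu) - x, 0.0)

def reweighted_step(B, C, M, mu):
    # 1. Weights w_i = (1/2) f'((||B_i||^2 + ||C_i||^2) / 2).
    x0 = 0.5 * (np.sum(B**2, axis=0) + np.sum(C**2, axis=0))
    W = np.diag(0.5 * fmu_prime(x0, mu))
    # 2. Minimize sum_i w_i(||B_i||^2 + ||C_i||^2) + ||B C^T - M||_F^2
    #    by alternating closed-form least squares in C and B.
    C = np.linalg.solve(B.T @ B + W, B.T @ M).T
    B = np.linalg.solve(C.T @ C + W, C.T @ M.T).T
    # 3. Rebalance via an SVD of B C^T so that R~(B, C) = R(B C^T).
    U, s, Vt = np.linalg.svd(B @ C.T, full_matrices=False)
    k = B.shape[1]
    return U[:, :k] * np.sqrt(s[:k]), Vt[:k].T * np.sqrt(s[:k])

rng = np.random.default_rng(0)
U0 = np.linalg.qr(rng.standard_normal((20, 2)))[0]
V0 = np.linalg.qr(rng.standard_normal((30, 2)))[0]
M = (U0 * [10.0, 5.0]) @ V0.T + 0.01 * rng.standard_normal((20, 30))

k, mu = 6, 1.0  # over-parameterized: k = 6 columns, true rank 2
B, C = rng.standard_normal((20, k)), rng.standard_normal((30, k))
for _ in range(20):
    B, C = reweighted_step(B, C, M, mu)

s = np.linalg.svd(B @ C.T, compute_uv=False)
print(np.round(s[:4], 2))  # the two dominant singular values remain
```

Columns whose size exceeds $\sqrt{\mu}$ receive zero weight and are fitted without bias, while small columns are ridge-penalized and driven to zero over the iterations, so the recovered rank drops to the true rank without shrinking the retained singular values.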
\iffalse \begin{figure*}[t] \centering \def38mm{35.6mm} \setlength\tabcolsep{0.01cm} \begin{tabular}{ccccc} SCAD \cite{fan2001variable}: & Log \cite{friedman-2012}: & $f_{\mu}$: & ETP \cite{gao-etal-AAAI-2011}: & Geman \cite{geman-yang-1995}: \\ \includegraphics[width=38mm]{outputSCAD_energy} & \includegraphics[width=38mm]{outputLog_energy} & \includegraphics[width=38mm]{outputfmu_energy} & \includegraphics[width=38mm]{outputETP_energy} & \includegraphics[width=38mm]{outputGeman_energy} \\ \end{tabular} \caption{Energy minimization comparison for the synthetic experiment in Section~\ref{sec:synth}, for the missing data energy~\eqref{eq:missingdata2} and different robust penalties. Using the bilinear formulation (dashed green line) a smaller energy is obtained for all noise levels, except for Geman, in which case ADMM converges to a high rank solution.} \label{fig:synth_experiment} \end{figure*} \fi \section{Experiments} In this section we will show the versatility and strength of the proposed method, focusing on computer vision problems. In Section~\ref{sec:pOSE} we show an example where state-of-the-art methods fail to reach a value close to the global optimum. We include two more examples of real problems in Appendix~\ref{sec:moreexp}: background extraction and photometric stereo. In both cases our method shows superior performance. In the main paper we focus on the trade-off between datafit and rank, but show, in the examples in the supplementary material, the added benefit of faster convergence of the proposed method. This is done by minimizing the same energy with ADMM and the proposed method, where the splitting schemes can be tediously slow. In all experiments our proposed method is initialized randomly, with zero mean and unit variance.
\subsection{Synthetic Missing Data Problem}\label{sec:synth} Let~$\odot$ denote the Hadamard product, and consider the missing data formulation \begin{equation} \min_{\mat{X}}\mu{\mathrm{ rank}}(\mat{X}) + \norm{\mat{W}\odot(\mat{X}-\mat{M})}_F^2, \label{eq:missingdata2} \end{equation} where~$\mat{M}$ is a measurement matrix and~$\mat{W}$ a missing data mask with entries $w_{ij}=1$ if the entry is known, and zero otherwise. In low-level vision applications such as denoising and image inpainting, a uniformly random missing data pattern is often a reasonable approximation of the distribution; however, for structure from motion, the missing data pattern is often highly structured. To this end, we investigate two kinds of patterns: uniformly random and ``tracking failure''. In order to construct realistic patterns of tracking failure, we use the method in~\cite{larsson-olsson-cvpr-2017}. This is done by randomly selecting whether a track should have missing data (with uniform probability), and then selecting (with uniform probability, starting after the first few frames) in which image the tracking failure occurs. If a track is lost, it is not restarted. \begin{figure}[h!] \centering \includegraphics[width=0.475\textwidth]{rank_vs_datafit_v2} \caption{Rank vs datafit for the synthetic experiment in Section~\ref{sec:synth}. No true low rank solution using LpSq~\cite{nie-2012} could be found, regardless of the choice of parameters.} \label{fig:synth_rank_vs_datafit} \end{figure} We generate random ground truth matrices~\mbox{$\mat{M}_{0}\in\mathbbm{R}^{32\times 512}$} of rank~4, which can be expressed as $\mat{M}_{0}=\mat{U}\mat{V}^{T}$, where $\mat{U}\in\mathbbm{R}^{32\times 4}$ and~$\mat{V}\in\mathbbm{R}^{512\times 4}$. The entries of $\mat{U}$ and $\mat{V}$ are normally distributed with zero mean and unit variance. The measurement matrix is~$\mat{M}=\mat{M}_0+\mat{N}$, where $\mat{N}$ simulates noise and has normally distributed entries with zero mean and variance~$\sigma^2$.
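The data generation just described is straightforward to reproduce. The sketch below builds $\mat{M}$ and both mask types; the per-track failure probability and the number of initial safe frames are illustrative choices, since the exact values used in~\cite{larsson-olsson-cvpr-2017} are not restated here:

```python
import numpy as np

rng = np.random.default_rng(0)
F, n, r, sigma = 32, 512, 4, 0.1

# Ground truth M0 = U V^T of rank 4; entries of U and V are N(0, 1).
M0 = rng.standard_normal((F, r)) @ rng.standard_normal((n, r)).T
M = M0 + sigma * rng.standard_normal((F, n))  # noisy measurements

# Uniformly random mask: each entry is missing with probability p.
p = 0.3
W_uniform = (rng.random((F, n)) >= p).astype(float)

# "Tracking failure" mask: a track may fail at a random frame
# (after the first few) and is never restarted.
first_safe = 3  # illustrative: tracks always survive the first frames
W_track = np.ones((F, n))
for j in range(n):
    if rng.random() < p:                      # this track fails
        t = int(rng.integers(first_safe, F))  # frame where it is lost
        W_track[t:, j] = 0.0
```

Note that with a per-track failure probability the overall fraction of missing entries differs from the entry-wise rate of the uniform mask; calibrating the two to the same missing-data percentage is a separate step.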
\iffalse All five robust penalties in Figure~\ref{fig:objfuns}, are used to relax~\eqref{eq:missingdata2}, which yields \begin{equation} \min_{\mat{X}}\mathcal{R}(\mat{X}) + \norm{\mat{W}\odot(\mat{X}-\mat{M})}_F^2, \end{equation} where~$\mathcal{R}(\mat{X})=\sum_if(\sigma_i(\mat{X}))$. As a special case of MCP we use~$f_\mu$, defined in~\eqref{eq:fdef}, and run the bilinear formulation to convergence. The ADMM equivalent is then given the same runtime in seconds. Note that the bilinear formulation and the ADMM equivalent minimize the same energy. In all tests, the number of columns for the bilinear methods are set to $k=8$. The results are shown in Figure~\ref{fig:synth_experiment}. The bilinear method is able to find a better optimum for all penalties and noise levels, with the exception for Geman and high noise levels, in which case ADMM converges to a high rank solution larger than~$k$. In such cases, the distance to ground truth of the solution obtained by ADMM is generally larger than for the bilinear methods. \fi \begin{figure*}[thb] \centering \setlength\tabcolsep{0.12cm} \def0.5{0.95} \def38mm{42.16mm} \def20mm{20mm} \newcommand{\vcenteredinclude}[1]{\begingroup \setbox0=\hbox{\includegraphics[width=20mm]{#1}}% \parbox{\wd0}{\box0}\endgroup} \begin{tabular}{cccc} \emph{Drink}\vcenteredinclude{mocap_drink.png} & \emph{Pickup}\vcenteredinclude{mocap_pickup.png} & \emph{Stretch}\vcenteredinclude{mocap_stretch.png} & \emph{Yoga}\vcenteredinclude{mocap_yoga.png} \vspace{-0.15cm} \\ \includegraphics[width=38mm]{all_methods_mocap_datafit_1} & \includegraphics[width=38mm]{all_methods_mocap_datafit_2} & \includegraphics[width=38mm]{all_methods_mocap_datafit_3} & \includegraphics[width=38mm]{all_methods_mocap_datafit_4} \\ \end{tabular} \caption{\emph{Top row:} Example frames from the MOCAP dataset of the \emph{drink}, \emph{pickup}, \emph{stretch} and \emph{yoga} sequences. 
\emph{Last row:} The bilinear method finds the same or a better datafit compared to the other methods for all ranks.} \label{fig:MOCAP} \vspace{-0.4cm} \end{figure*} \iffalse To get a better understanding of the energies of the synthetic experiment we show the rank and distance to ground truth, in~Figure~\ref{fig:synth_experiment}. For SCAD, Log and $f_\mu$ the bilinear consistently finds a small rank, except for very high noise levels $\sigma>0.2$. In all cases ADMM struggles to find a (sufficiently) low rank solution, and the distance to ground truth is significantly larger for ADMM. In the case of ETP, we see an impact of shrinking bias -- to get better performance one would have to use a different regularizing parameter~$\mu$. This is seen in the plots as ADMM consistently returns a full rank solution, and the bilinear method a rank~$k=8$ solution for noise levels $\sigma>0.1$. The main point, however, is that the energy minimizing of the bilinear method still performs better -- the other parameters, such as~$\mu$, and how well the problem formulation is able to reconstruct the ground truth, is secondary. A similar problem occurs for high noise levels with Geman, however, the shrinking bias is not as dominant. Note, that this phenomenon does not occur for Log, which, most likely, is due to the sublinearity, causing a smaller impact of the penalization of large singular values. \fi Our proposed method is compared to a variety of different methods~\cite{cabral-etal-iccv-2013,candes-etal-acm-2011,gu-2016,nie-2012,shang-etal-2018,canyi2015,toh-yun-2010,boyd-etal-2011,larsson-olsson-ijcv-2016}. For the methods that need an initial estimate of the rank as input, the rank estimation heuristic by Shang~\etal{}~\cite{shang-etal-2018} is used. The regularization parameter is set to $\lambda=\sqrt{\max(m,n)}$, given a sought $m\times n$ matrix, as proposed by~\cite{candes-etal-acm-2011,shang-etal-2018}. 
Where other parameters are required, the values recommended by the respective authors have been used. The number of columns, for our proposed method, is set to $k=8$, \ie{}~twice the rank of the original matrix~$M_0$. We exclusively use the~$f_\mu$ regularization~\eqref{eq:fmu}, and use $\sqrt{\mu}=\lambda$. Since $f_\mu$ is a special case of MCP, it is used for IRNN as well. Furthermore, we include the results for regularizing with the nuclear norm~\cite{boyd-etal-2011} and $f_\mu$~\eqref{eq:fmu} using ADMM, as proposed in~\cite{larsson-olsson-ijcv-2016}. Note that while ADMM comes without optimality guarantees, it has been shown to work well for several computer vision problems in practice~\cite{larsson-olsson-ijcv-2016,olsson-etal-iccv-2017}. Several of the compared methods solve the robust PCA problem and thus also include a sparse component, which is not taken into account. The results are shown in Table~\ref{tab:synth}. Note that most algorithms perform significantly better for the uniformly random missing data pattern than for the structured one. Our proposed method outperforms all other methods in this comparison. Since the final rank of the estimated matrix is not necessarily the same as that of~$M_0$, we show the rank vs datafit obtained when varying the regularization parameter~$\lambda$ in Figure~\ref{fig:synth_rank_vs_datafit}. It is evident from the results that the only candidates that yield an acceptable result for low rank solutions are ADMM with $f_\mu$, IRNN with MCP and our proposed method.
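The two quantities plotted in Figure~\ref{fig:synth_rank_vs_datafit} are simple functions of an estimate $X$; a sketch of how they can be computed (the relative tolerance used to decide the numerical rank is an illustrative choice):

```python
import numpy as np

def rank_and_datafit(X, M, W, tol=1e-6):
    """Numerical rank of X and the datafit ||W .* (X - M)||_F^2."""
    s = np.linalg.svd(X, compute_uv=False)
    rank = int(np.sum(s > tol * s[0]))
    datafit = float(np.sum((W * (X - M)) ** 2))
    return rank, datafit

# Tiny example: a rank-1 estimate against noisy rank-1 data.
rng = np.random.default_rng(0)
u, v = rng.standard_normal(5), rng.standard_normal(7)
M = np.outer(u, v) + 0.01 * rng.standard_normal((5, 7))
W = (rng.random((5, 7)) >= 0.3).astype(float)  # observed entries
X = np.outer(u, v)                             # candidate estimate

r, d = rank_and_datafit(X, M, W)
print(r, round(d, 4))  # rank 1 and a small residual
```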
\subsection{pOSE: Pseudo Object Space Error} \label{sec:pOSE} The Pseudo Object Space Error (pOSE) objective combines affine and projective camera models \begin{align} \ell_{\textsf{OSE}} &= \sum_{(i,j)\in\Omega} \norm{(\mat{P}_{i,1:2}\tilde{\vec{x}}_j-(\vec{p}^{T}_{i,3}\tilde{\vec{x}}_j)\vec{m}_{i,j}) }^2, \\ \ell_{\textsf{Affine}} &= \sum_{(i,j)\in\Omega} \norm{\mat{P}_{i,1:2}\tilde{\vec{x}}_j-\vec{m}_{i,j}}^2, \\ \ell_{\textsf{pOSE}} &= (1-\eta)\ell_{\textsf{OSE}}+\eta\ell_{\textsf{Affine}}, \end{align} where~$\ell_{\textsf{OSE}}$ is the object space error and~$\ell_{\textsf{Affine}}$ is the affine projection error. Here $\mat{P}_{i,1:2}$ denotes the first two rows, $\vec{p}_{i,3}$ the third row of the $i$:th camera matrix, and $\tilde{\vec{x}}_j$ is the $j$:th 3D point in homogeneous coordinates. The control parameter~$\eta\in[0,1]$ determines the impact of the respective camera model. This objective was introduced in~\cite{hong-zach-cvpr-2018} to be used in a first stage of an initialization-free bundle adjustment pipeline, optimized using VarPro. The~$\ell_{\textsf{pOSE}}$ objective is linear, and acts on low-rank components $\mat{P}$ and~$\mat{X}$, which are constrained by \mbox{${\mathrm{ rank}}(PX^{T})=4$}. Instead of enforcing the rank constraint, we replace it as before with a relaxation. By not enforcing the rank constraint we demonstrate the ability of the methods to make accurate trade-offs between minimizing the rank and fitting the data. Since the objective now becomes more complex, and is no longer compatible with the missing data formulations, only IRNN and APGL are directly applicable, as well as the ADMM approach using $f_\mu$ and nuclear norm. 
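The pOSE objective is easy to state in code. The sketch below evaluates $\ell_{\textsf{pOSE}}$ for a single camera with all points observed; the low-rank factorization of the camera/point matrices and the sum over $\Omega$ are omitted, and the test data is synthetic:

```python
import numpy as np

def pose_loss(P, Xh, m, eta):
    # P: (3, 4) camera, Xh: (4, n) homogeneous points, m: (2, n) observations.
    proj2 = P[:2] @ Xh  # P_{i,1:2} x~_j for all points j
    depth = P[2] @ Xh   # p_{i,3}^T x~_j
    l_ose = np.sum((proj2 - depth * m) ** 2)  # object space error
    l_affine = np.sum((proj2 - m) ** 2)       # affine projection error
    return (1 - eta) * l_ose + eta * l_affine

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 50)) + np.array([[0.0], [0.0], [5.0]])  # in front
Xh = np.vstack([X, np.ones((1, 50))])
P = np.hstack([np.eye(3), np.zeros((3, 1))])  # canonical projective camera
m = (P[:2] @ Xh) / (P[2] @ Xh)                # exact pinhole projections

print(pose_loss(P, Xh, m, eta=0.0))  # OSE term vanishes on exact data
print(pose_loss(P, Xh, m, eta=0.5))  # affine term contributes for eta > 0
```

On exact projective data the OSE term is (numerically) zero while the affine term is not, which is how $\eta$ interpolates between the two camera models.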
We use two real-life datasets with varying numbers of camera locations and 3D points: \emph{Door} with 12 images, resulting in a sought matrix of size $36\times 8850$, and \emph{Vercingetorix}~\cite{olsson-engqvist-scia-2011} with 69 images, resulting in a sought matrix of size~$207\times 1148$, both of which have rank~4.\footnote{The datasets are available here: \url{http://www.maths.lth.se/matematiklth/personal/calle/dataset/dataset.html}.} As in the synthetic experiment from Section~\ref{sec:synth}, the regularization parameter is varied and the resulting rank and datafit are stored and reported in Figure~\ref{fig:pose_reproj}. To visualize the results, we consider the best rank~4 approximations and show the reprojected points together with the corresponding measured points obtained from the best method (ours in both cases) and the second best (IRNN in both cases); see Figure~\ref{fig:pose_reproj}. As is readily seen by ocular inspection, the rank~4 solution obtained by our proposed method significantly outperforms those of the other state-of-the-art methods. \iffalse As before, we let the bilinear method run until convergence, and let ADMM execute the same time in seconds. As a comparison we use the nuclear norm relaxation and the non-convex rank regularization. The results of the experiment are shown in Figure~\ref{fig:pose1}. \begin{figure}[h!] \centering \includegraphics[width=0.495\textwidth]{pOSE_door1_energy}\\ \caption{The average energy for the pOSE problem over 50 instances with random initializations, for test sequence \emph{Door}. (Note that the energy for ADMM-Rank and ADMM-$\mathcal{R}_\mu$ are very similar).} \label{fig:pose1} \end{figure} Note that the bilinear method optimizes the same energy as ADMM-$\mathcal{R}_\mu$, and that, despite the initial fast lowering of the objective value, the ADMM approach fails to reach the global optimum, within the allotted 150 seconds. This holds true for all methods employing ADMM.
In all experiments, the control parameter $\eta=0.5$, and the~$\mu$ parameter was chosen to be smaller than all non-zero singular values of the best known optimum (obtained using VarPro). For a fair comparison, the $\mu$-value for the nuclear norm relaxation, was modified due to the shrinking bias, and was chosen to be the smallest value of~$\mu$ for which a solution with accurate rank was obtained. Due to this modification, the energy it minimizes is not directly correlated to the others, but is shown for completeness. Furthermore, the iteration speed of ADMM is significantly faster than for VarPro, and therefore we show the elapsed time (in seconds) for all methods. The reported values are averaged over 50 instances with random initialization. \fi \subsection{Non-Rigid Structure From Motion} In this section we test our approach on non-rigid reconstruction (NRSfM) with the CMU Motion Capture (MOCAP) dataset. In NRSfM, the complexity of the deformations is controlled by some mild assumptions on the object shapes. Bregler \etal{}~\cite{bregler-etal-cvpr-2000} suggested that the set of all possible configurations of the objects is spanned by a low-dimensional linear basis of dimension~$K$. In this setting, the non-rigid shapes $X_i\in\mathbbm{R}^{3\times n}$ can be represented as $X_i=\sum_{k=1}^Kc_{ik}B_k$, where $B_k\in\mathbbm{R}^{3\times n}$ are the basis shapes and $c_{ik}\in\mathbbm{R}$ the shape coefficients. This way, the matrix~$X_i$ contains the world coordinates of the points in frame $i$; hence the observed image points are given by $x_i=R_iX_i$. We will assume orthographic cameras, \ie{} $R_i\in\mathbbm{R}^{2\times 3}$ where~$R_iR_i^{T}=I_2$. As proposed by Dai \etal{}~\cite{dai-etal-ijcv-2014}, the problem can be turned into a low-rank factorization problem by reshaping and stacking the non-rigid shapes~$X_i$.
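The low-rank structure behind this reduction is easy to verify numerically. The sketch below builds shapes from $K$ basis shapes, projects them with orthographic cameras, and checks that stacking the row-reshaped shapes gives a matrix of rank at most $K$ (all dimensions are illustrative; the row-major reshape corresponds to concatenating the rows of each $X_i$):

```python
import numpy as np

rng = np.random.default_rng(0)
F, n, K = 10, 15, 3  # frames, points, basis dimension (illustrative)

# Shapes X_i = sum_k c_ik B_k from basis shapes B_k and coefficients c_ik.
basis = rng.standard_normal((K, 3, n))
coeff = rng.standard_normal((F, K))
shapes = np.einsum('ik,kpn->ipn', coeff, basis)  # (F, 3, n)

# Orthographic cameras: two orthonormal rows, so R_i R_i^T = I_2.
cams = [np.linalg.qr(rng.standard_normal((3, 3)))[0][:2] for _ in range(F)]
obs = [R @ Xi for R, Xi in zip(cams, shapes)]    # image points x_i = R_i X_i

# Stacking the row-reshaped shapes gives rank at most K.
Xsharp = shapes.reshape(F, 3 * n)
print(np.linalg.matrix_rank(Xsharp))  # 3
```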
Let $X_i^\sharp\in\mathbbm{R}^{1\times 3n}$ denote the concatenation of the rows in~$X_i$, and create $X^\sharp\in\mathbbm{R}^{F\times 3n}$ by stacking the~$X_i^\sharp$. This allows us to decompose the matrix~$X^\sharp$ into the low-rank factors $X^\sharp=CB^\sharp$, where $C\in\mathbbm{R}^{F\times K}$ contains the shape coefficients $c_{ik}$ and $B^\sharp\in\mathbbm{R}^{K\times 3n}$ is constructed analogously to $X^\sharp$ and contains the basis elements. A suitable objective function is thus given by \begin{equation}\label{eq:mocap_objective} \mu{\mathrm{ rank}}(X^\sharp)+\norm{RX-M}^2_F, \end{equation} where~$R\in\mathbbm{R}^{2F\times 3F}$ is a block-diagonal matrix with the camera matrices $R_i$ on the main diagonal, $X\in\mathbbm{R}^{3F\times n}$ is the concatenation of the 3D points $X_i$, and $M\in\mathbbm{R}^{2F\times n}$ contains the concatenated observed image points $x_i$. We replace the rank penalty with a relaxation and minimize the resulting objective using the proposed method and the methods from the previous section. The regularization parameter is varied for the respective methods in order to obtain rank 1--8 solutions, and the respective datafit is reported in Figure~\ref{fig:MOCAP}, for four different sequences. In all sequences, the best datafit for each rank level is obtained by our proposed method. IRNN and ADMM using~$f_\mu$ are able to give the same, or very similar, datafit for lower ranks, but for solutions with rank larger than four our method consistently reports a lower value than the competing state-of-the-art methods. \iffalse We again replace the non-convex rank penalty with $\mathcal{R}_\mu$ and minimize it using the bilinear method and ADMM. As a comparison, we include the nuclear norm regularization. The results can be seen in Figure~\ref{fig:MOCAP}. Generally, ADMM performs well; and, in cases where the rank of the obtained solution coincides with the one obtained by the bilinear method, the difference in energy is negligible.
In the cases, however, where the rank is not the same, ADMM tunes to the data more than the bilinear method. This is clearly shown in the figure, as for all values of~$\mu$---in all sequences---the rank of the final solution obtained by the bilinear method is smaller than or equal to the one obtained using ADMM. The distance to ground truth, however, is not the main interest of this paper, but rather the energy minimization step. In this case, the objective is not ideal for NRSfM, and, in some cases, promotes non-physical high-rank solutions. Other solutions, such as further penalizing the derivative of the 3D projections have been suggested to increase performance~\cite{dai-etal-ijcv-2014}. \fi \section{Conclusions} In this paper we presented a unification of bilinear parameterization and rank regularization. Robust penalties for rank regularization have often been used together with splitting schemes, but it has been shown that such methods yield unsatisfactory results for ill-posed problems in several computer vision applications. By using the bilinear formulation, the objective functions become differentiable, and convergence rates in the neighborhood of a local minimum are faster. Furthermore, we showed that theoretical optimality results known from the regularization formulations can be lifted to the bilinear formulation. Lastly, the generality of the proposed framework allows it to address a wide range of problems, some of which have not been amenable to state-of-the-art methods but are handled successfully by our approach. {\small \bibliographystyle{ieee}
\section{Conclusions and Future Work} \label{sec:conclusion} We investigated the extent to which DL-based code recommenders tend to synthesize different code components when starting from different but semantically equivalent natural language descriptions. We selected \emph{GitHub Copilot}\xspace as the tool representative of the state-of-the-art and asked it to generate 892\xspace non-trivial Java methods starting from their natural language description. For each method in our dataset we asked \emph{Copilot}\xspace to synthesize it using: (i) the \emph{original} description, extracted as the first sentence in the Javadoc; and (ii) \emph{paraphrased} descriptions. We did this both by manually modifying the \emph{original} description and by using automated paraphrasing tools, after having assessed their reliability in this context. We found that in $\sim$46\% of cases semantically equivalent but different method descriptions result in different code recommendations. We observed that some correct recommendations can only be obtained using one of the semantically equivalent descriptions as input. Our results highlight the importance of providing a proper code description when asking DL-based recommenders to synthesize code. In the new era of AI-supported programming, developers must learn how to properly describe the code components they are looking for to maximize the effectiveness of the AI support. Our future work will focus on answering our first research question \emph{in vivo} rather than \emph{in silico}. In other words, we aim at running a controlled experiment with developers to assess the impact of the different code descriptions they write on the received recommendations. Also, we will investigate how to customize the automatic paraphrasing techniques to further improve their performance on software-related text (such as methods' descriptions). 
\section{Introduction} \label{sec:intro} One of the long-lasting dreams in software engineering research is the automated generation of source code. Towards this goal, several approaches have been proposed. The first attempts targeted the relatively simpler problem of code completion, which has been tackled by exploiting historical information \cite{Robb2010a}, coding patterns mined from software repositories \cite{Hindle:icse2012,Nguyen:icse2012,Tu:fse2014,Asaduzzaman2014,Nguyen:msr2016,niu2017api,Hellendoorn:fse2017} and, more recently, Deep Learning (DL) models \cite{White2015,Karampatsis:DLareBest,kim2020code,alon2019structural,svyatkovskiy2020intellicode,CiniselliTse2021}. The release of \emph{GitHub Copilot}\xspace \cite{chen2021evaluating} pushed the capabilities of these tools to whole new levels. The large-scale training performed on the OpenAI's Codex model allows Copilot not to limit its recommendations to a few code tokens/statements the developer is likely to write: Copilot is able to automatically synthesize entire functions just starting from their signature and natural language descriptions. This new generation of code recommender systems has the potential to change the way in which developers write code \cite{Ernst:sw2022} and comes with a number of questions concerning how to effectively exploit them to maximize developers' productivity. Intuitively, the ability of the developer to provide ``proper'' inputs to the model will become central to boost the effectiveness of its recommendations. In the concrete example of GitHub Copilot, the natural language description provided to the model to automatically generate a code function could substantially influence the model output. This means that two developers providing different natural language descriptions for the same function they would like to automatically generate could receive two different recommendations. 
While this would be fine if the two descriptions actually differed in the semantics of what they describe, receiving different recommendations for \emph{semantically equivalent natural language descriptions} would pose questions on the robustness and usability of DL-based code recommenders. This is the main research question we investigate in this paper: We study the extent to which different semantically equivalent natural language descriptions of a function result in different recommendations (\emph{i.e.,}\xspace different synthesized functions) by GitHub Copilot. The latter is selected as representative of DL-based code recommenders since it is the \emph{de facto} state-of-the-art tool when it comes to code generation. We collected from an initial set of 1,401 open source projects a set of 892\xspace Java methods that are (i) accompanied by a Doc Comment for the Javadoc tool, and (ii) exercised by a test suite written by the project's contributors. Then, as done in the literature \cite{Hu:icpc2018,Li:fse2020}, we considered the first sentence of the Doc Comments as a ``natural language description'' of the method. We refer to this sentence as the ``\emph{original}'' description. We preliminarily checked whether existing automated paraphrasing techniques are suitable for robustness testing, \emph{i.e.,}\xspace if they can be used to create semantically equivalent descriptions of the methods to generate. We validated two state-of-the-art approaches in this scenario: PEGASUS \cite{zhang2019pegasus}, a DL-based paraphrasing tool, and Translation Pivoting (TP), a heuristic-based approach. We used both techniques to generate a paraphrase for each \emph{original} description in our dataset. Then, we manually inspected the obtained paraphrases and classified them as semantically equivalent or not. We obtained positive results for both approaches, with TP being the best performing one, producing 77\% valid paraphrases. 
Then, to answer our main research question, we generated different paraphrases for each \emph{original} description. \eject We used the two previously described automated approaches, \emph{i.e.,}\xspace PEGASUS and TP, and we also manually generated paraphrases by distributing the original descriptions among four of the authors, each of whom was in charge of paraphrasing a subset of them. Therefore, for each \emph{original} description, we obtained a set of semantically equivalent \emph{paraphrased} descriptions. We provided both the \emph{original} and the \emph{paraphrased} descriptions as input to \emph{Copilot}\xspace, asking it to generate the corresponding method body. We analyze the percentage of cases in which the \emph{paraphrased} descriptions result in a different code prediction as compared to the \emph{original} one, with a particular focus on the impact on the prediction quality, \emph{e.g.,}\xspace cases in which the \emph{original} description resulted in the recommendation of a method passing its associated test cases while switching to a \emph{paraphrased} description made \emph{Copilot}\xspace recommend a method failing its related tests. Our results show that paraphrasing a description results in a change in the code recommendation in $\sim$46\% of cases. The resulting changes also cause substantial variations in the percentage of correct predictions. Such findings indicate the central role played by the model's input in the code recommendation and the need for testing and improving the robustness of DL-based code generators. Data and code used in our study are publicly available \cite{replication}. \section*{Acknowledgments} This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 851720). 
\bibliographystyle{IEEEtranS} \section{Related Work} \label{sec:related} Recommender systems for software developers are tools supporting practitioners in daily activities~\cite{McMillan:tse2013, robillard:recommenders}, such as documentation writing and retrieval~\cite{Xie:msr2006,Moreno:icse2015,Moreno:tse2016,Xing:icpc2018}, refactoring~\cite{Bavota:emse2014,Tsantalis:saner2018}, bug triaging~\cite{Tamrawi:2011,Xia:tse2016}, bug fixing~\cite{goues:icse2012,Tufano:tosem2019,Li:icse2020}, etc.\xspace Among those, code recommenders, such as code completion tools, have become a crucial feature of modern Integrated Development Environments (IDEs) and help speed up code development by suggesting to developers the code they are likely to write~\cite{Bruch:fse2009,kim2020code,Ciniselli2021}. Given the empirical nature of our work, which focuses on investigating a specific aspect of code recommenders, in this section we do not discuss all previous works proposing novel code recommenders or improving existing ones (see \emph{e.g.,}\xspace~\cite{Xie:msr2006, Moreno:icse2015, Moreno:tse2016, goues:icse2012, Tufano:tosem2019, Li:icse2020, Wen:icse2021, Nguyen:fse2016, Liu:ase2020, Kim:ASE2009, Allamanis:fse2014, kim2020code, Karampatsis:DLareBest, Watson:icse2020, Tufano:asserts}). Instead, we focus on empirical studies looking at code recommenders from different perspectives (\secref{sub:rel1}) and on studies specifically focused on GitHub Copilot (\secref{sub:rel2}). \subsection{Empirical Studies on Code Recommenders} \label{sub:rel1} Proksch \emph{et~al.}\xspace~\cite{proksch2016evaluating} conducted an empirical study aimed at evaluating the performance of code recommenders when suggesting method calls. Their study has been run on a real-world dataset composed of developers' interactions captured in the IDE. Results showed that commonly used evaluation techniques, based on synthetic datasets extracted by mining released code, underperform due to missing context. 
On a related research thread, Hellendoorn \emph{et~al.}\xspace~\cite{hellendoorn2019code} compared code completion models on both real-world and synthetic datasets. Confirming what was observed by Proksch \emph{et~al.}\xspace, they found that the evaluated tools are less accurate on the real-world dataset, thus concluding that synthetic benchmarks are not representative enough. Moreover, they found that the accuracy of code completion tools substantially drops in challenging completion scenarios, in which developers would need them the most. M{\u{a}}r{\u{a}}șoiu \emph{et~al.}\xspace~\cite{muaruasoiu2015empirical} analyzed how practitioners rely on code completion during software development. The results showed that the users actually ignore many synthesized suggestions. Such a finding has been corroborated by Arrebola and Junior~\cite{arrebola2017source}, who stressed the need for augmenting code recommender systems with the development context. Jin and Servant~\cite{jin2018hidden} and Li \emph{et~al.}\xspace \cite{li2021toward} investigated the \textit{hidden costs} of code recommendations. Jin and Servant found that IntelliSense, a code completion tool, sometimes underperforms by providing the suitable recommendation far from the top of the recommended list of solutions. Consequently, developers are discouraged from picking the right suggestion. Li \emph{et~al.}\xspace, aware of this potential issue, conducted an experiment in which they tried to predict whether correct results are generated by code completion models, showing that their approach can reduce the percentage of false positives by up to 70\%. Previous studies also assessed the actual usefulness of these tools. Xu \emph{et~al.}\xspace~\cite{xu2021inide} ran a controlled experiment with 31 developers who were asked to complete implementation tasks with and without the support of two code recommenders. They found a marginal gain in developers' productivity when using the code recommenders. 
Ciniselli \emph{et~al.}\xspace~\cite{CiniselliTse2021} empirically evaluated the performance of two state-of-the-art Transformer-based models in challenging coding scenarios, for example, when the code recommender is required to generate an entire code block (\emph{e.g.,}\xspace the body of a \smalltexttt{for} loop). The two experimented models, RoBERTa and Text-To-Text Transfer Transformer (T5), achieved good performance ($\sim$69\% of accuracy) in the more classic code completion scenario (\emph{i.e.,}\xspace predicting the few tokens needed to finalize a statement), while reporting a substantial drop in accuracy ($\sim$29\%) when dealing with the previously described more complex block-level completions. Our study is complementary to the ones discussed above. Indeed, we investigate the robustness of DL-based code recommenders supporting what is known in the literature as ``\emph{natural language to source code translation}''. We show that semantically equivalent code descriptions can result in different recommendations, thus posing questions on the usability of these tools. \subsection{Empirical Studies on GitHub Copilot} \label{sub:rel2} GitHub \emph{Copilot}\xspace has been recently introduced as the state-of-the-art code recommender, and advertised as an ``AI pair programmer''~\cite{copilot,howard2021github}. Since its release, researchers have started investigating its capabilities. Most of the previous research aimed at evaluating the impact of GitHub \emph{Copilot}\xspace on developers' productivity and its effectiveness (in terms of correctness of the provided solutions). Imai \cite{imai2022github} investigated to what extent \emph{Copilot}\xspace is actually a valid alternative to a human pair programmer. They observed that \emph{Copilot}\xspace results in increased productivity (\emph{i.e.,}\xspace number of added lines of code), but decreased quality in the produced code. 
Ziegler \emph{et~al.}\xspace \cite{ziegler2022productivity} conducted a case study in which they investigated whether usage measurements about \emph{Copilot}\xspace can predict developers' productivity. They found that the acceptance rate of the suggested solutions is the best predictor for perceived productivity. Vaithilingam \emph{et~al.}\xspace \cite{vaithilingam2022expectation} ran an experiment with 24 developers to understand how \emph{Copilot}\xspace can help developers complete programming tasks. Their results show that \emph{Copilot}\xspace does not improve the task completion time and success rate. However, developers report that they prefer to use \emph{Copilot}\xspace because it recommends code that can be used as a starting point and saves the effort of searching online. Nguyen and Nadi \cite{nguyen2022empirical} used LeetCode questions as input to \emph{Copilot}\xspace to evaluate the solutions provided for several programming languages in terms of correctness --- by running the test cases available in LeetCode --- and understandability --- by computing their Cyclomatic Complexity and Cognitive Complexity \cite{campbell2018cognitive}. They found notable differences among the programming languages in terms of correctness (between 57\% for Java and 27\% for JavaScript). On the other hand, \emph{Copilot}\xspace generates solutions with low complexity for all the programming languages. While we also measure the effectiveness of the solutions suggested by \emph{Copilot}\xspace, our main focus is on understanding its robustness when different inputs are provided. Two previous studies aimed at evaluating the security of the solutions recommended by \emph{Copilot}\xspace. Hammond \emph{et~al.}\xspace \cite{pearce2021empirical} investigated the likelihood of receiving from \emph{Copilot}\xspace recommendations including code affected by security vulnerabilities. 
They observed that vulnerable code is recommended in 40\% of cases out of the completion scenarios they experimented with. On a similar note, Sobania \emph{et~al.}\xspace~\cite{sobania2021choose} evaluated GitHub \emph{Copilot}\xspace on standard program synthesis benchmark problems and compared the achieved results with those from the genetic programming literature. The authors found that the performance of the two approaches is comparable. However, approaches based on genetic programming are not mature enough to be deployed in practice, especially due to the time they require to synthesize solutions. In our study, we do not focus on security, but only on the correctness of the suggested solutions. Albert Ziegler, in a blog post about GitHub \emph{Copilot}\xspace,\footnote{\url{https://docs.github.com/en/github/copilot/research-recitation}} investigated the extent to which the tool's suggestions are copied from its training set. Ziegler reports that \emph{Copilot}\xspace rarely recommends verbatim copies of code taken from the training set. \section{Results Discussion} \label{sec:results} As previously explained, in RQ$_1$ we conducted our experiments both in the \emph{Full context}\xspace and in the \emph{Non-full context}\xspace scenario. Since the obtained findings are similar, due to space limitations we only discuss in the paper the results achieved in the \emph{Full context}\xspace scenario (\emph{i.e.,}\xspace the case in which we provide \emph{Copilot}\xspace with all code preceding and following the method object of the prediction). The results achieved in the \emph{Non-full context}\xspace scenario are available in our replication package \cite{replication}. 
\subsection{RQ$_0$: Evaluation of Automated Paraphrase Generators} \begin{table}[h] \caption{Number of semantically equivalent or nonequivalent paraphrased descriptions obtained using PEGASUS and TP.} \resizebox{\linewidth}{!} { \begin{tabular}{l r r r} \toprule & Equivalent & Nonequivalent & Invalid \\ \midrule PEGASUS & 666 (74.7\%) & 225 (25.2\%) & 1 (0.1\%) \\ TP & 688 (77.1\%) & 104 (11.7\%) & 100 (11.2\%) \\ \bottomrule \end{tabular} } \label{tab:resultsRq2} \end{table} \tabref{tab:resultsRq2} reports the number of semantically equivalent and nonequivalent descriptions obtained using the two state-of-the-art paraphrasing techniques, namely PEGASUS and Translation Pivoting (TP), together with the number of invalid paraphrases generated. Out of the 892 \emph{original} descriptions on which they have been run, PEGASUS generated 666 (75\%) semantically equivalent descriptions, while TP went up to 688 (77\%). If we do not consider the invalid paraphrases, \emph{i.e.,}\xspace the cases for which the techniques do not actually provide any paraphrase, the latter obtains $\sim$87\% of correctly generated paraphrases. \eject These findings suggest that the two paraphrasing techniques can be adopted as testing tools to assess the robustness of DL-based code recommenders. In particular, once a reference description is established (\emph{e.g.,}\xspace the \emph{original} description in our study), these tools can be applied to paraphrase it and verify whether, using the reference and the paraphrased descriptions, the code recommenders generate different predictions. \vspace{0.2cm} \begin{resultbox} \textbf{Answer to RQ$_0$.} State-of-the-art paraphrasing techniques can be used as a starting point to test the robustness of DL-based code recommenders, since they are able to generate semantically equivalent descriptions of a reference text in up to 77\% of cases. 
\end{resultbox} \subsection{RQ$_1$: Robustness of GitHub Copilot} \begin{figure*}[!htp] \centering \includegraphics[width=0.9\linewidth]{fig/SE-Pegaus-Pivoting.pdf} \caption{Results achieved by Copilot when considering the \emph{Full context}\xspace code representation on \emph{paraphrases}$_{\mathit{PEGASUS}}$ and \emph{paraphrases}$_{\mathit{TP}}$.} \label{fig:full-context-pegasus-pivoting} \end{figure*} \textbf{Performance of Copilot when using the original and the paraphrased description as input.} \figref{fig:full-context-results-developer} summarizes the performance achieved by \emph{Copilot}\xspace when using the \emph{original} description (light blue) and the manually generated \emph{paraphrased} description (dark blue) as input. Similarly, we report in \figref{fig:full-context-pegasus-pivoting} the performance obtained when considering the paraphrases generated with the two automated techniques, \emph{i.e.,}\xspace PEGASUS and TP (top and bottom of \figref{fig:full-context-pegasus-pivoting}, respectively). It is worth noting that, in the latter, we only include in the analysis the paraphrases manually judged as semantically equivalent in RQ$_0$, \emph{i.e.,}\xspace 666 for PEGASUS and 688 for TP. A first interesting result is that, as can be seen in \figref{fig:full-context-results-developer} and \figref{fig:full-context-pegasus-pivoting}, the results obtained with the three methodologies are very similar. For this reason, to avoid repetitions, in the following we will mainly focus on the results obtained with the manually generated paraphrases. Also, as we will discuss, the quality of \emph{Copilot}\xspace's recommendations is very similar when using the \emph{original} and the \emph{paraphrased} descriptions. 
In \figref{fig:full-context-results-developer}, the bar chart on the left side reports the number of methods recommended by \emph{Copilot}\xspace (out of 892) that resulted in failing tests, passing tests, syntactic errors, and no (\emph{i.e.,}\xspace empty) recommendation. Looking at such a chart, the first thing that stands out is the high percentage of Java methods ($\sim$73\% for the \emph{original} and $\sim$72\% for the \emph{paraphrased} description) for which \emph{Copilot}\xspace was not able to synthesize a method passing the related unit tests. Only $\sim$13\% of instances (112 and 122 depending on the used description) resulted in test-passing methods. While such a result seems to indicate limited performance of \emph{Copilot}\xspace, one must consider the difficulty of the code generation tasks involved in our study. Indeed, we did not ask \emph{Copilot}\xspace to generate simple methods possibly implementing quite popular routines (\emph{e.g.,}\xspace a method to generate an MD5 hash from a string) but rather randomly selected methods that, as shown in \tabref{tab:dataset}, are composed, on average, of more than 150 tokens (median = 92) and have an average cyclomatic complexity of 5.3 (median = 3.0). Thus, we consider the successful generation of more than 110 of these methods a quite impressive result for a code recommender. The remaining $\sim$15\% of instances resulted either in a parsing error ($\sim$100 methods) or in an empty recommendation ($\sim$30 methods). The box plot in the middle part of \figref{fig:full-context-results-developer} depicts the results achieved in terms of CodeBLEU \cite{Ren:codebleu} computed between the recommended methods and the target one (\emph{i.e.,}\xspace the one implemented by the original developers). Higher values indicate higher similarity between the compared methods. Instead, in the right box plot, we show the normalized Levenshtein distance, for which lower values indicate higher similarity. 
For both metrics, we depict the distributions when considering all generated predictions, the ones failing tests, and the ones passing tests. As expected, higher (lower) values of CodeBLEU (Levenshtein distance) are associated with test-passing methods. Indeed, for the latter, the median CodeBLEU is $\sim$0.80 (Levenshtein = $\sim$0.10) as compared to the $\sim$0.40 (Levenshtein = $\sim$0.58) of test-failing methods. Despite this expected trend, it is interesting to note that 25\% of test-passing methods have a rather low CodeBLEU $<$0.50. \figref{fig:low-code-bleu} shows an example of a recommended method having a CodeBLEU of 0.45 with respect to the target method while passing the related tests. The recommended method, while substantially different from the target, captures the basic logic implemented in it. The target method first checks if the object \smalltexttt{chemObjectListeners} is \smalltexttt{null} and, if not, it proceeds to remove from the \smalltexttt{listeners} list the element matching the one provided as parameter (\emph{i.e.,}\xspace \smalltexttt{col}). The method synthesized by \emph{Copilot}\xspace avoids the second \smalltexttt{if} statement by directly performing the remove operation after the \smalltexttt{null} check. Note that the two implementations are equivalent: The \smalltexttt{remove} method of \smalltexttt{java.util.List} preliminarily checks whether the passed element is contained in the list before removing it. While the check in the original method has no functional role, together with the introduction of the \smalltexttt{listeners} variable, it might have been introduced to make the method more readable and self-explanatory. 
\begin{figure}[tb] \centering \includegraphics[width=0.9\linewidth]{fig/low-code-bleu.pdf} \caption{Example of recommended method that passes the unit tests but reports a low CodeBLEU score compared to the oracle (\emph{i.e.,}\xspace target method).} \label{fig:low-code-bleu} \end{figure} \eject \begin{figure}[tb] \centering \includegraphics[width=0.9\linewidth]{fig/high-levenshtein.pdf} \caption{Example of recommended methods that pass the unit tests but would require 165 edit actions to match the target method.} \label{fig:high-levenshtein} \end{figure} Similarly, \figref{fig:high-levenshtein} shows an example of a prediction passing the tests but that, according to the Levenshtein distance, would require 165 token-level edits to match the target prediction (NTLev=63\%). Differently from the previous example, it is clear that, in this case, the two methods do not have the same behavior, since the recommended one also handles 3D points, while the original one handles only 2D points. In other words, the tests fail to capture the difference in behavior. These examples provide two interesting observations. The first is that metrics such as CodeBLEU and Levenshtein distance may result in substantially wrong assessments of the quality of a prediction. Indeed, while the discussed predictions have low CodeBLEU/high Levenshtein values and, thus, would be considered as unsuccessful predictions in most of the empirical evaluations, it is clear that they are valuable recommendations for a developer, even when not 100\% correct (see \figref{fig:high-levenshtein}). This poses questions on the usage of these metrics in the evaluation of code recommenders. Second, the testing-based evaluation also shows, as expected, some limitations, as in the second example, in which the two methods do not implement the same behavior but both pass the tests. 
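For reference, the normalized token-level Levenshtein distance (NTLev) reported above can be sketched in a few lines of Python. This is a minimal illustration, assuming a simple whitespace tokenizer; the tokenization actually used in the study may differ.

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance over two sequences,
    # keeping only the previous row to stay memory-efficient.
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution
        prev = curr
    return prev[n]

def normalized_token_levenshtein(code_a, code_b):
    # Token-level distance normalized by the longer sequence,
    # so the result lies in [0, 1] (0 = identical).
    ta, tb = code_a.split(), code_b.split()
    if not ta and not tb:
        return 0.0
    return levenshtein(ta, tb) / max(len(ta), len(tb))
```

For instance, `normalized_token_levenshtein("a b", "a c")` yields 0.5, since one of the two tokens must be substituted.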
\eject As a final note, it is also interesting to observe that 25\% of test-failing predictions exhibit high values ($>\sim$0.60) of CodeBLEU, indicating a high code similarity that, however, does not translate into test-passing recommendations. \begin{figure*}[h!] \centering \includegraphics[width=\linewidth]{fig/changes-overall.pdf} \caption{Levenshtein distance between the \emph{original} description and (i) the manually \emph{paraphrased} descriptions (left part) and (ii) the descriptions automatically paraphrased by PEGASUS (middle part) and Translation Pivoting (right). Similarly, we report the Levenshtein distance between the method recommended using the \emph{original} description and the three paraphrases. The latter is only computed for recommendations in which the obtained output differs.} \label{fig:changes-overall} \end{figure*} \textbf{Impact of paraphrasing the input descriptions.} Out of the 892 manually paraphrased descriptions, 408 (46\%) result in different code recommendations as compared to the \emph{original} description. This means that \emph{Copilot}\xspace synthesizes different methods when provided with the \emph{original} description and with the manually \emph{paraphrased} description as input, even though the two are supposed to describe the same piece of code. Note that at this stage we are not focusing on the ``quality'' of the obtained predictions in any way. We are just observing that different input descriptions have indeed an impact on the recommended code. This implies that developers using different wordings to describe a needed method may end up with different recommendations. Such differences also result in the potential loss of correct recommendations. 
Indeed, out of the 112 test-passing predictions obtained with the \emph{original} description and the 122 obtained with the manually \emph{paraphrased} description, only 98 overlap, indicating that there are 38 correct recommendations generated only by either the \emph{original} (14) or the \emph{paraphrased} (24) description. To have a deeper look into the 408 different predictions generated by \emph{Copilot}\xspace with the \emph{original} and the \emph{paraphrased} description, the left part of \figref{fig:changes-overall} (light blue) shows the normalized token-level Levenshtein distance between (i) the \emph{original} description and the \emph{paraphrased} description (see the boxplot labeled with ``Description''), and (ii) the method obtained using the \emph{original} description and that recommended using the \emph{paraphrased} description (``Code''). The ``Description'' boxplot depicts the percentage of words that must be changed to convert the \emph{paraphrased} description into the \emph{original} one. As it can be seen, while describing the same method, the \emph{paraphrased} descriptions can be substantially different as compared to the \emph{original} ones, with 50\% of them requiring changes to more than 70\% of their words. Similarly, the different methods recommended in the 408 cases under analysis can be substantially different, with a median of $\sim$30\% of code tokens that must be changed to convert the recommendation obtained with the \emph{original} description into the one obtained using the \emph{paraphrased} description (see the ``Code'' boxplot). These findings are confirmed for the automatically paraphrased descriptions (see the middle and the right part of \figref{fig:changes-overall} for the results achieved with the PEGASUS and TP paraphrases, respectively). 
As it can be seen, the main difference as compared to the results of the manually paraphrased description (left part of \figref{fig:changes-overall}) is that TP changes a substantially lower number of words in the \emph{original} description as compared to PEGASUS and to the manual paraphrasing. Such a finding is expected considering that TP just translates the \emph{original} description back and forth from English to French, thus rarely adding new words to the sentence, something that is likely to happen using PEGASUS or by paraphrasing the sentence manually. \vspace{0.2cm} \begin{resultbox} \textbf{Answer to RQ$_1$.} Different (but semantically equivalent) natural language descriptions of the same method are likely to result in different code recommendations generated by DL-based code generation models. Such differences can result in a loss of correct recommendations ($\sim$28\% of test-passing methods can only be obtained either with the \emph{original} or the \emph{paraphrased} descriptions). These findings suggest that testing the robustness of DL-based code recommenders may play an important role in ensuring their usability and in defining possible guidelines for the developers using them. \end{resultbox} \section{Study Design} \label{sec:study} The \textit{goal} of our study is to understand how robust a state-of-the-art DL-based code completion approach (\emph{i.e.,}\xspace \emph{GitHub Copilot}\xspace) is. We aim at answering the following research questions: \smallskip \textbf{RQ$_0$: To what extent can automated paraphrasing techniques be used to test the robustness of DL-based code generators?} Natural language processing techniques cannot always be used out of the box on software-related text \cite{Lin:icse2018}. Therefore, with this preliminary RQ, we want to understand whether existing automated techniques for generating natural language paraphrases are suitable for the SE task at hand (\emph{i.e.,}\xspace paraphrasing a function description). 
\smallskip \textbf{RQ$_1$: To what extent is the output of GitHub Copilot influenced by the code description provided as input by the developer?} This RQ aims at understanding whether \emph{Copilot}\xspace, as a representative of DL-based code generators, is likely to generate different recommendations for different semantically equivalent natural language descriptions provided as input. \smallskip In the following we detail the context of our study (\secref{sec:context_selection}) and how we collected (\secref{sub:collection}) and analyzed (\secref{sub:analysis}) the data needed to answer our RQs. \subsection{Context Selection} \label{sec:context_selection} The context of our study is represented by 892\xspace Java methods collected through the following process. We selected all GitHub Java repositories having at least 300 commits, 50 contributors, and 25 stars. These filters have been used in an attempt to exclude personal/toy projects. We also excluded forked projects to avoid duplicates. The decision to focus on a single programming language aimed instead at simplifying the non-trivial toolchain needed to run our study. The whole repository selection process has been performed using the GitHub search tool by Dabic \emph{et~al.}\xspace \cite{Dabic:msr2021data}. At this stage, we obtained 1,401 repositories. In our experimental design, we use the passing/failing tests as a proxy to assess the correctness of the predictions generated by \emph{Copilot}\xspace. Thus, we need the projects to use a testing framework and to be compilable. We selected all projects that used Maven as build automation tool and for which the build of their latest release succeeded. We obtained 214 repositories. 
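The repository selection criteria above can be sketched as a simple predicate; the dictionary field names below are assumptions chosen for illustration, not the actual schema of the GitHub search tool.

```python
def keep_repository(repo):
    # Filters from the study: >= 300 commits, >= 50 contributors,
    # >= 25 stars, and not a fork (to avoid duplicates).
    return (repo["commits"] >= 300
            and repo["contributors"] >= 50
            and repo["stars"] >= 25
            and not repo["is_fork"])

# Hypothetical repository metadata for illustration.
repos = [
    {"commits": 500, "contributors": 60, "stars": 40, "is_fork": False},
    {"commits": 500, "contributors": 60, "stars": 40, "is_fork": True},
    {"commits": 100, "contributors": 60, "stars": 40, "is_fork": False},
]
selected = [r for r in repos if keep_repository(r)]  # keeps only the first entry
```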
By parsing the POM (Project Object Model) file\footnote{POM files are used in Maven to declare dependencies towards libraries.} we only considered projects having as dependencies both jUnit \cite{junit} --- a well-known unit testing framework --- and Jacoco \cite{jacoco} --- a code coverage library. We analyzed the Jacoco reports and selected as the methods subject of our experiment those having at least 75\% statement coverage. This gives us confidence that the related test cases exercise an acceptable number of behaviors and, therefore, could allow us to spot cases in which different generated functions for semantically-equivalent descriptions actually behave differently. We are aware that passing tests does not imply correctness. We discuss this aspect in \secref{sec:threats}. Given our goal of using the method's description as input for \emph{Copilot}\xspace, we also exclude methods not having any associated Doc Comment for the Javadoc tool. Then, we process the Doc Comment of each method in our dataset to extract from it the first sentence (\emph{i.e.,}\xspace from the beginning to the first ``.''). This is the same approach used in the literature when building datasets aimed at training DL-based techniques for Java code summarization (see \emph{e.g.,}\xspace \cite{Hu:icpc2018,Li:fse2020}), with the training set composed of pairs $<$\smalltexttt{method, code\_description}$>$, with the latter being the first sentence of the Doc Comment. To ensure that the extracted sentence contains enough wording for the code description, we exclude all methods having fewer than 10 tokens in the extracted first sentence, since their description may not be sufficient for synthesizing the method. \begin{table}[h] \centering \caption{Our dataset of 892\xspace methods from 33 repositories} \begin{tabular}{l|rrr} \toprule & \textbf{Avg} & \textbf{Median} & \textbf{St. 
Dev.}\\ \midrule \textbf{\# Tokens} & 154.3 & 92.0 & 218.2 \\ \textbf{\# Parameters} & 1.6 & 1.0 & 1.2 \\ \textbf{\# Cyclomatic Complexity} & 5.3 & 3.0 & 7.6 \\\midrule \textbf{\% Coverage} & 96.1 & 100.0 & 6.7 \\ \bottomrule \end{tabular} \label{tab:dataset} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{fig/context-copilot.pdf} \caption{\emph{GitHub Copilot's} input for both code context representations} \vspace{-0.3cm} \label{fig:context-copilot} \end{figure} The above-described process resulted in the collection of 892\xspace Java methods. \tabref{tab:dataset} shows descriptive statistics about their characteristics in terms of number of tokens, parameters and cyclomatic complexity. These three together provide an idea about the complexity of the task \emph{Copilot}\xspace was asked to perform (\emph{i.e.,}\xspace the complexity of the methods it had to generate). Statistics about the coverage show, instead, the by-design high statement coverage we ensure for the included methods. \subsection{Data Collection} \label{sub:collection} To address \textbf{RQ$_0$}, we experiment with two state-of-the-art paraphrasing techniques. The first is named PEGASUS \cite{zhang2019pegasus}, and it is a sequence-to-sequence DL model pre-trained using self-supervised objectives specifically tailored for abstractive text summarization and fine-tuned for the task of paraphrasing \cite{pegasus}. As for the second technique, we opted for Translation Pivoting (TP). Such a technique relies on natural language translation services to translate the \emph{original} description $o$ from English into a foreign language (\emph{i.e.,}\xspace French), obtaining $o_{E \rightarrow F}$. Then, $o_{E \rightarrow F}$ is translated back into the original language ($o_{E \rightarrow F \rightarrow E}$), obtaining a paraphrase. We provide each technique with the \emph{original} description as input. 
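The TP procedure can be sketched as follows; here \smalltexttt{translate} stands for a generic machine-translation wrapper and is purely illustrative (it is not the service used in the study):

```python
def translation_pivoting(original, translate):
    # Pivot the original English description through French and back.
    # "translate" is a hypothetical callable: translate(text, src, dst).
    pivoted = translate(original, src="en", dst="fr")
    back = translate(pivoted, src="fr", dst="en")
    # A paraphrase is only valid if it differs from the original sentence.
    return back if back != original else None
```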
TP failed to generate a valid paraphrase (\emph{i.e.,}\xspace a sentence different from the original one) in 100 cases (out of 892\xspace), while this only happened once with PEGASUS. We manually analyzed whether the valid paraphrases we obtained were actually semantically equivalent to the \emph{original} description. For such a process, each of the 1,683 paraphrases (892\xspace for each of the two tools minus the 101 invalid ones) has been independently inspected by two authors who classified it as semantically equivalent or not. Conflicts, which arose in 11.9\% (PEGASUS) and 16.54\% (TP) of the cases, have been solved by a third author who was not involved in the first inspection. Concerning \textbf{RQ$_1$}, we start from the \textit{original} description and we generate semantically equivalent descriptions by (i) using the two automated tools, \emph{i.e.,}\xspace PEGASUS \cite{pegasus} and TP, and (ii) manually generating paraphrases. For the manual paraphrasing, we split the 892\xspace methods together with their \emph{original} description into four sets and assigned each of them to one author. Each author was in charge of writing a semantically equivalent but different description of the method by looking at its code and \emph{original} description. This resulted in a dataset (available in \cite{replication}) in which, for each subject method, we have its \emph{original} and \emph{paraphrased} description. In the end, for each \emph{original} sentence, we had between one and three paraphrases: \emph{paraphrased}$_{\mathit{PEGASUS}}$, \emph{paraphrased}$_{\mathit{TP}}$, and \emph{paraphrased}$_{\mathit{manual}}$. While \emph{paraphrased}$_{\mathit{manual}}$ is available for all the methods, \emph{paraphrased}$_{\mathit{PEGASUS}}$ and \emph{paraphrased}$_{\mathit{TP}}$ are not. 
Indeed, we exclude the cases in which these tools failed to generate a paraphrase (1 and 100, respectively) and the ones not considered semantically equivalent in our manual check (based on the results of RQ$_0$). The maximum number of semantically equivalent paraphrases is 2,575 (up to 891 with PEGASUS, up to 792 with TP, and 892 manually). The paraphrases, as well as the \textit{original} description, have been used as input to \emph{Copilot}\xspace, simulating developers asking it to synthesize the same Java method by using different natural language descriptions. At the time of our study, \emph{Copilot}\xspace did not provide open APIs to access its services; the only way to use it was through a plugin for one of the supported IDEs. Manually invoking \emph{Copilot}\xspace for the thousands of times needed (up to 6,934, as we will explain later) was clearly not an option. For this reason, we developed a toolchain able to automatically invoke \emph{Copilot}\xspace on the subject instances: We exploit the AppleScript language to automate this task on a MacBook Pro, simulating the developer's interaction with Visual Studio Code (\emph{vscode}). For each method $m_i$ in our dataset, we created up to four different versions of the Java file containing it (one for each of the experimented descriptions). In all such versions, we (i) emptied $m_i$'s body, just leaving the opening and closing curly brackets delimiting it; and (ii) removed the Doc Comment, replacing it with one of the four code descriptions we prepared. Starting from these files, the automation script we implemented (available in our replication package \cite{replication}) performs the following steps on each file $F_i$. First, it opens $F_i$ in \emph{vscode} and moves the cursor within the curly brackets of the method $m_i$ of interest. Then, it presses ``\smalltexttt{return}'' to invoke \emph{Copilot}\xspace, waiting up to 20 seconds for its recommendation. 
Finally, it stores the received recommendation, which could possibly be empty (\emph{i.e.,}\xspace no recommendation received). To better understand this process, the top part of \figref{fig:context-copilot} depicts how the invocation of \emph{Copilot}\xspace is performed. The gray box represents the whole Java file (\emph{i.e.,}\xspace the context used by \emph{Copilot}\xspace for the prediction). The emptied method (\emph{i.e.,}\xspace \smalltexttt{getEmbeddings}) is framed with a black border, with the cursor indicating the position in which \emph{Copilot}\xspace is invoked. The green comment on top of the method represents one of the descriptions we created. As can be seen, \figref{fig:context-copilot} includes for the same Java file two different scenarios, named \emph{Full context}\xspace and \emph{Non-full context}\xspace. In the \emph{Full context}\xspace scenario (top part of \figref{fig:context-copilot}) we provide \emph{Copilot}\xspace with the code \textbf{preceding and following} the emptied method, simulating a developer adding a new method in an already existing Java file. In the \emph{Non-full context}\xspace scenario, instead, we only provide as context the code preceding the emptied method (bottom part of \figref{fig:context-copilot}), simulating a developer writing a Java file sequentially and implementing a new method. The basic idea behind these two scenarios is that the contextual information provided to \emph{Copilot}\xspace can play a role in its ability to predict the emptied method. Overall, the maximum number of \emph{Copilot}\xspace invocations needed for our study is 6,934 (892\xspace \textit{original} descriptions plus up to 2,575 paraphrases, each used in the 2 context scenarios). After having collected \emph{Copilot}\xspace's recommendations, we found that they sometimes included not only the method we asked to generate, but also additional code (\emph{e.g.,}\xspace other methods). 
To simplify the data analysis and to make sure we only consider one recommended method, we wrote a simple parsing tool to only extract from the generated recommendation the first valid method (if any). \subsection{Data Analysis} \label{sub:analysis} Concerning \textbf{RQ$_0$}, we report the number and the percentage of 892\xspace methods for which automatically generated paraphrases (\emph{i.e.,}\xspace those generated by PEGASUS and by TP) have been classified as semantically equivalent to the \emph{original} description. This provides an idea of how reliable these tools are when used for testing the robustness of DL-based code generators. Also, this analysis allows us to exclude from RQ$_1$ automatically generated paraphrases that are not semantically equivalent. To answer \textbf{RQ$_1$}, we preliminarily assess how far the paraphrased descriptions are from the original ones (\emph{i.e.,}\xspace the percentage of changed words) by computing the normalized token-level Levenshtein distance \cite{levenshtein1966binary} (NTLev) between the \emph{original} ($d_o$) and any \emph{paraphrased} description ($d_p$): $$ NTLev(d_o, d_p) = \frac{\mathit{TLev}(d_o, d_p)}{\max(\{|d_o|, |d_p|\})} $$ \noindent with $\mathit{TLev}$ representing the token-level Levenshtein distance between the two descriptions. While the original Levenshtein distance works at character level, it can be easily generalized to token level (each unique token is represented as a specific character). In this case, a token is a word in the text. The normalized token-level Levenshtein distance provides an indication of the percentage of words that must be changed in the \emph{original} description to obtain a \emph{paraphrased} one. Then, we analyze the percentage of methods for which the \emph{paraphrased} descriptions result in a different method prediction as compared to the \emph{original} one. 
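As a minimal sketch, the NTLev metric defined above can be computed as follows (here tokens are obtained by whitespace splitting over non-empty descriptions; the study tokenizes words for text and Java syntactic tokens for code):

```python
def token_levenshtein(a_tokens, b_tokens):
    # Classic dynamic-programming edit distance over token sequences.
    m, n = len(a_tokens), len(b_tokens)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            cost = 0 if a_tokens[i - 1] == b_tokens[j - 1] else 1
            dp[j] = min(dp[j] + 1, dp[j - 1] + 1, prev + cost)
            prev = cur
    return dp[n]

def ntlev(d_o, d_p):
    # Normalized token-level Levenshtein distance between two descriptions.
    a, b = d_o.split(), d_p.split()
    return token_levenshtein(a, b) / max(len(a), len(b))
```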
When they are different, we also assess how far the method obtained by using a given \emph{paraphrased} description is from the method recommended when providing the \emph{original} description as input. Also in this case we use the token-level Levenshtein distance as a metric. The latter is computed with the same formula previously reported for the natural text descriptions; in this case, however, the tokens are not the words but the Java syntactic tokens. Thus, NTLev indicates in this case the percentage of code tokens that must be changed to convert the method obtained through the \emph{original} description into the one recommended with one of the paraphrases. Finally, we study the ``quality'' of the recommendations obtained using the different descriptions both in the \emph{Full context}\xspace and \emph{Non-full context}\xspace scenarios. Given the sets of methods generated from the \textit{original} description and each of the paraphrasing approaches considered, we present the percentages of methods for which \emph{Copilot}\xspace: (i) synthesized a method passing all the related test cases (\textit{PASS}); (ii) synthesized a method that does not pass at least one of the test cases (\textit{FAIL}); (iii) generated an invalid method (\emph{i.e.,}\xspace with syntactic errors) (\textit{ERROR}); (iv) did not generate any method (\textit{EMPTY}). Syntactic errors have been identified as recommendations for which \emph{Java Parser} \cite{java_parser} did not manage to identify a valid recommended method (\emph{i.e.,}\xspace cases in which \emph{Java Parser} fails to identify a method node in the AST generated for the obtained recommendation). On top of the passing/failing methods, we also compute the token-level Levenshtein distance and the CodeBLEU \cite{Ren:codebleu} between each synthesized method and the target one (\emph{i.e.,}\xspace the one originally implemented by the developers). CodeBLEU measures how similar two methods are. 
Differently from the BLEU score \cite{papineni2002bleu}, CodeBLEU evaluates the predicted code considering not only the overlapping $n$-grams but also the syntactic and semantic match of the two pieces of code (predicted and reference) \cite{Ren:codebleu}. \subsection{Replication Package} \label{sub:replication} The code and data used in our study are publicly available \cite{replication}. In particular, we provide (i) the dataset of manually defined and automatically generated paraphrases; (ii) the AppleScript code used to automate the \emph{Copilot}\xspace triggering; (iii) the code used to compute the CodeBLEU and the Levenshtein distance; (iv) the dataset of 892\xspace methods and related tests used in our study; (v) the scripts used to automatically generate the paraphrased descriptions using PEGASUS and TP; and (vi) all raw data output of our experiments. \section{Threats to Validity} \label{sec:threats} Threats to \emph{construct validity} concern the relationship between the theory and what we observe. Concerning the performed measurements, we exploit the passing tests as a proxy for the correctness of the recommendations generated by \emph{Copilot}\xspace. We acknowledge that passing tests does not imply code correctness. However, it can provide hints about the code behavior. To partially address this threat we focused our study on methods having high statement coverage (median = 100\%). Also, we complemented this analysis with the CodeBLEU and the normalized token-level Levenshtein distance. As for the execution of our study, we automatically invoked \emph{Copilot}\xspace rather than using it as actual developers would do: We automatically accepted the whole recommendations and did not simulate a scenario in which a developer selects only parts of the provided recommendations. 
In other words, while our automated script simulates a developer invoking \emph{Copilot}\xspace for help, it cannot simulate the different usages a developer can make of the received code recommendation. Threats to \emph{internal validity} concern factors, internal to our study, that could affect our results. While in RQ$_0$ we had multiple authors inspecting the semantic equivalence of the paraphrases generated by the automated tools, in RQ$_1$ we relied on a single author to paraphrase the \emph{original} description. This introduces some form of subjectivity bias. However, the whole point of our paper is that, indeed, subjectivity plays a role in the natural language description of a function to generate, and we are confident that the written descriptions were indeed semantically equivalent to the \emph{original} one. Moreover, the authors involved in the manual paraphrasing have an average of seven years of experience in Java. Also related to internal validity is our choice of using the first sentence of the Doc Comments as the \emph{original} natural language description. These sentences may be of low quality and not representative of how a developer would describe a method they want to automatically generate. This could substantially influence our findings, especially in terms of the effectiveness of \emph{Copilot}\xspace (\emph{i.e.,}\xspace its ability to generate test-passing methods). However, such a threat is at least mitigated by the fact that \emph{Copilot}\xspace has also been invoked using the manually written descriptions, showing a similar effectiveness. A final threat regards the projects used for our study. Those are open-source projects from GitHub, and it is likely that at least some of them have been used for training Copilot itself. In other words, the absolute actual effectiveness reported might not be reliable. 
However, the objective of our study is to understand the differences arising when different paraphrases are used, rather than the absolute performance of Copilot, which previous studies investigated (\emph{e.g.,}\xspace \cite{nguyen2022empirical}). Threats to \emph{external validity} are related to the possibility to generalize our results. Our study has been run on 892\xspace methods we carefully selected as explained in \secref{sec:context_selection}. Rather than going large-scale, we preferred to focus on methods having a high test coverage and a verbose first sentence in the Doc Comment. Larger investigations are needed to corroborate or contradict our findings. Similarly, we only focused on Java methods, given the effort required to implement the toolchain needed for our study, and in particular the script to automatically invoke \emph{Copilot}\xspace and parse its output. Running the same experiment with other languages is part of our future agenda.
\section{Introduction} When humans are data sources for researchers, the research community has to respect them and protect them from harm. However, this has not always been, and still is not always, the case in research. A study published in early 2019 investigated questionable ethics in research on Chinese transplant recipients \cite{rogers2019,caplan2011}. The authors request the immediate retraction of a large body of papers because of the poor ethical principles behind the studies (e.g., using organs from executed prisoners). In addition to physical risks like disease or death, researchers might cause other types of harm, such as risks with respect to privacy, personal values, or family links (e.g., if they expose illegal, sexual or deviant behavior) \cite{carusi2009}. In the field of software engineering, many data sources may cause harm to an individual: exposure of financial data, message history or dating app logs are some examples. When industry practitioners participate in research, they might expose information that could cause harm not only to individuals, but also to companies, e.g.,\ by mentioning quality shortcomings. Software has a profound impact on almost every aspect of society, making ethics in software engineering research an important topic. A common research method for data collection is the \emph{interview}. Singer et al.\ describe the interview as a method where at least one researcher talks to at least one interviewee \cite{singer2008}. Two common approaches to interviews are structured and semi-structured interviews: in the former the researcher asks all questions in the same way, and in the same order, to all interviewees. In the latter some flexibility is allowed. Interviews may be time-consuming, but are suitable for many types of research methods and philosophical traditions. 
Many guidelines exist for empirical software engineering research, such as the books by Kitchenham et al.\ \cite{kitchenham2015}, Runeson et al.\ \cite{runeson2012}, and Shull et al.\ \cite{shull2008}. However, these often lack how to conduct interviews and handle interview artifacts with respect to ethical considerations. In this paper we fill this gap by continuing the tradition of transferring guidelines from medicine into the field of software engineering. First, in Section~\ref{background-and-related-work} we review existing guidelines for ethical research, guidelines for software engineering, and revisit an interview study we recently finalized. Second, in Section~\ref{interview-life-cycle} we investigate the research interview and consider ethical aspects in each element. This section contains the main contribution of this paper: guidelines in the form of check\-lists that simplify ethical research, in particular with respect to how to practically anonymize and work with interview data. Finally, in Sections~\ref{summary} and~\ref{conclusion} we summarize, discuss and conclude this paper. \begin{table*}[th!] \begin{center} \begin{tabular}{lp{5.5in}} Ethical Principle& Summary \\ \hline Consent & Participation should be voluntary, and withdrawal possible at any time. Participants should be informed of this in a way that they can understand. \\ Beneficence & The welfare of participants, and the greater good for society, should be considered. \\ Confidentiality & The privacy and confidentiality of the participants must be protected in order to minimize the impact of the study on their integrity. \\ Scientific Value & Research should yield fruitful results for the good of society, and not be random and unnecessary. \\ Researcher Skill & The researchers should have adequate skills. \\ Justice & It is unjust to let one group carry the burden of research while another gets the benefits of research. \\ Respect Law & Relevant laws should be obeyed. 
\\ Ethics Review & An independent ethics board should comment on, guide and approve studies involving humans. \\ \end{tabular} \caption{Ethical principles in research. \label{tab-eth-princ} } \end{center} \end{table*} \section{Background and Related Work} \label{background-and-related-work} This section covers a chronology of ethical principles in medicine, guidelines in software engineering, anonymization, as well as legislation and institutional review boards. Important ethical principles highlighted in previous and related works are: consent, beneficence, confidentiality, scientific value, researcher skill, justice, respect law, and ethics review (summarized in Table~\ref{tab-eth-princ}). Despite much previous work, how to apply these principles for interviews in software engineering research has remained unexplored. \subsection{Guidelines from Cos to Menlo via Nuremberg and Helsinki} \label{background} Medical research has a long history of guidelines. Some 2500 years ago Hippocrates of Cos wrote the Hippocratic oath \cite{hippo1923}. A core topic is beneficence, often misquoted as ``first, do no harm'' \cite{smith2005}. Following the second world war and the monstrous Nazi experiments on humans, ten ethical research principles constitute the \emph{Nuremberg Code} \cite{nuremberg1947}. The principles highlight that an experiment should aim for positive results for society, participants must consent to and have the right to withdraw from a study, risks should be minimized and not exceed the expected benefits, the experiment must stop if continuation would be dangerous, research should be based on previous knowledge, and that staff must be qualified to conduct the experiment. In the similar declarations of Helsinki \cite{helsinki2014} and later Taipei \cite{taipei2016}, the World Medical Association (WMA) laid out ethical considerations regarding medical research involving human subjects as well as health databases and biobanks. 
In practice, a way to respond to unethical research is to create ethical guidelines. One example is the Belmont report \cite{belmont1978} that strives to protect human subjects. This report points out principles for research involving humans and the importance of a risk/benefit assessment, where stakeholder identification is a prerequisite. It argues that the justice principle has implications for the selection of participants in a study: it is not fair if one group takes the burden of research whereas another receives the benefits. Another response to unethical research is for society to enact laws governing research, and with new laws there is an increased need for more guidelines. Laws on public records, corporate secrets, public archiving, etc., might, from the researcher perspective, seem to be in conflict with laws covering ethical research. In the Menlo report \cite{menlo2012}, the Belmont report is adapted and built upon for the field of Information and Communication Technology (ICT) and ICT Research (ICTR). It reiterates the core principles from the Belmont report, and adds the additional principle of respect for law and public interest. The Menlo companion \cite{menlo2013} aims at providing more concrete guidelines for ethical ICTR. Together they argue that ICTR is different from medical research because of the greater distances between researcher and participant, as well as the scale, speed, wide distribution, and opacity of the field. In ICT there is also the potential for collecting many types of data -- from a cell phone one might collect financial and geographical data, as well as emails, dating history, etc. \label{related-work} \subsection{Guidelines in Software Engineering} Three examples of books with guidelines for empirical software engineering research are Kitchenham et al.\ \cite{kitchenham2015}, Runeson et al.\ \cite{runeson2012}, and Shull et al.\ \cite{shull2008}. 
They all highlight the importance of ethics; however, they lack hands-on instructions on how to conduct interviews and how to handle interview artifacts with respect to ethical perspectives. Vinson and Singer (in one of the chapters in Shull et al.) repeat four ethical principles from medicine: informed consent, beneficence, confidentiality, and scientific value \cite{vinson2008}. Kitchenham et al.\ highlight ethical issues for primary studies, in particular with respect to informed participation, pressure to take part, collecting demographic data and reporting \cite{kitchenham2015}. The recent ACM Code of Ethics \cite{acm2018} reiterates a number of the already mentioned guidelines. Despite the enormous scope of the code, a number of the principles could be seen as relevant for interview research in software engineering, in particular \emph{respect privacy}, and \emph{honor confidentiality} that, again, emphasize the importance of protecting the interviewees. \emph{Accept and provide appropriate professional review}, could be seen as encouragement for a review step where the interviewee may comment on the transcript. We are also reminded of the Nuremberg code in: \emph{Perform work only in areas of competence}. \subsection{Anonymization} Researchers have both ethical and legal obligations to respect the confidentiality of individuals; this confidentiality is also essential for maintaining trust \cite{taipei2016}. A good way to mitigate ethical concerns is to anonymize personal data. If a link is needed between participants and personal data, then details for the link could be pseudonymized \cite{eudata2018}. Becker-Kornstaedt describes how interview data could be handled during software process modeling \cite{becker2001}. She describes ethical dilemmas and techniques to protect the interviewees. Mitigations include anonymization of the data and sanitation of data, where certain details are left out, summarized or aggregated. 
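As a toy illustration of the pseudonymization mitigation mentioned above, known identifiers can be replaced with stable placeholders while keeping the mapping apart from the transcript (the function and all names are invented for this sketch):

```python
import re

def pseudonymize(text, identifiers):
    # Replace each known identifier (person, place, organization) with a
    # stable pseudonym (P1, P2, ...); the mapping is returned separately so
    # links between interview artifacts can be preserved under access control.
    mapping = {ident: f"P{i}" for i, ident in enumerate(identifiers, start=1)}
    for ident, pseudo in mapping.items():
        text = re.sub(r"\b" + re.escape(ident) + r"\b", pseudo, text)
    return text, mapping
```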
However, sanitized data might be too abstract or unsuitable for the task. She also proposes being transparent about risks and having a well-defined scope such that data collection can be limited and irrelevant data left out. She suggests ensuring that managers are not present during interviews, avoiding giving away raw data such as interview transcripts, and allowing the interviewee to review data with respect to completeness as well as confidentiality. Aldridge et al.\ discuss the problem of data proliferation. They conduct a type of life cycle analysis of an interview, and propose 14 guidelines for data security \cite{aldridge2010}. They recommend early anonymization, because a leak of anonymized data reduces harm when compared to loss of not yet anonymized data. They recommend replacing identifiers such as names, places or organizations with unique identifiers (pseudonyms). Saunders et al.\ \cite{saunders2015b, saunders2015a} cover ethical interviews in the field of medicine, and also propose pseudonymization, e.g.,\ by replacing the role of a mother to that of a sister during the transcription. In our study we interviewed industrial practitioners on the flow of information in software testing \cite{strandberg2019}. We anonymized the transcripts. The most anonymized categories were names of tools or tool jargon, names or details of products and organizations, and extracts related to the domain. \subsection{De-anonymization} De-anonymization, or data re-identification, refers to the practice of uncovering the identity of an individual by using anonymized data. For example, Sweeney showed that 87\% of all Americans could be uniquely identified by their ZIP code, gender, and date of birth \cite{sweeney2000}. Rosenblum et al.\ conducted a study where they identified source code authors by investigating compiled binaries \cite{rosenblum2011}. 
Saunders et al.\ describe their experiences from research where they interviewed family members of patients with severe brain injury, vegetative or minimally conscious states, and provide guidelines for ethical interviews \cite{saunders2015b, saunders2015a}. Their interviewees are from a small sample, some participants have been part of court cases or covered in the media, and some are active in social networks such as blogs or support forums. If an interview covers a topic already mentioned in a blog or court case, then de-anonymization may be trivial for a motivated reader if details are also retold in a publication. \subsection{Drawbacks of Anonymization} Nespor investigates anonymization of locations in qualitative research \cite{nespor2000}. He argues that the desire to anonymize comes from three assumptions: (i) Identification may cause harm -- an assumption he finds plausible. (ii) Anonymization decreases the likelihood of identification -- an assumption he claims lacks support in research. (iii) Identifying places and settings makes participants more easily identifiable -- an assumption he says might be relevant. When reading Nespor it is clear that anonymization is related to generalizability: results from a certain high school, in a certain place at a certain time might be part of a complex interplay with the local community, the history of the school, etc. Results from ``a high school'' might instead imply that the results apply in general for all high schools. Given the importance of anonymization in guidelines and laws, we will not further investigate drawbacks of anonymization in this paper. \begin{figure*}[th!] \begin{center} \includegraphics[width=\textwidth]{./interview-life-cycle2.pdf} \end{center} \caption{ Overview of some of the main interview activities (horizontal text) and artifacts (sloped text). Letters in circles refer to subsections in Section~\ref{interview-life-cycle}. 
\label{fig-interview-flow} } \end{figure*} \subsection{Legislation and/or Ethics?} The scope of legislation and ethics is enormous, and in this paper we will barely scrape the surface of this topic. In a recent paper, Vardi argues that the number of deaths from automobile crashes has been decreased not by ethics training for drivers, but by laws and regulations \cite{vardi2019}. This could imply that we need more laws to govern how researchers handle interview artifacts. One such law is the EU General Data Protection Regulation (GDPR)~\cite{gdpr}. It has almost certainly had an impact on how interview studies are conducted in the EU since its adoption in 2018. According to Schaar, data anonymization and pseudonymization are two ways to comply with the GDPR \cite{schaar2016}. There may be a conflict between ethical guidelines and law. The declaration of Helsinki argues that no legal requirement should reduce any of the protections for participants. However, the declaration does not take precedence over national law, as was tested in the Gillberg v.\ Sweden trial in the European Court of Human Rights (after research data had been destroyed in order to protect interviewees) \cite{gillbergvssweden}. \subsection{Institutional Review Board} Many of the guidelines discussed in the above sections point to the merit of an institutional review board (IRB\footnote{Alternative terms: research ethics board or independent ethics committee.}). The Menlo report argues that many researchers in the ICT field do not know when they are involved in `human subjects research,' or do not know that this may require involvement of an IRB, so they do not interact with an IRB at all \cite{menlo2012}. An ethics board is a great complement that may aid a researcher in conducting more ethical research. However, a board is no excuse for a researcher not to strive, on his or her own initiative, for ethical research. 
A board cannot be seen as a catch-all solution for ethical interview studies. Buchanan et al.\ \cite{buchanan2011} could be read as a starting point on the topic of IRBs in computer science security research. \section{Interview Life Cycle} \label{interview-life-cycle} An interview study involves many artifacts, activities, and stakeholders: the study needs to be planned; the interviewees have to consent; the audio needs to be recorded during an interview in a room somewhere; the audio files have to be archived, transcribed and anonymized; the transcripts have to be analyzed; and finally a publication is written. This might seem like a set of linear activities, but this research is iterative and can be time-consuming -- we recently finalized a study that required more than two years from planning to publication. In the coming sections we: (i) cover the elements of an interview study based on experiences and recommendations from, in particular, Aldridge et~al.\ \cite{aldridge2010}, Becker-Kornstaedt \cite{becker2001}, Carusi and Jirotka \cite{carusi2009}, and our own study: Strandberg et~al.\ \cite{strandberg2019}. The interview process is summarized in Figure~\ref{fig-interview-flow}. (ii) We also introduce a running example with a fictional research project on the topic of software quality for embedded systems. It involves four individuals: Alice is a doctoral student, and Bob a post-doc. Their supervisor, Professor Carol, shares her time between a nearby university and a helicopter manufacturer. Professor Carol collaborates with the company contact, Manager Dan. Finally, (iii) the coming sub-sections contain checklists for the main elements of an interview study. \subsection{Planning for Ethical Research} \myexample{Alice, Bob, Carol and Dan are planning for a face-to-face semi-structured interview study at the local helicopter company.
Alice created a draft interview instrument with ten questions that she emailed to Bob and Carol, who suggested adding three more questions. Manager Dan helped them recruit interviewees. } \noindent \emph{Identify stakeholders:} In order to achieve the key ethical principle of beneficence, we should identify stakeholders prior to our research. Without knowing who the stakeholders are, we cannot consider the potential harm or benefits from a research activity \cite{mustajoki2017}. Some obvious stakeholders in software engineering research are: the interviewee, the company or organization at which the interviewee is employed, the researchers conducting the interview, colleagues of the interviewee (e.g., managers), other researchers involved in the study (e.g., supervisors, students recruited for transcription), industrial practitioners that might benefit from the research results, the research community in the field of research, companies whose software is used in the analysis (e.g., Google, Microsoft, etc.), and IT administrators. The ethical principle of beneficence involves striving for minimized risk of harm to all stakeholders. Harm may, in addition to physical harm, involve risks to social standing and status in the family, at work or in the wider community; risks to privacy and emotions; as well as risks of revealing information related to illegal, sexual or deviant behaviour~\cite{carusi2009}. From a company perspective, harm could be to disclose intellectual property or shortcomings in the software development and software quality processes. For a practitioner, harm could be caused by revealing poor performance and non-compliance with processes to managers or other colleagues \cite{becker2001}. During interviews, there might be close bonds and trust between the researchers and the interviewee, and thereby the views of the researchers might become embedded in the data.
This could be a form of researcher bias, but might also make the researchers themselves vulnerable to harm \cite{carusi2009}. From a wider perspective, harm could also be to produce invalid research results leading to distrust in certain methods or tools. In a software engineering context, one could imagine distrust of test-driven development or C++ as a result of fraudulent or erroneous research. \emph{Ethical challenges:} A second critical step in the planning of ethical interviews is to recognize that there are ethical challenges \cite{mustajoki2017}. Becker-Kornstaedt identifies ethical challenges in the domain of software process modeling \cite{becker2001}, e.g., managers unexpectedly being present during interviews, processes not being followed, the dilemma of de-anonymizing participants or having to obscure data, as well as dealing with information given ``off the record.'' Other ethical challenges involve getting informed consent from participants in research projects; they might feel forced into participating. A researcher might also be tempted to use poor scientific methods instead of well-established ones (if the method is flawed ``the results will be invalid so the merit of the study is nil'') \cite{vinson2008}. \emph{Decisions on ethics:} When stakeholders and ethical challenges have been identified, a researcher should make decisions on ethics, and strive to follow these decisions, even when under pressure \cite{mustajoki2017}. Alice and Bob might decide that they want to anonymize the interview data in order to protect the interviewees. One type of pressure comes from the data collection itself: How should they react if an interviewee mentions critical bugs in the control system of a helicopter currently being sold? Another type of pressure comes from the research team: Would it be unethical of Alice, Bob and Carol to recruit students to transcribe for extra credits?
\emph{Validate instrument:} To ensure the scientific value of the study, researchers should validate the instrument. One way is to do pilot interviews; other options include expert reviews, focus groups, cognitive interviews, and experiments \cite{linaker2015}. Data from pilot interviews are not always used in the final data analysis. \emph{Involve an IRB:} During the planning of research involving humans, a researcher should aim at involving an IRB that could comment on, guide, and/or approve the research project. \emph{Our experiences:} In our study we wrote a research plan following the guidelines by Linåker et al.\ \cite{linaker2015}. We made strong commitments to protect the interviewees and the companies, to anonymize the interviews, and to destroy data after use. We conducted three pilot interviews, after which we made only minor changes to the instrument, and ended up using two of the three pilot interviews for the data analysis. However, we did not carefully identify all stakeholders, we made no harm/benefit analysis, and we did not involve an IRB. Alice and Bob did not identify stakeholders, did not consider ethical challenges in their research, did not validate their instrument, and did not involve an IRB. \mychecklist{Planning for Ethical Research} \begin{enumerate} \item Are stakeholders identified? \item Are ethical challenges considered? \item How will the challenges be addressed? Do sponsors and supervisors agree? \item How will the instrument be validated? \item Has an IRB been consulted? \end{enumerate} \subsection{Pre-Interview Discussions} \myexample{Alice and Bob inform the interviewees about the purpose and topics of the interviews. One interviewee mentions that he does not really want to participate in the study, but he is worried about what Carol and Dan might think if he does not participate. After a pep-talk, he gives a really valuable interview.
} \noindent Before starting the interview, there must be a discussion with the interviewee on consent and withdrawal. These two principles have been echoed in ethical guidelines since the Nuremberg code. There should also be a discussion on the purpose and topic of the interview, as well as a harm-benefit analysis. Kitchenham et al.\ suggest that interviewees should sign a consent form, and that this might be needed for an ethical approval (signing may be clicking a button on a web page) \cite{kitchenham2015}. Similarly, for research involving interviews that is funded through EU Horizon 2020, informed consent and information sheets are required \cite{eudata2018,euguide2019}. It may be suitable to archive consent forms, as these may later be requested by interviewees, funding agencies or authorities auditing the research quality or data protection. In Appendix E of Runeson et al.\ \cite{runeson2012}, there is an example of a consent information letter. It informs the interviewee about who the researchers are and how to contact them, and it also highlights that participation is voluntary and that the interviewee may refuse to answer questions or withdraw from the study at any time. They also inform the interviewee that the interview data will be protected by law (however, this law has since been replaced). Furthermore, the authors claim that the interview data will be kept confidential and only available to the research team, ``or in case external quality assessment takes place, to assessors under the same confidentiality conditions.'' Researchers should never promise that nobody outside of a research group will ever get access to collected data. However, many researchers promise this out of ignorance or because of a mix-up of important terms \cite{stafstrom2017}. Saunders et al.\ discuss how anonymized interviewees could be de-anonymized if the participants are active in social media, or part of court cases \cite{saunders2015a}.
They recommend discussing this with the interviewees, informing them how the data will be anonymized, and explaining that it might be possible for someone who learns about the interviewee from multiple sources to de-anonymize him or her. The participation must be on the basis that the interviewee understands and accepts this risk. Understanding what an interviewee consents to can be hard. The informed consent should be comprehensive, in plain language, in the preferred language of the interviewee, and be accompanied by a discussion between the researchers and the interviewees to improve the comprehensibility \cite{badampudi2017}. Conducting research without consent can be justified under certain circumstances, and an IRB may allow deception when: (i) there is no more than minimal risk to participants, (ii) the research will not adversely affect the rights and welfare of the participants, (iii) the study could not practically be carried out without deception, and (iv) the participants will be debriefed after the study%
\footnote{In addition to interview studies where interviewees are deceived, we would like to mention two additional research scenarios where informed consent from every participant may be hard or impossible to collect. The first scenario involves a researcher studying a criminal bot-net that has taken control of thousands of smart refrigerators. It is probably not reasonable for the researcher to collect consent from every owner of an infected refrigerator to study the impact of the bot-net (example adapted from \cite{menlo2012}). The second scenario could involve sentiment analysis of bug report discussions in open source projects. This is a common data collection method for software repository mining research, and the data could be seen as ``publicly available.'' However, the individuals contributing to open source projects have not given consent to be part of a research project.
These scenarios are not easily translated into an interview study, and we will not further cover them in this paper.}%
\cite{belmont1978,commonrule}. \emph{Our experiences:} In our study \cite{strandberg2019}, we had a discussion with each interviewee around a written instruction. The topics, inspired by guidelines from Linåker et al.\ \cite{linaker2015}, included: purpose, duration, sampling, sponsor, confidentiality, contact details, and project leaders. We gave a printed copy of the text to each participant. Alice and Bob violated the ethical principle of consent by coercing one of the interviewees into participating. \mychecklist{Pre-Interview Discussions} \begin{enumerate}[resume] \item How will informed consent be obtained? \item How will any participant withdrawals be handled? \item Are the interviewees informed about purpose, possible positive outcomes, possible harm, expected duration, sampling, sponsor, confidentiality, contact details, project leaders, etc.? \item What promises, with respect to third-party access to interview data, will be made? Is there a plan for a potential research quality audit? \end{enumerate} \subsection{Room} \myexample{Alice and Bob got help from Manager Dan to book the best conference room at the company for the interviews. This room has a fancy glass door and is next to the most popular coffee machine at the company. } \noindent The room in which the interview is conducted could cause harm to the stakeholders. For internal anonymity, it should not be obvious to colleagues that an interview has taken place, nor what was mentioned. Before leaving the room, researchers should remove notes on whiteboards and collect any papers left behind. Another ethical risk is when a superior enters the room and wants to listen to the interview \cite{becker2001}. This should be avoided to ensure that the interviewee can speak freely, and to avoid reactive bias.
\emph{Our experiences:} For one of our interviews, we had not booked the room long enough, and at this organization the rooms were in short supply, so we had to finish the last part of the interview in a lobby. This could obviously have broken internal anonymity, and our discussions could have been overheard by colleagues. Alice and Bob used a room in plain view of others, which might break internal anonymity: the colleagues of an interviewee would know that he or she had been interviewed. \mychecklist{Room} \begin{enumerate}[resume] \item How will internal anonymity be addressed? \item Are managers informed that their participation might have a negative impact on the research? \item Are interview artifacts removed after interviews? \end{enumerate} \subsection{Interview} \myexample{Alice and Bob find it interesting that the helicopter company is a very diverse work place. In order to capture this in the data, they extend the instrument to include questions on ethnicity, political and religious affiliation, sexual orientation and membership in trade unions. } \noindent Before conducting the interviews, in order to adhere to the ethical principle of researcher skill, the researchers should have knowledge of, and skills in, research methods in general, and interview methods in particular \cite{acm2018, nuremberg1947}. The skills of the researcher will have an impact on the quality of the interviews, and researchers should also be qualified in the topic of the interview \cite{eldh2013, hove-anda-2005}. The two ethical principles of consent and scientific value should be considered during the interview. An interviewee who has given consent for research with one purpose has not given consent for another. Researchers should take great care to only collect data that matches the purpose of the research \cite{eudata2018,euguide2019}.
Data minimization involves limiting the amount of data collected, the purposes for which it is used, and the period for which it is kept. Data minimization is a central topic in GDPR, but is also relevant outside of the EU \cite{eudata2018}. \emph{Our experiences:} In our study we did not record the pre-interview discussions. This way, interviewees are not recorded without knowing how the audio is going to be used, there is less audio to transcribe, and in case of a data leak less information would be lost. We had two researchers present for most of the interviews; one took the role of driving the interview and the other kept track of time and made sure that all questions were asked. The interviews were semi-structured and we gave the interviewees room to explain details or complain about problems. Hove and Anda reported that being two researchers instead of one seems preferable -- more follow-up questions are asked and more data is recorded \cite{hove-anda-2005}. Alice and Bob ask their interviewees about sensitive topics such as sexual orientation and membership in trade unions. This data is out of scope for their research, and they are violating the principle of data minimization. \mychecklist{Interviews} \label{todo-interview} \begin{enumerate}[resume] \item Do the researchers have adequate skills? \item How will data minimization be addressed? \end{enumerate} \subsection{Audio Files} \myexample{During one of the interviews, a participant requested a copy of the audio file to be sent to him. Alice had recorded the audio on her smartphone and sent it as an attachment from her personal email account that was already configured in the phone. } \noindent Aldridge et al.\ report on experiences of having a laptop with sensitive data stolen from the home of a field worker. They suggest using a central server with encrypted connections in order to decrease the number of copies of the data \cite{aldridge2010}.
They recommend using log-in passwords on computers, and not allowing a computer to remember passwords. They also recommend encrypting files so that no one can listen to the audio without decrypting it first. Furthermore, they recommend making backups, managing the storage and deletion of data, and deleting data permanently when done with it. Similarly, the Menlo report suggests destroying risky data when the research activities are completed (or terminated), since the data is at risk for as long as it exists \cite{menlo2012}. This is of particular importance for the audio files, since they contain data that is not yet anonymized. \emph{Our experiences:} During our interviews we used an off-line digital voice recorder that recorded audio as MP3 files. These were stored on a limited number of computers as well as on a USB stick in a locked area. Our motivation for using an off-line digital voice recorder instead of a smartphone was a fear that the smartphone producer would use the data in ways we would not be able to control, and make backups of the audio files in ways that would render the data ``undeletable.'' We promised the interviewees that we would delete all the audio files and the links to participants upon publication of the first paper from the study. We took a different approach than Aldridge et al.\ and made sure to \emph{not} store audio files on central servers or in cloud storage, out of fear that the audio would be rendered undeletable. We also renamed the files to avoid time stamps in the file names, and we tracked the link between participants and audio files on paper only. Alice and Bob recorded the interview on a smartphone and emailed the file from a personal email account. There is thus an obvious risk that the audio will be rendered undeletable due to backups by the phone manufacturer or the email provider. The spread of data that is not yet anonymized may also cause greater harm than the spread of anonymized data.
\mychecklist{Audio Files} \begin{enumerate}[resume] \item What is the data storage plan? \label{data-storage-plan} \item Has the number of people with access to data been limited? \end{enumerate} \subsection{Transcription} \myexample{Alice and Bob divided the transcription work among themselves. They transcribed half of the interviews each, and did a round of quality control on the other half. Bob found it very time-consuming to transcribe, so he recruited two students to do the transcription of the last couple of interviews. } \noindent \emph{Who Transcribes:} The transcription process might seem time-consuming. In our study, about a work day was required to transcribe one hour of interview. In addition, we did a second round of listening to the audio for quality control of the transcription and the anonymization. In rough terms, we needed 10--15 hours of work to fully transcribe one hour of audio. Hove and Anda reported spending about a work day per hour of audio \cite{hove-anda-2005}. In comparison to the duration of the entire study (more than two years), time for transcription is not a limiting factor. Furthermore, the transcription process brings new insights, and makes the researcher familiar with the data. We therefore recommend that transcription be done by the researchers themselves (as do many others, e.g., Runeson et al.\ \cite{runeson2012}). Despite this, we recruited students to do transcription for us. This forced us to learn about non-disclosure agreements (NDAs): with the help of the university, we had to create an NDA, get the students to sign it, and archive the signed copies. By letting students do transcription, we also increased the risk of spreading the raw data. The transcripts we got from the students were also of a lower quality than the ones we transcribed ourselves.
One of the reasons for this was lack of familiarity with the domain-specific jargon -- Lethbridge et al.\ had similar experiences \cite{lethbridge2008}. This led us to do fill-in transcription, corrections and additional anonymization. \emph{What to Anonymize:} Surmiak interviewed 42 researchers in the fields of sociology and anthropology who, in turn, do research with vulnerable participants (sexual minorities, homeless, war veterans, etc.) in Poland \cite{surmiak2018}. She wanted to know how researchers manage confidentiality. She sent the transcripts to the interviewed researchers for corrections, clarifications and further anonymization. Surmiak found that the researchers were very aware of the risk of being de-anonymized from the transcripts, and many wished to review their transcripts -- the researchers wanted to be treated differently from how they, in turn, treat their participants. This might come from the awareness these researchers have, and from their being less trusting towards other researchers. Vinson and Singer mention three principles of confidentiality: (i) data privacy: limit access to the data, (ii) data anonymity: examination of data should not lead to de-anonymization, and (iii) anonymity of participation (or internal anonymity): participation is not revealed to colleagues \cite{vinson2008}. Runeson et al., just like Kitchenham et al., suggest that companies and individuals could be de-anonymized through too many details or too small a sample \cite{runeson2012, kitchenham2015}. Becker-Kornstaedt suggests interviewing more than one person per role, project or department as a possible mitigation \cite{becker2001}. During transcription, we recommend anonymizing while listening, before writing; this way, sensitive information is never saved to disk in plain text.
Saunders et al.\ suggest anonymizing people's names, places, religious or cultural background, occupation, family relationships, and other potentially identifying information \cite{saunders2015b, saunders2015a}. Surmiak also mentions: occupation, place of work, nationality, religion, hobbies, military rank, gender, zodiac sign, dietary restrictions, and periods of illness \cite{surmiak2018}. In our study \cite{strandberg2019} we anonymized about 90 extracts per interview. The three most anonymized categories were: 313 extracts of tools or tool jargon (e.g., programming language, and version control system), 178 names or details of products and organizations, and 160 extracts related to the domain. Other categories we anonymized were company-specific jargon, technical details, names of places and people, numbers or points in time, and off-topic discussions. \emph{How to Anonymize:} There seem to be two major approaches to anonymization. One is to assign pseudonyms. In some cases it may be justified to assign more than one pseudonym to one interviewee; Saunders et al.\ did this when one extract of a transcript would not identify an interviewee, but the combination of two might. They also mention approaches where multiple interviewees are assigned to one pseudonym in order to create a more representational story. Several papers highlight that keeping track of pseudonyms can be hard when the number grows -- in software engineering this could, for example, happen if researchers are discussing a number of subsystems and their interfaces, where every subsystem has a pseudonym and a number of people work with each of them. We anonymized interviews by replacing some words with more general terms within angle brackets. For example, C++ would be <programming language> and helicopter would be <vehicle>, etc.
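The replacement step can also be partly automated to reduce the risk of a recurring term slipping through. The following is a minimal sketch of such a helper in the bracketed placeholder style shown above; the term list and function name are our own illustrations and not taken from any particular tool:

```python
import re

# Hypothetical map from sensitive terms to general placeholders.
REPLACEMENTS = {
    "C++": "<programming language>",
    "helicopter": "<vehicle>",
    "Git": "<version control system>",
}

def anonymize(text: str) -> str:
    """Replace each sensitive term with its general placeholder."""
    for term, placeholder in REPLACEMENTS.items():
        # re.escape handles terms with special characters such as "C++".
        text = re.sub(re.escape(term), placeholder, text, flags=re.IGNORECASE)
    return text

print(anonymize("We test the helicopter firmware, written in C++."))
# -> We test the <vehicle> firmware, written in <programming language>.
```

A list like this would grow during transcription, and a manual pass is still needed: automated replacement cannot catch paraphrases or indirect identifiers.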
We listened to the audio in a media player running at low speed, and wrote the transcripts in plain text files with speech from researchers and interviewees on separate lines: an initial ``Q'' for questions or comments from the researchers, and ``A'' for answers from the interviewee (Saunders et al.\ instead transcribe with ``Interviewer'' and a pseudonymized name of the interviewee). Pauses were indicated with blank lines followed by a time\-stamp showing the number of minutes and seconds into the recording. This way, a researcher could easily go back to the original recording if a transcript appeared incorrect. Example transcript: \begin{quote} [28:54] \\ Q: The next part is on testing and test results. We've covered some of this perhaps. Err\ldots\ But could you give an example of a typical test case? \\ A: Actually, we should have a look into, into <requirements management tool> to see what it looks like. But I mean, for example a <vehicular mechanism> sequence. \end{quote} \noindent The same answer transcribed in the style of Saunders et al.: \begin{quote} Actually, we should have a look into Req\-Test\-Tracker to see what it looks like. But I mean, for example the safe full stop for maintenance sequence. \end{quote} \noindent Just like we did, Alice and Bob recruited students to do the transcription. We recommend doing transcription within the research team. \mychecklist{Transcription} \begin{enumerate}[resume] \item Who will transcribe the audio? \item How will meta information (such as separation of speakers, timestamps, etc.) be added to the transcripts? \item How will consistent transcription over interviews, and over researchers, be achieved?
\item What will be anonymized?\footnote{ Candidates for anonymization are: names of people, places, companies, organizations, tools, and products; domain-specific details such as programming languages, domain-specific terminology, company-specific jargon and technical details that are not of relevance to the topic of the interview; numbers and points in time such as birthdays, graduation years, number of years in a work place, number of colleagues, number of subsystems in a product, or number of lines of code in a product; personal details such as religion, cultural background, military rank, hobbies, nationality, occupation, family relationships, gender, zodiac sign, dietary restrictions, periods of illness, etc.; and also off-topic details, such as the pre-interview discussion, or a rant from the interviewee. } \end{enumerate} \subsection{Interviewee Correspondence} \myexample{With the exception of the interviewee who requested an audio file, Alice and Bob never contacted the interviewees again. ``If they were interested in the results of the study, then they could read the paper once it's out,'' they argued. } \noindent By corresponding with the interviewees, researchers can give them the opportunity to review, correct, clarify or expand on the interview. Surmiak gave her interviewees the chance to not only review, but also to rewrite transcripts \cite{surmiak2018}. A review step is recommended by e.g.\ Runeson et al.\ \cite{runeson2012}. \emph{Our experiences:} We gave the interviewees the opportunity to review the transcripts, and expected that some of them might want to comment on or clarify something. At the end of the interview, we asked the interviewees if they wanted a copy of the transcript, and if so, in which format they wanted it. For maximum anonymity, we had expected them to want it on physical paper, possibly sent to their home address.
However, those who wanted to review the transcript wanted it sent by email, and all except one wanted it sent to their work address. This, obviously, makes it possible for the IT department at their companies (and at the organization from which it was sent) to read the transcripts and to link them to the individuals interviewed. It might also render the transcripts undeletable. It is not clear that the interviewees understood this risk; however, we complied with their request to send the transcripts by email. Alice and Bob did not give their interviewees the possibility to review the transcripts. \mychecklist{Interviewee Correspondence} \begin{enumerate}[resume] \item Will interviewees review transcripts? \label{i-review-t} \item If yes to \ref{i-review-t}, how is correspondence to be conducted? \item If yes to \ref{i-review-t}, will they be given the possibility to delete, correct, clarify and/or expand on the transcripts? \end{enumerate} \subsection{Data Analysis and Thematic Data} \myexample{Some time after the interview, one of the interviewees requested to withdraw from the study. Alice and Bob agreed, and deleted the corresponding audio file and transcript, but the thematic data was kept, since it was hard-coded in the scripts, already in the spreadsheets, and the paper was to be submitted the same week. } \noindent For data analysis in studies involving qualitative data, coding is common. It is suggested for thematic analysis \cite{braun2006}, content analysis \cite{graneheimlundman}, grounded theory \cite{glaserstrauss, stol2016}, etc. In coding, some parts of a transcript are ``tagged'' with codes and themes in various hierarchies, making it possible to understand and investigate the data in different ways. Commercial tools for coding are available.
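As a small illustration of what coding can look like in practice, the sketch below tags anonymized extracts with codes and then groups them by code; the interview identifiers, extracts, and codes are invented for this example and do not come from any real study:

```python
from collections import defaultdict

# Each entry: (interview id, anonymized extract, assigned codes).
# All data here is invented for illustration.
coded_extracts = [
    ("I1", "we rarely look at old test results", ["test results", "process"]),
    ("I2", "the <requirements management tool> is slow", ["tools"]),
    ("I3", "nightly testing of the <vehicle> software", ["process", "tools"]),
]

# Group extracts by code, so that each theme can be inspected on its own.
by_code = defaultdict(list)
for interview, extract, codes in coded_extracts:
    for code in codes:
        by_code[code].append((interview, extract))

for interview, extract in by_code["process"]:
    print(interview, extract)
# -> I1 we rarely look at old test results
#    I3 nightly testing of the <vehicle> software
```

Keeping the coded data in a flat structure like this also makes withdrawal simple: removing an interview means removing its rows.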
If an on-line spreadsheet like Google Docs is used in research, the researchers must be aware of the old saying, with roots in the 1970s: ``if you are not paying for it, you're not the customer, you're the product being sold'' \cite{payproduct}. Indeed, according to the Google Safety Center, Google will collect ``Docs, Sheets, and Slides you create on Drive'' and use it ``to make Google services more useful for you'' \cite{googlesteal}. What this means in terms of interview anonymity and data longevity is unclear, but a researcher should take great caution and not store sensitive (i.e., not yet anonymized) information in these types of tools. Aldridge et al.\ suggest using only `on screen' working methods, and when paper copies are made, shredding them immediately after use \cite{aldridge2010}. \emph{Our experiences:} Prior to data analysis, we double-checked the anonymization of the transcripts. We analyzed the data with thematic analysis in on-line spreadsheets, and also shared anonymized data with a cloud storage service. The removal of an interview would have been as easy as removing lines in a spreadsheet, and would have had no impact on the scripts analyzing the data. Alice and Bob only partially deleted data when an interviewee wanted to withdraw. Again, they violate the ethical principle of consent, and possibly also the principles of confidentiality and respect for law. \mychecklist{Data Analysis and Thematic Data} \begin{enumerate}[resume] \item Will data analysis (and the potential use of third-party tools) be done on anonymized data only? \item Have the end-user license agreements for tools been read? \item Is there an inventory of the data (with locations of audio files, transcripts, and processed data)?
\end{enumerate} \subsection{Writing and the Publication} \myexample{Alice and Bob want to protect the interviewees and hide the name of the company in the paper, and describe it as ``a Nordic manufacturer of manned helicopters with about 1500 employees.''\footnote{%
This is a fictional example; as far as we know there are no manufacturers of manned helicopters with 1500 employees in any of the Nordic countries. } } \noindent One of the drawbacks of using anonymized data is the lack of context. Results from ``an embedded systems company'' may convey a different meaning than results from ``a Nordic helicopter manufacturer recovering from a series of bribery scandals with a next generation helicopter that will make or break the company.'' Reporting on context is important in empirical software engineering \cite{petersen2009context}. However, if the paper contains too many details on context, then the companies involved could be identified. Therefore, when reporting on context, one should report on organizations and interviewees in an aggregated form only. Before publishing any paper, third-party reviewers might need to revisit or audit the data handling process. Researchers should therefore have the data in order, and be able to explain the flow from plan to paper. In order to foster trust and communication between industry and academia, and to honor the ethical principle of justice, it is important to give feedback to the participants and the organizations that were part of the study, as well as to other industrial practitioners and society as a whole. For the research to better reach these groups, it may be worthwhile for the researchers to publish papers with an open access license, to return to the companies with presentations or reports in formats other than typical academic papers, or to make video recordings of presentations. If knowledge is not given, then other gains could be considered \cite{eldh2013}.
Finally, there are a number of recommendations on what to include in an academic paper: (i) how the interviews were conducted and, if possible, the questions asked \cite{singer2008}, (ii) ethical aspects, such as how consent was received \cite{badampudi2017}, (iii) context \cite{petersen2009context}, (iv) additional topics proposed by Runeson et al.\ \cite{runeson2012}, such as validity, and, as is common in most academic papers, (v) a section on method in order to let a reader know that the research is sound -- without a valid method the results could be meaningless. \emph{Our experiences:} We reported on context in an aggregated form only: e.g.\ details on company size were kept separate from details on domain, etc. Alice and Bob report on context in a way that may uniquely identify the company. \mychecklist{Writing and the Publication} \begin{enumerate}[resume] \item How will details on the organizations, and other context data, be reported? \item Will reports in different forms, for different audiences, be prepared? \item How will feedback be given to the participating interviewees and organizations? \end{enumerate} \subsection{Archive} \myexample{Professor Carol got a new position at a more prestigious university and left her previous position. Her new research group will focus on agile practices for embedded systems. In order to kick-start this research she brought the interview transcripts from the helicopter company to the new group, where this data will be combined with findings from a literature study to provide new insights. } \noindent Research data is not the private property of the researchers, and should not be treated as such. For both ethical and legal reasons, a researcher should consider both archiving data \cite{stafstrom2017}, and destroying data from the archive \cite{menlo2012}.
Some of the reasons for archiving data are (i) to support investigations of scientific misconduct (which was an important topic in the Gillberg v.\ Sweden trial \cite{gillbergvssweden}), (ii) data re-use by the researchers themselves or others, and (iii) if the data is of general importance to society at large, it could have value, in itself, for coming generations \cite{stafstrom2017}. Practical advice on what and how to archive is missing in standard popular software engineering research guidelines such as Kitchenham et al.\ \cite{kitchenham2015}, Runeson et al.\ \cite{runeson2012}, and Shull et al.\ \cite{shull2008}. Researchers, even in software engineering, should also be aware of the existence of laws regulating archiving. Carusi and Jirotka investigated archiving of qualitative data. They argue that de-anonymization may be trivial, in particular when dealing with body language and facial expression data \cite{carusi2009}. When dealing with data of new types, there is often a lack of ethical guidelines, and there might be a conflict between requirements from funding agencies, academia and the law. Informed consent may also be impossible when a participant does not understand the media type. Finally, withdrawal might be impossible if data is publicly archived. However, allowing a participant to withdraw is fundamental to ethical research, so researchers planning on sharing qualitative data should both strongly anonymize it, and also ensure that the interviewee fully understands the limitations with respect to withdrawal from the study. \emph{Our experiences:} In our study we stored the data in an internal archive during the study. Upon first publication from the interviews, we destroyed the audio files as well as the links between individuals and transcripts. Within ten years from the time of the interviews, we will destroy the remaining data. Carol took interview data from one research group to another.
It is unlikely that the interviewees gave their consent for this, and it might also violate the ethical principle of respecting laws. \mychecklist{Archive} \begin{enumerate}[resume] \item If any data is to be publicly archived, how will the implications with respect to de-anonymization and withdrawal from the study be explained to the interviewees? \item What is the data deletion plan? When, how and by whom will the data be deleted? Is it coordinated with the data storage plan (item \ref{data-storage-plan})? \end{enumerate} \section{Summary and Discussion} \label{summary} Research ethics is a vast field that is hard to get an overview of. It is difficult to consider every aspect of stakeholders, activities and artifacts from an ethical perspective. In this paper we have reviewed existing guidelines for ethical research and guidelines for software engineering, and revisited a recently finalized interview study. There is a gap in previous work on how to apply ethical principles for interviews in software engineering research. We have addressed this gap by considering ethical aspects of each step in an interview study, and by providing check\-lists for these steps. These check\-lists give researchers a stable platform for a more ethical research project. The check\-lists are based on previous work and our own experiences. Of particular importance are the previous publications by Becker-Kornstaedt, who composed a list of ethical challenges in descriptive software process modeling \cite{becker2001}; Aldridge et al., who listed ways in which an interview is copied, how it proliferates, and suggested a number of guidelines to avoid the spread of sensitive data \cite{aldridge2010}; Saunders et al., who proposed guidelines for anonymization and experiences from participants active in social media, or in court cases \cite{saunders2015b, saunders2015a}; as well as Surmiak, who interviewed interviewers about confidentiality involving vulnerable interviewees \cite{surmiak2018}.
All research has limitations. In this study we have investigated a large number of guidelines with an origin in the field of medicine. There are, of course, an even larger number of guidelines that we have not investigated -- both in fields close to medicine such as psychology, and in fields related to software engineering. It is therefore likely that the check\-lists proposed in this paper are incomplete. However, we would like to see this as a starting point for researchers unsure of the ethics in their interview research, and we encourage other studies to build upon, revise or reject our recommendations. We would welcome future work on how to make an interview study that is compliant with the increasing complexity of laws and regulations, such as how to comply with the GDPR, the Helsinki declaration, and national or international laws, while at the same time fulfilling requirements by sponsors. In software engineering research, the use of an IRB seems immature despite recommendations in guidelines such as the Menlo report. Future work could investigate at which level there are legal requirements on empirical software engineering to use or start using an IRB before doing research involving humans, as well as guidelines on how to get started with an IRB at a university where there is no such board. A third possible field of future work involves the knowledge and competence of the researchers themselves. Researchers in the field of software engineering should follow best practices with respect to research methods. Guidelines, e.g.\ the books by Kitchenham et al.\ \cite{kitchenham2015}, Runeson et al.\ \cite{runeson2012}, and Shull et al.\ \cite{shull2008}, instruct a researcher on how to do research. We would welcome research providing instructions (e.g.\ a check\-list for researchers, supervisors, or reviewers) on when \emph{not to} conduct research, such that we may avoid doing research blindly.
\section{Conclusion} \label{conclusion} Despite laws and regulations, research ethics is a hard and unnatural topic for many empirical software engineering researchers. In this paper we have learned from our own experiences and listened to the authority of existing guidelines, in order to distill a comprehensive guide for interview studies. In particular, we suggest how to hands-on anonymize interview data in the transcription process. This gives researchers a platform for a more ethical research project. \section{Acknowledgments} This work was sponsored by Westermo Network Technologies AB, and the Knowledge Foundation through the grants 20150277 (ITS ESS-H) and 20160139 (TestMine). The author would like to thank Adrianna Surmiak, Aida Causevic, Tom Ostrand, Wasif Afzal, Daniel Sundmark, and Elaine Weyuker for valuable discussions during the writing of this paper. \newcommand{\mychecklist}[1]{\subsubsection*{Check\-list for #1}} \newcommand{\myexample}[1]{\begin{quote}\emph{#1}\end{quote}} \newenvironment{IEEEkeywords} {\vspace{3mm}\noindent \emph{Index Terms} -- } \renewenvironment{abstract} {\section*{Abstract}} \renewcommand{\thesubsection}{\Alph{subsection}} \input{ethics-contents.tex} \end{document}
1812.02592
\section{Introduction} \label{sec:intro} Supervised deep learning methods have exhibited promising results for the task of action recognition from 3D skeletal pose~\cite{zhang2017view,liu2016spatio,hu2017temporal}. However, the performance of these methods relies heavily on the availability of a large number of diverse data samples with annotated action labels. From the perspective of unsupervised feature learning, an action-label-agnostic feature representation can generalize to novel classes, in contrast to the representation acquired by a supervised counterpart. Moreover, a generalized representation of temporal pose dynamics can serve as an initial step facilitating various tasks such as analysis of gymnastics and dance motion data, sports biomechanics, 3D film production, motion transfer to humanoid robots, etc. Under the umbrella of unsupervised learning methods, a generative approach to modeling the temporal dynamics of human pose can easily be extended not only to action recognition but also to temporal pose generation and future prediction tasks. Hence, there is a need for an efficient unsupervised pose modeling approach which can serve various tasks with improved transferability. In contrast to an end-to-end framework~\cite{taku_motion_synthesis,butepage2017deep} for representation and synthesis of pose dynamics, we propose a novel pose-sequence modeling strategy. Previous works~\cite{martinez2017human,li2017auto} train a single end-to-end model to handle two complex tasks, predicting plausible human poses and modeling their temporal dynamics, and hence are not scalable. In contrast to such black-box approaches, we separately model a distribution of plausible human poses and then use it to model the sequential dynamics of an action.
We propose a novel generative adversarial network (GAN) with an encoder setup designed specifically to provide direct access to the latent representation $z$ that can produce a given 3D pose $x$. In a usual GAN setup, one optimizes over the latent representation $z^*$ to generate a reconstructed pose $x^\prime = G(z^*)$ such that $|x - x^\prime|$ is minimized \cite{yeh2016semantic}. Note that this is an iterative optimization process and hence time-consuming. To avoid such an iterative process in later steps, we simultaneously train an encoder $Pose^{enc}$ such that $z = Pose^{enc}(x)$ can be obtained in a single inference. In the proposed Encoder-GAN (\textit{EnGAN}) setup we simultaneously learn the generator $Pose^{dec}(z)$ along with the encoder $Pose^{enc}(x)$. This leads us to the question: why not use a simple autoencoder instead of \textit{EnGAN}? One of the major benefits of employing a GAN framework is that it can learn a continuous latent embedding subspace, in contrast to the latent space learned by a simple autoencoder setup. Hence, \textit{EnGAN} enables us to randomly sample plausible human pose interpolations in the continuous embedding space between two diverse pose representations. This also facilitates better modeling of pose dynamics, represented as a continuous trajectory in the learned embedding space. The detailed training procedure of \textit{EnGAN} is discussed in Section \ref{sec:engan}. Parenthetically, the proposed pose embedding model can also be used in applications related to 3D pose estimation to deliver plausible 3D poses from various types of 2D projection information. In this paper we model skeleton sequence dynamics, representing a particular action as a trajectory in the pose embedding space. For long temporal sequences, a single recurrent neural network (RNN) fails to efficiently model both short-term and long-term trajectory variations.
However, in the available action recognition datasets, the cues specific to a particular action might be performed over only a short time span. Thus, we employ a stacked two-layer bidirectional RNN to effectively model diverse pose dynamics by designing an RNN autoencoder architecture, \textit{PoseRNN}. A couple of recent approaches~\cite{li2017auto,butepage2017deep,taku_motion_synthesis} have also explored such an RNN autoencoder setup, but in a fully end-to-end fashion. These approaches provide raw joint coordinates directly to the RNN encoder, and consequently the decoder is trained to reconstruct the raw skeleton joint coordinates. In contrast to such approaches, we fully disentangle the learning of the pose embedding from the learning of the temporal pose dynamics. Note that in the proposed \textit{PoseRNN} we feed the sequence of pose embedding features, and the decoder is trained to deliver pose embedding features instead of raw joint coordinates. This disentanglement of tasks allows us to train the pose modeling network in a completely unsupervised setup, over a large amount of unannotated human 3D skeleton data. We also explore a hierarchical feature fusion technique to fuse local joint-level features and individual limb representations along with the global pose embedding. This helps to effectively address action samples focusing on local joint dynamics such as playing with a tablet, typing on a keyboard, writing, etc. We also incorporate a direct loss on the predicted skeleton joints with an additional first-order gradient-based loss to explicitly encourage temporal regularity. Finally, we demonstrate the effectiveness of the learned trajectory embedding for the task of action recognition on multiple datasets with minimal supervision on annotated samples.
\noindent In summary, our main contributions are as follows: \begin{itemize} \vspace{-1mm} \item A novel generative architecture for human pose data (\textit{EnGAN}), which can effectively learn a continuous latent space simultaneously with an encoder setup facilitating one-shot inference. \item A novel method of \textit{hierarchical feature fusion} with a loss on the end task of skeleton joint prediction, followed by incorporation of \textit{first order temporal regularity} to improve human motion generation, which can facilitate learning of more general motion features. \vspace{-3mm} \item A clear demonstration of the effectiveness of both \textit{EnGAN} and the \textit{EnGAN-PoseRNN} combination against available unsupervised approaches. We also demonstrate \textit{state-of-the-art transferability} of the learned representation against other motion embeddings, learned with and without supervision, for the task of fine-grained action recognition on the SBU interaction dataset. \end{itemize} \section{Related Work} \label{sec:related_work} \subsection{Supervised action recognition} There is a cluster of previous work on the use of Recurrent Neural Network (RNN) based models for the task of action recognition from sequences of 3D skeleton joint coordinates. Du \etal ~\cite{du2015hierarchical, du2016representation} proposed a hierarchical end-to-end bidirectional RNN architecture inspired by the kinematic tree representation of limb connections. They use LSTM subnetworks to model five different body parts, viz.\ two arms, two legs and the trunk, and hierarchically combine limb representations in further layers. For efficient learning of part-based dynamics, Shahroudy \etal ~\cite{shahroudy2016ntu} propose to initially split the memory cell of the LSTM into part-based sub-cells, followed by late fusion of features for action recognition, instead of modeling the full body as a whole.
Zhu \etal ~\cite{zhu2016co} propose a co-occurrence learning regularization approach, introducing fully connected layers between LSTM networks to encourage the learning of general connections without using kinematic tree supervision. Liu \etal ~\cite{liu2016spatio} introduced a new gating technique in the LSTM to explicitly handle noise and occlusion in the input data. They also extended the LSTM architecture to the spatio-temporal domain to facilitate efficient modeling of joint dependencies. To further utilize action-specific local cues in an explicit fashion, Song \etal ~\cite{song2017end} adopted a spatio-temporal attention mechanism to selectively focus on discriminative joints in a frame, along with an additional attention level on representations from different time steps. Hu \etal ~\cite{hu2017temporal} proposed the temporal perceptive network (TPNet), where a temporal convolutional subnetwork is embedded between the RNN layers to efficiently model short-term pose dynamics. To explicitly model joint dependencies along with temporal pose dynamics, Wang \etal ~\cite{wang2017modeling} proposed a two-stream RNN architecture with separate streams for temporal dynamics and spatial configuration. \subsection{Human motion synthesis.} Here we provide an overview of previous works related to 3D skeleton synthesis or prediction. To learn a manifold of human motion data, Holden \etal ~\cite{holden2015learning} used convolutional autoencoders to learn the prior probability distribution of plausible pose representations. Akhter \etal \cite{akhter2015pose} proposed a pose prior by learning pose-dependent joint angle limits, which can be used to avoid prediction of invalid 3D poses. Fragkiadaki \etal ~\cite{fragkiadaki2015recurrent} proposed an Encoder-Recurrent-Decoder (ERD) architecture for jointly learning a skeleton embedding along with temporal pose forecasting.
Recently, Martinez \etal ~\cite{martinez2017human} proposed a sequence-to-sequence architecture with residual connections to model short-term motion predictions using a sampling based loss. In the proposed unsupervised human motion modeling approach we follow an entirely new direction. The disentanglement of pose modeling from the sequential nature of pose dynamics not only achieves improved pose generation results but also learns a generalized trajectory embedding space, delivering state-of-the-art transferability for action recognition. \section{Approach} \label{sec:approach} In this section we describe the proposed pose manifold learning methodology, along with carefully designed preprocessing steps that make \textit{EnGAN} invariant to translation, view and scale, both at the global and at the skeleton-joint level. The adopted preprocessing steps not only improve the learning of an efficient pose manifold but also facilitate efficient training of the \textit{PoseRNN} encoder-decoder setup by exploiting the disentangled canonical-pose and global position parameters. In the remainder of this section we elaborate on the learning procedure of \textit{PoseRNN}, followed by the utilization of hierarchical feature fusion relevant for fine-grained action recognition. \subsection{Canonical pose representation} \label{sec:preprocessing} The raw X, Y, Z positions of the skeleton joints in the world coordinate system are converted to root-relative joint positions by shifting the origin to the pelvis joint. On each temporal sequence of these Root Relative joint positions, a Savitzky–Golay filter is applied for motion smoothing. This is followed by Kinematic Skeleton Fitting, where each joint is represented in a polar coordinate system with reference to its parent joint in the kinematic skeleton tree. The scale of each skeleton sample is normalized to ensure a fixed length for a particular limb across all pose samples.
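The root-relative shift and limb-length normalization above can be sketched in a few lines of pure Python. This is a minimal illustration with a hypothetical five-joint skeleton and kinematic tree; the joint layout, the `PARENT` table and the target limb length are assumptions for the sketch, not the paper's actual 25-joint Kinect skeleton:

```python
import math

# Hypothetical toy skeleton: pelvis(0) -> spine(1) -> head(2),
# pelvis(0) -> left_hip(3), pelvis(0) -> right_hip(4).
PARENT = {1: 0, 2: 1, 3: 0, 4: 0}   # kinematic tree (child -> parent)

def root_relative(joints, root=0):
    """Shift the origin to the pelvis joint (root-relative positions)."""
    rx, ry, rz = joints[root]
    return [(x - rx, y - ry, z - rz) for (x, y, z) in joints]

def normalize_limb_lengths(joints, target_len=1.0):
    """Rescale each bone to a fixed length while keeping its direction.
    Children are visited in tree order, so parents are already placed."""
    out = list(joints)
    for child in sorted(PARENT):
        px, py, pz = out[PARENT[child]]            # normalized parent
        opx, opy, opz = joints[PARENT[child]]      # original parent
        cx, cy, cz = joints[child]
        dx, dy, dz = cx - opx, cy - opy, cz - opz  # original bone vector
        norm = math.sqrt(dx * dx + dy * dy + dz * dz) or 1.0
        s = target_len / norm
        out[child] = (px + dx * s, py + dy * s, pz + dz * s)
    return out

pose = [(1.0, 2.0, 0.0), (1.0, 4.0, 0.0), (1.0, 5.0, 0.0),
        (0.0, 2.0, 0.0), (2.0, 2.0, 0.0)]
rel = root_relative(pose)            # pelvis moves to the origin
canon = normalize_limb_lengths(rel)  # every bone now has unit length
```

After this step, poses from subjects with different body sizes become directly comparable, which is the point of the scale normalization described above.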
Further, we define a View Invariant \textit{Skeleton Coordinate System}, whose coordinate axes (X, Y and Z) are represented in terms of the skeleton-joint positions in the \textit{Root Relative Coordinate System}. A sequential rotational transform is applied about the X, Y and Z axes, with angular changes $\alpha$, $\beta$ and $\gamma$ respectively, on the joint positions in the Root Relative Coordinate System to obtain the corresponding skeleton in the View Invariant \textit{Skeleton Coordinate System}. Now, keeping the location of the torso joints fixed, we represent the remaining joints in the coordinate system defined at their respective parent joints, using Global to Local Coordinate Conversion~\cite{akhter2015pose}. This is done to capture the most relevant joint-level local variations, which are agnostic to changes in the skeleton at the global level. We define this final form as the \textit{Canonical Pose Representation}. The above-mentioned carefully selected preprocessing steps help to model accurate priors of 3D human pose independent of global variations. Our pose embedding model also follows a hierarchical limb-based feature fusion to efficiently model the correlation between joints and limbs. This facilitates learning of an embedding space with improved generalization, avoiding invalid 3D pose samples. \begin{figure*} \begin{center} \includegraphics[width=0.8\linewidth]{images/fig_2_wacv.pdf} \caption{Illustration of the EnGAN pipeline for learning the skeleton pose embedding space in a hierarchical way} \label{fig:fig_2} \end{center} \vspace{-3mm} \end{figure*} \subsection{Learning pose embedding framework \textbf{\textit{EnGAN}}} \label{sec:engan} Consider $x_{real} \in X_{real}$ as a sample of the canonical pose representation obtained from the above-mentioned transformations on the raw 3D skeleton joint coordinates.
Considering $p(x_{real})$ as the distribution of the canonical pose representation, we plan to learn a latent representation $z_{real} \in Z_{real}$ such that $z_{real} = Pose^{enc}(x_{real})$. Here, we constrain the distribution of the latent representation to the domain $[-1, 1]$ along all 32 dimensions of the latent representation $z$. One major distinguishing factor of \textit{EnGAN} is that it efficiently learns the backward projection from $X$ to $Z$ simultaneously with the learning of a continuous latent manifold, as seen in Generative Adversarial Networks (GANs). Moreover, a Variational Autoencoder (VAE) is also not a suitable choice with regard to the specific requirement of an efficient back projection, i.e. $z = Pose^{enc}(x)$. In a VAE setup, efficient generation capability with continuous latent manifold modeling is achieved by carefully balancing the Kullback–Leibler divergence and the reconstruction error term in the final objective function. Also, the encoder network of a VAE architecture predicts the parameters of the distribution $p(z|x)$ and hence introduces uncertainty in the prediction of the exact $z$ which can produce the given $x$. A straightforward autoencoder model fails to learn a continuous pose manifold and thus does not support efficient interpolation or traversal in the learned latent space. To alleviate the above-mentioned drawbacks of standard approaches, we propose a novel learning protocol to achieve the specific requirement of minimum reconstruction error simultaneously with efficient manifold modeling. As shown in the right section of Figure \ref{fig:fig_2}, the entire \textit{EnGAN} setup consists of three individual networks, namely: a) the pose encoder $Pose^{enc}$, b) the pose decoder $Pose^{dec}$ and c) the pose discriminator $Pose^{disc}$. The architectural design is similar across all three networks and is inspired by the kinematic tree of limb connections, as shown in the left section of Figure \ref{fig:fig_2}.
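The contrast between iterative latent optimization (as in \cite{yeh2016semantic}-style inversion) and the one-shot inference offered by $Pose^{enc}$ can be illustrated with a toy one-dimensional generator. Everything here is a hypothetical stand-in, not the actual networks: $G(z) = 2z$ plays the role of $Pose^{dec}$ and its exact inverse plays the role of a trained encoder:

```python
def G(z):
    """Toy stand-in for Pose_dec: a linear 'generator'."""
    return 2.0 * z

def E(x):
    """Toy stand-in for a trained Pose_enc: one-shot inference."""
    return x / 2.0

def invert_by_optimization(x, steps=100, lr=0.1):
    """Recover z by gradient descent on (x - G(z))**2, i.e. the
    iterative inversion a plain GAN requires (many passes of G)."""
    z = 0.0
    for _ in range(steps):
        grad = 2.0 * (G(z) - x) * 2.0   # d/dz (G(z) - x)^2, with G'(z) = 2
        z -= lr * grad
    return z

x = 3.0
z_iter = invert_by_optimization(x)  # 100 generator evaluations
z_enc = E(x)                        # a single forward pass
```

Both routes recover the same latent code, but the encoder does it in one inference, which is exactly why \textit{EnGAN} trains $Pose^{enc}$ jointly instead of inverting the generator at test time.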
Motivated by the work of Du \etal ~\cite{du2015hierarchical}, we follow a similar hierarchical fusion of limb features for individual pose frames, in contrast to their approach of fusing temporal information at different hierarchical levels using bidirectional RNNs. The five different body parts, viz.\ two arms, two legs and the trunk, are first processed independently and then fused at the next level of the hierarchy into upper-body and lower-body representations. Towards the end, a single representation for the full body is obtained by fusing both upper and lower body features. The training procedure of \textit{EnGAN} can be broadly divided into three consecutive steps. Motivated by the CycleGAN~\cite{zhu2017unpaired} configuration, we first train a cycle-autoencoder without the adversarial discriminator loss. We sample $x_{real}$ from the canonical-pose transformed dataset, whereas $z_{rand}$ is sampled randomly from a uniform distribution on $[-1, 1]^{32}$. A schematic diagram of the variable notations along with the network arrangement is given in Figure \ref{fig:fig_2}(right). While training the cyclic autoencoder, we utilize a sum of reconstruction losses on both $x_{real}$ and $z_{rand}$, i.e. $\mathcal{L}_{recon} = |x_{real}-x_{recon}| + |z_{rand} - z_{recon}|$. This reduces the reconstruction loss more aggressively to a lower value, even with a limited 32-dimensional embedding space, as compared to the counterpart trained with an adversarial discriminator loss. This improves the quality of pose generation but does not yield a continuous embedding space, which is needed to facilitate efficient interpolation and traversal. Hence, in the second step of the learning protocol we introduce a discriminator loss using the discriminator network $Pose^{disc}$, with a combined objective function $ \mathcal{L} = \mathcal{L}_{recon} + \lambda\mathcal{L}_{adv} $.
Here, \begin{figure*} \begin{center} \includegraphics[width=0.95\linewidth]{images/fig_3_wacv.pdf} \caption{End-to-end architecture of \textit{PoseRNN} for learning actions as a trajectory in the pose manifold, enabling classification over the learned trajectory embedding} \label{fig:fig_rnn} \end{center} \vspace{-4mm} \end{figure*} \vspace{-4mm} \begin{equation*} \begin{split} \mathcal{L}_{adv} = & -\mathbb{E}_{X_{real}}[\log{(Pose^{disc}(X_{real}))}] \\ & -\mathbb{E}_{Z_{rand}}[\log{(1-(Pose^{disc}( Pose^{dec}(Z_{rand} ))))}] \end{split} \end{equation*} However, we introduce $\mathcal{L}_{adv}$ initially with a very small weighting value $\lambda = \lambda_0$ as compared to the default $\mathcal{L}_{recon}$ loss. In a further step, we gradually increase the value of the weighting factor to $10\lambda_0$. Note that here $\mathcal{L}_{adv}$ is applied on the prediction $X_{fake} = Pose^{dec}(Z_{rand})$ against the true canonical pose distribution $p(x_{real})$. \subsection{Learning trajectory embedding \textbf{\textit{PoseRNN}}} \label{sec:posernn} After learning the pose embedding manifold with both forward and backward projection networks (i.e. $Pose^{enc}$ and $Pose^{dec}$ respectively), we model pose dynamics as a trajectory in the embedding space, as shown in the left section of Figure \ref{fig:fig_rnn}. An RNN encoder-decoder architecture is incorporated to learn a representation which can embed the trajectory information in the previously learned pose manifold. Here, the final hidden representation of the encoder RNN is treated as a \textit{trajectory embedding}. Moreover, to efficiently model both short-term and long-term pose dynamics for the final goal of action recognition, we employ a multi-layer bidirectional LSTM architecture~\cite{graves2005framewise} for both the sequence encoder and decoder RNNs, i.e. $biRNN^{enc}$ and $biRNN^{dec}$, as shown in Figure \ref{fig:fig_rnn}.
The concatenated output of the forward and backward LSTMs of the first layer is fed as the input sequence to the second-layer bidirectional LSTM. The final trajectory embedding (or motion embedding) is obtained from the last layer as a function (fully-connected layer) of the final hidden state representations of both the forward and backward LSTMs. Similarly, for the decoder RNN we consider a bidirectional setup where the initial hidden state of both the forward and backward LSTMs is obtained as a function of the encoded trajectory embedding. Following Srivastava \etal~\cite{srivastava2015unsupervised}, for the decoder we chain the previous prediction as input for the next time-step. Note that while the forward LSTM decodes the trajectory in the forward direction, the backward LSTM decodes it in reverse order. The final prediction of the $z_{pose}$ sequence is obtained as a function of the outputs from both the forward and backward LSTMs. While training \textit{PoseRNN}, considering the end goal of effective human motion prediction, we impose a direct loss on the predicted ${x}^\prime_{t}$ sequence, i.e. ${x^\prime}_{t} = Pose^{dec}({z^\prime_t})$, with the parameters of the $Pose^{dec}$ network frozen from the \textit{EnGAN} training. Hence, instead of minimizing $\sum_t \vert {z^\prime}_t - z_t \vert$, we minimize the following loss function: $\mathcal{L}^{recon}_{RNN} = \sum_t \vert {x^\prime}_t - x_t \vert$. Additionally, to enforce temporal consistency, we define $\delta x_t = \vert x_t - x_{t-1} \vert$ (and $\delta {x^\prime}_t$ analogously) to formulate a first order smoothness loss: $\mathcal{L}_{RNN}^{grad} = \sum_t \vert \delta {x^\prime}_t - \delta x_t \vert $. Finally, the full \textit{PoseRNN} framework is trained using a combined loss function, $\mathcal{L}_{RNN} = \mathcal{L}^{recon}_{RNN} + \hat{\lambda} \mathcal{L}_{RNN}^{grad}$. This greatly improves the motion reconstruction quality as compared to using only $\mathcal{L}^{recon}_{RNN}$ as the final loss function.
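The combined objective $\mathcal{L}_{RNN} = \mathcal{L}^{recon}_{RNN} + \hat{\lambda} \mathcal{L}_{RNN}^{grad}$ can be sketched directly from its definition. This is a hedged pure-Python illustration over toy per-frame coordinate lists; the function name, the toy sequences and the $\hat{\lambda}$ value are assumptions made only for the sketch:

```python
def l1(a, b):
    """Per-frame L1 distance between two coordinate vectors."""
    return sum(abs(p - q) for p, q in zip(a, b))

def posernn_loss(pred, target, lam=0.1):
    """L_RNN = sum_t |x'_t - x_t| + lam * sum_t |dx'_t - dx_t|,
    where dx_t = |x_t - x_{t-1}| per coordinate (first-order smoothness)."""
    recon = sum(l1(p, t) for p, t in zip(pred, target))
    grad = 0.0
    for t in range(1, len(target)):
        d_pred = [abs(a - b) for a, b in zip(pred[t], pred[t - 1])]
        d_true = [abs(a - b) for a, b in zip(target[t], target[t - 1])]
        grad += l1(d_pred, d_true)
    return recon + lam * grad

# Toy 1-D trajectory: the prediction overshoots the middle frame.
target = [[0.0], [1.0], [2.0]]
pred = [[0.0], [1.5], [2.0]]
loss = posernn_loss(pred, target, lam=0.1)  # recon 0.5 + 0.1 * grad 1.0 = 0.6
```

The gradient term penalizes the overshoot twice (once per adjacent frame difference), which is how the smoothness loss discourages jittery, temporally inconsistent reconstructions.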
As the canonical pose representation does not contain any global view or translation information, the \textit{trajectory embedding} acquired from \textit{PoseRNN} alone is not enough to classify actions involving relative global position variations, e.g. \textit{giving something to other person}, \textit{touch other person's pocket}, \textit{handshaking}, etc. Hence, global position and view information is provided as an additional input along with the output of $Pose^{enc}$ while training the full \textit{PoseRNN} pipeline. The corresponding reconstruction loss is also included in the updated final loss function $\mathcal{L}_{RNN}$, with a similar first order smoothness constraint. To learn an improved trajectory representation that encourages better action recognition results, the \textit{PoseRNN} model should be able to capture local limb dynamics efficiently along with the global pose variations. Classes like \textit{eating}, \textit{drinking}, \textit{touching neck}, \textit{touching head}, etc. significantly involve local limb and specific skeleton joint dynamics, in contrast to the full-body pose variations inferred from the learned pose embedding. Furthermore, the temporal interaction among the four limbs, namely two arms and two legs, can also be leveraged explicitly to improve the modeling of local short-term trajectory dynamics. However, recent challenging action recognition datasets also contain fine-grained categories related to very local joint dynamics, such as \textit{typing}, \textit{writing}, \textit{playing with tablet}, etc. Therefore, to address the above-mentioned challenges, we propose to use features from three different levels of hierarchy to efficiently model the local limb and joint level dynamics in the proposed \textit{PoseRNN} framework. As the input representation we fuse individual limb features acquired from the limb-level hierarchy of the learned $Pose^{enc}$ along with the global pose vector, $z$.
Furthermore, the root-relative local joint coordinates are also included in the input representation to the final \textit{PoseRNN} framework. Thus, \textit{PoseRNN} takes a concatenated feature representation consisting of the following: a) the global pose embedding, b) four limb embedding features from $Pose^{enc}$, c) root-relative 3D joint coordinates after scale normalization and, d) parameters related to global position (i.e. translation information from the raw hip coordinates and view information, sines and cosines of the Euler angles $\alpha$, $\beta$ and $\gamma$). \section{Experimental evaluation} \label{sec:experiment} In this section we discuss experimental evaluations demonstrating the effectiveness of the proposed configurations of \textit{EnGAN} and \textit{PoseRNN} for 3D skeletal pose modelling and sequential trajectory learning, respectively. We have trained the entire temporal pose modelling setup with ample data samples collected from various datasets captured using a Kinect device. \subsection{Datasets and experimental settings} \vspace{-0.5mm} \noindent \textbf{NTU RGB+D Dataset}~\cite{Shahroudy_2016_CVPR} This Kinect-captured dataset is currently the largest dataset with RGB+D videos and skeleton data for human action recognition, with 60 different action category annotations. The full dataset contains 56000 sample sequences across diverse fine-grained categories related to daily activities, also including different health-related actions. For each frame, 25 joint coordinates are provided, captured from different camera views with a good diversity in face and body orientations. We have used the available Cross-View (CV) and Cross-Subject (CS) splits for fair evaluation of the proposed unsupervised feature learning framework against previous state-of-the-art methods. The given 60 action classes contain fine actions related to local joint movements, such as typing, writing, etc.
along with multi-person interaction-based categories such as \textit{"giving something to other person"}, \textit{"punching other person"}, \textit{"walking towards a person"}, \textit{"pat on back of other person"}, etc. To demonstrate the effectiveness of the learned trajectory embedding representation for the task of action recognition, we train a separate fully connected classification layer on the hidden temporal representations of $biRNN^{enc}$. \vspace{2mm} \noindent \textbf{SBU Kinect Interaction dataset}~\cite{yun2012two}: This Kinect-captured interaction dataset consists of 282 sequence samples across 8 different classes. We use the standard subject-independent 5-fold splits to evaluate our unsupervised sequential pose modelling representation against previous state-of-the-art methods. To adapt the proposed \textit{PoseRNN} architecture to multi-person interaction, we first apply $biRNN^{enc}$ individually to both sequences, and then the classification layer is trained on the concatenated hidden-layer activations acquired from the pose sequences of both individuals. We train two different \textit{PoseRNN} frameworks, for the 15-joint and 25-joint skeleton embeddings obtained from the corresponding \textit{EnGAN} training. For a fair evaluation of transferability, we use samples from the PKU-MMD~\cite{liu2017pku} dataset for training both the 15-joint and 25-joint \textit{EnGAN} setups. The PKU-MMD dataset consists of 1076 video sequences across 51 action categories, mostly in line with the categories of the NTU RGB+D dataset. This ensures enough diversity in individual pose samples and skeleton sequences for efficient modeling of local joint to global full-body variations.
\begin{table}[b] \begin{center} \vspace{-2mm} \resizebox{\columnwidth}{!}{% \begin{tabular}{|l|c|c|c|c|} \hline Training Scheme & $\mathcal{L}_{x_{recon}}$ & $\mathcal{L}_{z_{recon}}$ & $\mathcal{L}_{recon}$ & Acc \\ \hline\hline GAN ($\mathcal{L}_{recon}$ + $\mathcal{L}_{adv}$) & 0.130 & \textbf{0.049} & 0.179 & 64.3\%\\ VAE ($\mathcal{L}_{x_{recon}}$ + $\mathcal{L}_{KLD}$) & 0.148 & 0.179 & 0.327 & 73.4\% \\ Auto Encoder & \textbf{0.051} & 0.245 & 0.297 & 87.3\% \\ Auto Encoder ($\mathcal{L}_{recon}$) + $\mathcal{L}_{adv}$ & 0.109 & 0.092 & 0.201 & 68.2\% \\ Proposed \textit{EnGAN} & 0.070 & 0.079 & \textbf{0.149} & \textbf{58.1\%} \\ \hline \end{tabular} } \end{center} \caption{Comparison of various training setups for \textit{EnGAN} on the PKU-MMD dataset, in terms of reconstruction capability and discriminability of the critic network (lower is better).} \vspace{-2mm} \label{table:table_1} \end{table} \subsection{Ablation study} \label{subsec:ablation} \noindent \textbf{Effectiveness of the \textit{EnGAN} framework:} We have performed experiments with different autoencoder setups. We also train a critic network, distinct from the discriminator network, to distinguish the samples generated by these setups from the real ones. To measure the discriminability of the critic network for a given setup, we evaluate the accuracy of the critic network on the task of classifying the generated samples as fake. Note that here pose samples are generated by sampling a random $z \sim P(z)$ (in this case, $U(-1, 1)$) and mapping it back to a pose $x$ through the decoder. Table \ref{table:table_1} shows the reconstruction errors and discriminability on a held-out set of diverse sample pose frames for the different training protocols.
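The evaluation protocol above can be made concrete with a small sketch: sample $z \sim U(-1,1)$, decode, and let the critic score the generated poses. The toy critic scores, the $0.5$ threshold, and the latent dimension below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_latent(n, dim):
    # z ~ P(z) = U(-1, 1), the prior used for pose generation above
    return rng.uniform(-1.0, 1.0, size=(n, dim))

def mean_abs_error(a, b):
    # L_{x_recon} and L_{z_recon} are mean absolute reconstruction errors
    return float(np.mean(np.abs(a - b)))

def critic_fake_accuracy(scores_on_generated, threshold=0.5):
    # fraction of generated samples the critic labels as fake
    # (score below threshold); lower means more realistic generations
    scores = np.asarray(scores_on_generated)
    return float(np.mean(scores < threshold))

# toy demo: latent samples and made-up critic scores
z = sample_latent(1000, 32)
acc = critic_fake_accuracy(rng.uniform(0.0, 1.0, size=1000))
```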
\begin{table}[b] \vspace{-1mm} \begin{center} \begin{tabular}{|l|c|c|} \hline Input Features & NTU & PKU \\ \hline\hline $(P_{J})$ & 66.1 \% & 75.0 \% \\ $(E_{P})$ & 71.3 \% & 77.3 \% \\ $(P_{J})$ + $(E_{P})$ & 74.8 \% & 82.4\% \\ \textbf{$(P_{J})$ + $(E_{P})$ + $(E_{L})$} & \textbf{78.7} \% & \textbf{85.9} \% \\ \hline \end{tabular} \end{center} \caption{Comparison of feature fusion techniques: $(P_{J})$ joint positions, $(E_{P})$ pose embeddings, $(E_{L})$ limb embeddings.} \label{table:table_2} \end{table} \begin{figure*}[!t] \begin{center} \includegraphics[width=0.96\textwidth]{images/fig_trag.pdf} \caption{Illustrations of the pose manifold trajectories of action sequences -- ground truth (blue) and reconstructed (red) by \textit{PoseRNN} -- projected to 2D via PCA. The top two illustrations, for sequence length 30, represent the trajectories with the highest reconstruction losses.} \label{fig:fig_trajectory} \vspace{-5mm} \end{center} \end{figure*} First, we trained the complete \textit{EnGAN} setup with both $\mathcal{L}_{recon}$ and $\mathcal{L}_{adv}$ from scratch. As is clear from the metrics in Table \ref{table:table_1}, although this first setup achieves a better latent reconstruction than a simple autoencoder trained with only the $\mathcal{L}_{recon}$ loss, it performs quite sub-optimally for skeletal reconstruction, as measured by $\mathcal{L}_{x_{recon}} = |x_{real}-x_{recon}|$. In contrast, a plain autoencoder setup, even though it performs extremely well on skeleton reconstruction alone, shows the expected suboptimal performance in learning a continuous pose manifold, as measured by $\mathcal{L}_{z_{recon}} = |z_{rand} - z_{recon}|$ and the critic accuracy. We report results for the proposed training scheme (\textit{EnGAN}) of gradually increasing the weighting factor $\lambda$ associated with $\mathcal{L}_{adv}$ in the combined loss function $\mathcal{L}$.
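The gradual increase of $\lambda$ can be sketched as a simple schedule. The linear ramp shape and its endpoint below are assumptions, since the text only specifies a gradual increase of the adversarial weight.

```python
def adv_weight(step, total_steps, lam_max=1.0):
    # linearly ramp the adversarial weight lambda from 0 to lam_max;
    # the ramp shape and lam_max are illustrative assumptions
    return lam_max * min(1.0, step / total_steps)

def combined_loss(l_recon, l_adv, step, total_steps):
    # L = L_recon + lambda(step) * L_adv
    return l_recon + adv_weight(step, total_steps) * l_adv
```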
The proposed training protocol improves the reconstruction loss (Figure \ref{fig:fig_quality}a) beyond even the autoencoder setup, while enabling the learning of a continuous pose manifold (Figure \ref{fig:fig_quality}b) along with an efficient one-shot transformation from the skeletal-pose space $X$ to the latent space $Z$. \vspace{2mm} \noindent \textbf{Effectiveness of the trajectory embedding learned using \textit{PoseRNN}:} We evaluated multiple \textit{PoseRNN} setups to obtain the best trajectory embedding representation, i.e.\ one that can reproduce the pose sequence with minimum reconstruction loss. The effectiveness of different input representations ($P_{J}$: joint positions, $E_{P}$: pose embeddings, $E_{L}$: limb embeddings) for the proposed \textit{PoseRNN} setup is demonstrated in Table \ref{table:table_2}. Using only the joint coordinates $(P_{J})$, without incorporating \textit{EnGAN} into the \textit{PoseRNN} framework, delivers suboptimal performance. This validates the effectiveness of the disentangled learning of pose and trajectory embeddings for both the pose synthesis and the unsupervised feature learning task. We also observe that, owing to the disentanglement of pose learning from sequence learning, reconstruction of pose-embedding sequences by the proposed \textit{PoseRNN} network scales to as many as 120 frames, which is highly unlikely for end-to-end reconstruction frameworks~\cite{li2017auto,taku_motion_synthesis}, which show comparatively higher reconstruction losses. Pose manifold trajectories of action sequences (blue: ground truth, red: reconstructed) are illustrated in Figure \ref{fig:fig_trajectory}. The model is able to reconstruct and generate pose trajectories fairly well for sequences as long as 120 frames, and hence efficiently captures the inherent features of an action sequence.
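The 2D trajectory visualizations of Figure \ref{fig:fig_trajectory} can be reproduced by fitting PCA on the ground-truth embedding sequence and projecting both trajectories into the same plane. The sketch below uses a synthetic 120-frame, 32-d embedding sequence; the dimensionality and data are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def project_trajectories(gt_seq, rec_seq, n_components=2):
    """Fit PCA on the ground-truth embedding sequence and project both
    the ground-truth and reconstructed trajectories into the same plane."""
    pca = PCA(n_components=n_components).fit(gt_seq)
    return pca.transform(gt_seq), pca.transform(rec_seq)

# toy sequences: 120 frames of a 32-d pose embedding tracing a loop
rng = np.random.default_rng(2)
t = np.linspace(0.0, 2.0 * np.pi, 120)
basis = rng.normal(size=(2, 32))
gt = np.stack([np.sin(t), np.cos(t)], axis=1) @ basis
rec = gt + 0.05 * rng.normal(size=gt.shape)  # mimic small reconstruction error
gt2d, rec2d = project_trajectories(gt, rec)
```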
\begin{table}[b]\small \vspace{-1mm} \begin{center} \begin{tabular}{|l|c|} \hline Methods & Average $L_{x_{recon}}(t)$ \\ \hline\hline \textit{PoseRNN}(baseline) & 0.458 \\ {Holden~\etal~\cite{holden2015learning}} & 0.402 \\ \textit{EnGAN-PoseRNN} & \textbf{0.342} \\ \hline \end{tabular} \end{center} \caption{Comparison of the time-averaged reconstruction loss for the prediction of the final skeleton pose on the NTU dataset (120 frames).} \label{table:table_5a} \end{table} \subsection{Comparison with existing approaches} \label{subsec:comparison} \noindent \textbf{Effectiveness of \textit{EnGAN-PoseRNN}:} The proposed motion modeling framework \textit{EnGAN-PoseRNN} is compared against a baseline, \textit{PoseRNN}(baseline), which takes raw canonical skeleton joint coordinates and avoids learning the pose embedding (\textit{EnGAN}). We also compare against the convolutional autoencoder proposed by Holden \etal~\cite{holden2015learning}, by training it on the canonical skeleton joint locations extracted from the NTU RGB+D dataset, with input sequence length and dimensions similar to the \textit{EnGAN-PoseRNN} setup. We follow the exact architecture and regularized loss function proposed in ~\cite{holden2015learning}. Table \ref{table:table_5a} reports the average reconstruction loss of the predicted poses over 120 frames on the test set. It clearly demonstrates the superior expressiveness of the proposed \textit{EnGAN-PoseRNN} model in encoding an entire motion sequence in a single trajectory embedding vector.
\begin{table}[b!]\small \begin{center} \vspace{-2mm} \begin{tabular}{|l|c|c|c|} \hline Methods & \begin{tabular}[c]{@{}c@{}}Feature \\ supervision\end{tabular} & CS & CV \\ \hline\hline ST-LSTM~\cite{liu2016spatio} & Full & 69.2 & 77.7 \\ STA-LSTM~\cite{song2017end} & Full & 73.4 & 81.2 \\ TS-LSTM~\cite{Lee_2017_ICCV} & Full & 75.9 & 82.5 \\ GCA-LSTM~\cite{liu2017global} & Full & 74.4 & 82.8 \\ URNN-2L-T~\cite{li2017adaptive} & Full & 74.6 & 83.2 \\ TPNet~\cite{hu2017temporal} & Full & 75.3 & 84.0 \\ VA-LSTM~\cite{zhang2017view} & Full & \textbf{79.4} & \textbf{87.6} \\ \hline \hline \textit{VAE-PoseRNN} & Unsup. & 56.4 & 63.8 \\ {} & Semi & 61.2 & 69.8 \\ \hline \textit{PoseRNN}(baseline) & Unsup. & 59.8 & 69.0 \\ {} & Semi & 69.7 & 77.9 \\ \hline {Holden~\etal~\cite{holden2015learning}} & Unsup. & 61.2 & 70.2 \\ {} & Semi & 72.9 & 81.1 \\ \hline \textit{EnGAN-PoseRNN} & Unsup. & 68.6 & 77.8 \\ {} & Semi & \textbf{78.7} & \textbf{86.5} \\ \hline \end{tabular} \end{center} \caption{Comparisons on the NTU dataset, by feature supervision level, for the standard Cross-Subject (CS) and Cross-View (CV) settings.} \vspace{-3mm} \label{table:table_5} \end{table} \vspace{2mm} \noindent \textbf{Effectiveness of the motion embedding for action recognition on NTU:} Following the unsupervised feature learning literature~\cite{pathakCVPR17learning,pathakCVPR16context}, we compose two different settings, viz.\ a) unsupervised and b) semi-supervised. \begin{figure*}[t] \begin{center} \includegraphics[width=1.0\textwidth]{images/fig_5_wacv_.pdf} \caption{Illustrations of the pose reconstruction quality (left) and grid interpolation of skeleton poses (right) generated from a continuous pose-embedding manifold by \textit{EnGAN}.
Note that, in the reconstruction results on the left side of the figure, each pair of skeletons shows the ground-truth skeleton (left) and the reconstructed skeleton (right), respectively.} \label{fig:fig_quality} \vspace{-5mm} \end{center} \end{figure*} In the unsupervised setting, a single fully-connected classification layer (i.e.\ a linear classifier) is trained on the final trajectory embedding vector, i.e.\ the output of $biRNN^{enc}$. To effectively handle interaction-based action categories, the classification layer is trained on the concatenated trajectory embeddings of two different motion sequences. For single-subject action categories, we repeat the same motion sequence twice before feeding it to the final classification layer. Moreover, we train a single classification layer for both interaction and non-interaction based action categories. We use only 40\% of the labelled training samples, which is enough to train a single fully-connected layer, in contrast to the full 100\% used by the fully supervised methods. Note that, while training in this setting, the parameters of the underlying $Pose^{enc}$ followed by $biRNN^{enc}$ are kept frozen, to evaluate the discriminability of the unsupervisedly learned motion embedding. In the semi-supervised setting, we allow the parameters of $biRNN^{enc}$ to be updated (i.e.\ fine-tuning) along with the previously introduced classification layer (initialized from the unsupervised setting), on 40\% of the labelled training samples. Here, semi-supervised refers to the use of a semi-supervisedly learned motion embedding, in contrast to the previous unsupervised setting. Table \ref{table:table_5} reports the comparison of \textit{EnGAN-PoseRNN} in both the unsupervised and semi-supervised settings against the same settings of the convolutional autoencoder proposed by Holden \etal~\cite{holden2015learning}. The results clearly highlight the superiority of the proposed \textit{EnGAN-PoseRNN} over the unsupervised framework of Holden \etal.
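The unsupervised evaluation protocol above amounts to a linear probe on frozen embeddings, with two-subject embeddings concatenated (and single-subject embeddings duplicated). The sketch below uses synthetic embeddings and scikit-learn's logistic regression as a stand-in for the single fully-connected layer; all dimensions and data are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_embedding(emb_a, emb_b=None):
    """Concatenate the trajectory embeddings of two subjects; for
    single-subject actions the same embedding is repeated."""
    if emb_b is None:
        emb_b = emb_a
    return np.concatenate([emb_a, emb_b])

# toy linear probe on frozen embeddings (dims and data are illustrative)
rng = np.random.default_rng(3)
n, d = 200, 16
emb = rng.normal(size=(n, d))
labels = (emb[:, 0] > 0).astype(int)            # synthetic separable labels
X = np.stack([pair_embedding(e) for e in emb])  # shape (n, 2d)
clf = LogisticRegression().fit(X, labels)       # single linear layer analogue
acc = clf.score(X, labels)
```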
We also report accuracies of other fully supervised approaches in both the cross-subject (CS) and cross-view (CV) setups, to demonstrate competitive state-of-the-art performance. \vspace{2mm} \noindent \textbf{Effectiveness of transferability on SBU:} Following the motivation regarding the need for an unsupervised feature learning framework, we demonstrate the transferability (or generalizability) of the learned trajectory embedding from the NTU to the SBU dataset. Instead of training \textit{PoseRNN} on the SBU dataset, we evaluate the discriminability of the embedding learned on the NTU dataset. For a fair comparison, we first train \textit{TS-LSTM}~\cite{Lee_2017_ICCV} and \textit{ST-LSTM}~\cite{liu2016spatio} (using the publicly available code) with full supervision on the NTU dataset. Then we discard the final NTU classification layer and train a newly introduced SBU classification layer on 40\% of the labelled SBU training set. Following the previous unsupervised and semi-supervised settings, we introduce two settings: a) unsupervised transfer, and b) semi-supervised transfer. In unsupervised transfer, only the last classification layer is trained, using 40\% of the labelled SBU train set. In semi-supervised transfer, the full network is fine-tuned after initialization from the unsupervised-transfer framework. Table \ref{table:table_6} shows the classification accuracy in the unsupervised and semi-supervised transfer settings, for both the supervisedly learned motion features (\textit{TS-LSTM} and \textit{ST-LSTM}) and the unsupervisedly learned motion features (Holden \etal\ and \textit{EnGAN-PoseRNN}). This clearly demonstrates the generalizability of the embedding learned by the proposed unsupervised framework.
\begin{table}[t]\small \begin{center} \begin{tabular}{|l|c|c|} \hline Methods & \begin{tabular}[c]{@{}c@{}}Unsupervised \\ transfer\end{tabular} & \begin{tabular}[c]{@{}c@{}}Semi-supervised \\ transfer\end{tabular} \\ \hline\hline ST-LSTM~\cite{liu2016spatio} & 73.1 & 87.1 \\ TS-LSTM~\cite{Lee_2017_ICCV} & 72.7 & 86.5 \\ \hline\hline \textit{PoseRNN}(baseline) & 73.1 & 81.8 \\ {Holden~\etal~\cite{holden2015learning}} & 73.4 & 82.6 \\ \textit{EnGAN-PoseRNN} & \textbf{77.0} & \textbf{88.2} \\ \hline \end{tabular} \end{center} \caption{Comparison of the transfer ability (NTU to SBU) of the unsupervisedly learned motion features against their supervised counterparts. Note that fine-tuning in the semi-supervised setting is performed according to the performance on a held-out validation set.} \vspace{-3mm} \label{table:table_6} \end{table} \section{Conclusion} In this paper, we have proposed a generative model for unsupervised feature representation learning of 3D human skeleton poses and action sequences. Experimental results and visualizations show the qualitative strengths of the proposed framework for temporal pose generation. We demonstrate the effectiveness and transferability of the learned trajectory embedding by empirical evaluation on the task of fine-grained action recognition on multiple datasets, considering interaction-based categories as well. Performance on standard action recognition datasets in both the unsupervised and semi-supervised setups is competitive with fully supervised state-of-the-art methods. \vspace{2mm} \noindent \textbf{Acknowledgements} This work was supported by a CSIR Fellowship (Jogendra), and a project grant from Robert Bosch Centre for Cyber-Physical Systems, IISc. {\small \bibliographystyle{ieee}
\section{The Taming of the Shrew}\label{section1} \sectionquote{wherein the problem to be solved is introduced that looks simple but defies solution for years.} In the theory of strong interactions, quantum chromodynamics, one can dream of finding the wave functional describing its ground state (vacuum) in the Schr\"odinger representation: \twolineequation{VWF} {\fbox{$\Psi_0\left[u^i_A(x),d^i_A(x),s^i_A(x),c^i_A(x),b^i_A(x),t^i_A(x);A^a_\mu(x)\right]$}\quad\phantom{A}} {} {A=1,2,3,4;\quad i=1,2,3;\quad a=1,2,\dots,8; \quad\mu=0,1,2,3;} that should encompass colour confinement, chiral symmetry breaking, and other observed phenomena. Even if one forgets about the subtleties of how to make such an object a mathematically well-defined entity, the problem still looks very difficult, if not utterly hopeless: in our world with six flavours of quarks with three colours, each represented by a Dirac spinor of four components, and with eight four-vector gluons, the vacuum wave functional depends on \textit{104 fields at each point of space} (not taking gauge invariance into account). Still, one can simplify QCD considerably in many ways, hoping that the amputee will share (some of) the most important features with the full theory \cite{Feynman:1981ss}. One can omit quarks; use two colours instead of three (\textit{i.e.\/} reduce the gauge group from SU(3) to SU(2)); discretize space and time (go to the lattice formulation); and eventually investigate the problem in lower-dimensional spacetime. Cut to the bone, in (3+1)-dimensional SU(2) Yang--Mills theory the problem is to find the lowest-energy eigenstate of the temporal-gauge hamiltonian satisfying \onelineequation{SchR}{\int d^3x\left(-\oh\frac{\delta^2}{\delta A^a_k(x)^2}+\of F^a_{ij}(x)^2\right)\Psi_0[A]=E_0\Psi_0[A]} together with the Gau\ss-law constraint \onelineequation{Gauss}{\left(\delta^{ac}\partial_k+g\varepsilon^{abc} A^b_k(x)\right)\frac{\delta\Psi_0[A]}{\delta A^c_k(x)}=0.}
Though it looks simple, attempts to solve this equation can claim at most partial success. There are, however, a few things that have been known about the solution for decades: 1.~If we set $g\to 0$, the Schr\"odinger equation reduces to that of (3 copies of) electrodynamics and the solution is well-known: \onelineequation{g0}{\Psi_0[A]\ \ {\stackrel{{g=0}}{=}}\ \ {\cal{N}}\exp\left[-\frac{1}{4}{\displaystyle\int} d^3x\;d^3y\; F^a_{ij}(x)\;{\displaystyle \left(\frac{\delta^{ab}}{\sqrt{-\nabla^2}}\right)_{xy}}F^b_{ij}(y)\right].} 2.~The ground state must be gauge-invariant. The simplest form one can imagine that reduces to Eq.~(\ref{g0}) in the free-field limit is \onelineequation{GIVWF}{\Psi_0[A]\; =\; {\cal{N}}\exp\left[-\frac{1}{4}{\displaystyle\int} d^dx\;d^dy\; F^a_{ij}(x)\;{\displaystyle {{\cal{K}}^{ab}_{xy}}[-{\cal{D}}^2]}\;F^b_{ij}(y)\right]} with some kernel ${\cal{K}}$ depending on ${\cal{D}}^2$ (the covariant laplacian in the colour adjoint representation), and fulfilling \onelineequation{limitK}{\lim_{g\to0}{\cal{K}}^{ab}_{xy}[-{\cal{D}}^2]=\left(\frac{\delta^{ab}}{\sqrt{-\nabla^2}}\right)_{xy}.} In fact, all proposals of the VWF that will be confronted with numerical data in this paper are of the above form. 3.~It was suggested \cite{Greensite:1979yn,Halpern:1978ik,Kawamura:1996us} that for sufficiently long-wavelength, slowly varying gauge fields the VWF has the following, so-called \textit{dimensional-reduction} form: \onelineequation{DR}{\Psi_0[A]=\cN\exp\left(-\oh\mu\int d^3x\;\mbox{Tr}[F^2_{ij}(x)]\right)\quad\dots\quad\underline{\mbox{DR}}} This form, \textit{a.k.a.\/}\ the magnetically disordered vacuum, leads incorrectly \textit{e.g.\/}\ to exact Casimir scaling of potentials between coloured sources, so it cannot be valid for \textit{arbitrary} gauge fields.
The problem of finding the Yang--Mills VWF has been addressed by various techniques.\footnote{See \textit{e.g.\/}\ Sec.\ II of Ref.~\cite{Greensite:2011pj} and references therein; for the most recent work consult Ref.~\cite{Krug:2013yq}.} Some proposals for the VWF will be reviewed in Sec.~\ref{section2}. Then I will present (Sec.~\ref{section3}) a method for computing relative weights of various gauge-field configurations in numerical simulations of the Yang--Mills theory in the lattice formulation. Some results will be presented in Sec.~\ref{section4}. Sec.~\ref{section5} summarizes pluses and minuses of the present approach. \section{As You Like It (or As We Like It)}\label{section2} \sectionquote{which introduces some popular Ans\"atze and provides some justification for one that we like most.} Head-on attempts to solve Eqs.~(\ref{SchR}) and (\ref{Gauss}), \textit{e.g.\/}\ by weak-coupling expansion in powers of~$g$, quickly run into complicated, intractable expressions (see \cite{Krug:2013yq}). Some approaches tried instead to bridge the gap between the free-field limit (\ref{g0}) and the dimensional-reduction form of Eq.~(\ref{DR}) by educated guesses of the interpolating approximate vacuum wave functional. Almost 20 years ago, Samuel~\cite{Samuel:1996bt} proposed a simple expression of the type (\ref{GIVWF}) \onelineequation{Samuel}{\Psi_0[A]\; {\stackrel{}{=}}\; {\cal{N}}\exp\left[-\of{\displaystyle\int} d^3x\;d^3y\; F^a_{ij}(x)\;{\displaystyle \left(\frac{1}{\sqrt{-{\cal{D}}^2+m^2_0}}\right)^{ab}_{xy}}F^b_{ij}(y)\right]} and estimated with its use the $0^{++}$ glueball mass. However, there may be a problem with this Ansatz: the operator $(-{\cal{D}}^2)$ has a positive definite spectrum, finite with a lattice regularization, and lattice simulations indicate that its lowest eigenvalue $\lambda_0$ tends to infinity for typical configurations in the continuum limit. This is illustrated in Fig.~\ref{l0color}.
\onefigure{l0color}{0.5\textwidth}{$\lambda_0$ vs.\ $\beta$ from simulations of SU(2) lattice gauge theory in $(2+1)$ dimensions at various couplings and lattice volumes. The best fit to data is $\lambda_0\propto \beta^{-1.4}$, which differs from the expected $\beta^{-2}$ dependence and indicates that $\lambda_0$ diverges in the continuum limit.}{t!} We therefore proposed to subtract from $(-{\cal{D}}^2)$ its lowest eigenvalue, resulting in the approximate VWF \cite{Greensite:2007ij}: \onelineequation{GO}{\Psi_0[A]=\cN\exp\left[-\of\int d^3x\;d^3y\;F^a_{ij}(x)\left(\frac{1}{\sqrt{-{\cal{D}}^2[A]-\lambda_0+m^2}}\right)^{ab}_{xy} F^b_{ij}(y) \right]\quad\dots\quad\underline{\mbox{GO}}} with $m$ being a free (mass) parameter. This expression is assumed to be regularized by a lattice cut-off, and we use the simplest discretized form of $(-{\cal{D}}^2)$: \onelineequation{}{\left({-{\cal D}^2}\right)^{ab}_{xy}= \displaystyle\sum_{k=1}^3 \left[2\delta^{ab}\delta_{xy}- {\cal U}^{ab}_k(x)\delta_{y,x+\hat{k}}-{\cal U}^{\dagger ba}_k(x-\hat{k})\delta_{y,x-\hat{k}}\right],} where ${{\cal U}^{ab}_k(x)=\frac{1}{2}\mbox{Tr}\left[\sigma^a U_k(x) \sigma^b U^\dagger_k(x)\right]}$, and $U_k(x)$ are the usual link matrices in the fundamental representation. An expression analogous to Eq.~(\ref{GO}) in $(2+1)$ dimensions was demonstrated to be a fairly good approximation to the true ground state of the theory by: -- analytic arguments \cite{Greensite:2007ij}, -- direct computation of some physical quantities in ensembles of true Monte Carlo configurations and those distributed according to the square of the GO VWF \cite{Greensite:2007ij,Greensite:2010tm}, and -- consistency of measured probabilities of test configurations with expectations based on the proposed VWF~\cite{Greensite:2011pj}. The most sophisticated attempt to compute the VWF analytically in $(2+1)$ dimensions was undertaken by Karabali, Kim, and Nair~\cite{Karabali:1998yq}.
They reformulated the theory with the help of new gauge-invariant variables, and solved the Yang--Mills Schr\"odinger equation approximately for the VWF in their terms. They argue that, when expressed back in the old variables, this VWF assumes the form: \onelineequation{KKNngi}{\Psi_0[A]=\cN\exp\left[-\oh\int d^2x\;d^2y\;B^a(x)\left(\frac{1}{\sqrt{-\nabla^2+m^2}+m}\right)_{xy} B^a(y) \right].} This is by itself \textit{not} gauge-invariant, but can be made such along the lines of Eqs.~(\ref{GIVWF}) and (\ref{GO}) by replacing the ordinary laplacian by the covariant laplacian in the adjoint representation, with a~$\lambda_0$~subtraction: \onelineequation{KKN3}{\Psi_0[A]=\cN\exp\left[-\of\int d^3x\;d^3y\;F^a_{ij}(x)\left(\frac{1}{\sqrt{-{\cal{D}}^2[A]-\lambda_0+m^2}+m}\right)^{ab}_{xy} F^b_{ij}(y) \right]\quad\dots\quad\underline{\mbox{KKN}}} Such an expression, however, has never been proposed by the authors of Ref.~\cite{Karabali:1998yq} in their papers, and represents only yet another interpolating VWF of the type~(\ref{GIVWF}) that can be confronted with our numerical data. \section{Measure for Measure}\label{section3} \sectionquote{wherein is shown how one can measure \textbf{\textit{``nothing''}} and learn from it \textbf{\textit{something}}.} The squared VWF could, at least in principle, be computed on a lattice by evaluating the path integral (written below only symbolically, with $\delta_\mathrm{t.g.f.}$ imposing the temporal gauge): \onelineequation{PI}{\Psi^2_0[U']=\frac{1}{Z}\int [DU]\;\delta_\mathrm{t.g.f.}\;\prod_{\mathbf{x},i}\delta[U_i(\mathbf{x},0)-U'(\mathbf{x})]e^{-S[U]}.} \noindent An integral of this type is, however, difficult to estimate numerically, because of the $\delta$-functions. The method that enables one to compute -- simply and directly -- ratios $\Psi^2[U^{(n)}]/\Psi^2[U^{(m)}]$ for some test configurations was proposed by Greensite and Iwasaki \cite{Greensite:1989aa}.
Their \textit{{relative-weight method}} consists of the following: Take a finite set of gauge-field configurations ${\mathcal{U}}=\lbrace U_i^{(j)}(\mathbf{x}),j=1,2,\dots,M\rbrace$ (assuming they lie close to each other in configuration space). One puts \textit{e.g.\/}\ the $j=1$ configuration on the $t=0$ plane, and runs Monte Carlo simulations with the usual update algorithm (\textit{e.g.\/}\ heat-bath) for all spacelike links at $t\ne0$ and for timelike links. The spacelike links at $t=0$ are, after a certain number of sweeps, updated all at once, selecting one configuration from the set $\mathcal{U}$ at random and accepting/rejecting it via the Metropolis algorithm. Then\ \onelineequation{ratio}{\frac{\Psi^2[U^{(n)}]}{\Psi^2[U^{(m)}]}=\lim_{N_\mathrm{tot}\to\infty}\frac{N_n}{N_m} =\lim_{N_\mathrm{tot}\to\infty}\frac{N_n/N_\mathrm{tot}}{N_m/N_\mathrm{tot}},} where $N_n$ ($N_m$) is the number of times the $n$-th ($m$-th) configuration is accepted and $N_\mathrm{tot}$ is the total number of updates. The VWF can always be written in the form \onelineequation{}{\Psi^2[U]={\mathcal{N}}e^{-R[U]}.} According to Eq.~(\ref{ratio}), the measured values of $-\log(N_n/N_\mathrm{tot})$ should fall on a straight line with unit slope as functions of $R[U^{(n)}]$, see Fig.~\ref{prob_k_2_2_l20_c_prob_k_2_5_l20_c} for examples. \twofigures{prob_k_2_2_l20_c}{prob_k_2_5_l20_c}{0.48\textwidth}{$-\log(N_n/N_\mathrm{tot})$ (shifted by constant) vs.\ $R_n=\mu\kappa n$ for ${\mathcal{U}}_\mathrm{NAC}$ [\textit{cf.\/}\ Eq.~(\protect\ref{NAC}) below] with $\kappa=0.14$, on $20^4$ lattice. The values of $\mu$ come out to be $4.06 (4)$ and $1.60 (2)$ for $\beta=2.2$ and $2.5$, respectively.}{b!} We have performed numerical simulations using the relative-weight method for two kinds of simple gauge-field configurations.
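In an idealized setting where each configuration $U^{(j)}$ has a fixed effective weight $R[U^{(j)}]$ (in the actual simulation $R$ fluctuates with the rest of the lattice), the Metropolis step of the relative-weight method can be sketched as follows; the Markov chain then visits configuration $j$ with frequency $\propto e^{-R_j}$, so that $-\log(N_j/N_\mathrm{tot})$ is linear in $R_j$ with unit slope.

```python
import numpy as np

def relative_weights(R, n_updates=200_000, seed=0):
    """Idealized sketch of the relative-weight method: propose a random
    member of the set for the t=0 plane and accept it with probability
    min(1, exp(-(R_new - R_old))); the occupation frequencies N_j/N_tot
    then estimate Psi^2[U^(j)] up to a common normalization."""
    rng = np.random.default_rng(seed)
    R = np.asarray(R, dtype=float)
    j = 0
    counts = np.zeros(len(R), dtype=int)
    for _ in range(n_updates):
        k = rng.integers(len(R))            # uniform (symmetric) proposal
        if rng.random() < np.exp(min(0.0, R[j] - R[k])):
            j = k                           # Metropolis accept
        counts[j] += 1
    return counts / n_updates

# for Psi^2 ∝ exp(-R), the ratio N_n/N_m should approach exp(R_m - R_n)
R = np.array([0.0, 0.5, 1.0, 1.5])
freq = relative_weights(R)
```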
\medskip \textbf{\textit{1.~Non-abelian constant configurations\/}:} \onelineequation{NAC}{{\cal U}_\mathrm{NAC}=\left\{U_k^{(n)}(x)=\sqrt{1-\left(a^{(n)}\right)^2}\mathbf{1}+ia^{(n)}\bm{\sigma}_k\right\},} where \onelineequation{NAC_a}{a^{(n)}=\left(\frac{\kappa}{6L^3}n\right)^{1/4},\qquad n=1,2,\dots, 10.} For NAC configurations one expects: \onelineequation{NACfit}{-\log(N^{(n)}/N_\mathrm{tot})=R^{(n)}+ \mbox{const.}=\kappa n\times{\mu}+ \mbox{const.}} The constant $\kappa$, regulating the amplitudes of these configurations, is chosen so that the ratio $\Psi^2[U^{(10)}]/\Psi^2[U^{(1)}]$ is not too small, ${\cal{O}}(10^{-4}\div10^{-3})$, otherwise the Metropolis updates would hardly ever accept configurations with higher $n$. \medskip \textbf{\textit{2.\ Abelian plane-wave configurations}:} \onelineequation{APW}{{\cal U}_\mathrm{APW}=\left\{U_1^{(j)}(x)=\sqrt{1-\left(a^{(j)}_\textbf{\textit{n}}(x)\right)^2}\mathbf{1}+ia^{(j)}_\textbf{\textit{n}}(x)\bm{\sigma}_3,\quad U_2^{(j)}(x)=U_3^{(j)}(x)=\mathbf{1}\right\}, } where $\textbf{\textit{n}}=(n_1,n_2,n_3)$, and \onelineequation{APW_a}{a^{(j)}_\textbf{\textit{n}}=\sqrt{\frac{\alpha_\textbf{\textit{n}}+\gamma_\textbf{\textit{n}}j}{L^3}}\cos\left(\frac{2\pi}{L}\textbf{\textit{n}}\cdot\textbf{\textit{x}}\right),\qquad j=1,2,\dots, 10.} Again, the pairs $(\alpha_\textbf{\textit{n}},\gamma_\textbf{\textit{n}})$ characterizing abelian plane waves with wavenumber $\textbf{\textit{n}}$ in the above equations were carefully selected so that the actions of plane waves with different $j$ were not too different (to ensure reasonable Metropolis acceptance rates in the method described above).
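The NAC links of Eqs.~(\ref{NAC}) and (\ref{NAC_a}) are easy to construct explicitly; the sketch below builds one such SU(2) link with numpy and verifies that it is indeed special unitary.

```python
import numpy as np

# Pauli matrices sigma_1, sigma_2, sigma_3
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

def nac_link(n, kappa, L, k):
    """SU(2) link U_k = sqrt(1 - a^2) 1 + i a sigma_k with the amplitude
    a^(n) = (kappa n / (6 L^3))^(1/4) of Eq. (NAC_a)."""
    a = (kappa * n / (6 * L**3)) ** 0.25
    return np.sqrt(1 - a**2) * np.eye(2) + 1j * a * sigma[k]

# example: the n=1 configuration used with kappa = 0.14 on a 20^4 lattice
U = nac_link(n=1, kappa=0.14, L=20, k=0)
# U U^dagger = 1 and det U = (1 - a^2) + a^2 = 1, so U is in SU(2)
```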
The expectation for APW configurations is \onelineequation{APWfit}{\displaystyle-\log(N^{(j)}_\textbf{\textit{n}}/N_\mathrm{tot})=R^{(j)}_\textbf{\textit{n}}+ \mbox{const.}= \oh(\alpha_\textbf{\textit{n}}+\gamma_\textbf{\textit{n}}j)\times{\omega(\textbf{\textit{n}})}+ \mbox{const.}} \section{The Comedy of Errors}\label{section4} \sectionquote{which showcases some results, discusses pitfalls, and compares the results to the Ans\"atze.} Our aim is to compare computed relative weights of non-abelian constant and abelian plane-wave configurations with predictions of the DR, GO, and KKN-inspired wave functionals discussed in Section \ref{section2}. NAC configurations are not useful for that purpose. However, they served for ``calibrating'' our computer code by comparison with the results of Ref.~\cite{Greensite:1989aa}, obtained on lattices of much smaller size. For a number of $\beta$ values we determined the slope $\mu$ in Eq.~(\ref{NACfit}). Our data from $16^4$ and $20^4$ lattices clearly agree with those of Ref.~\cite{Greensite:1989aa} from $6^4$ and $8^4$. At small $\beta$ the strong-coupling prediction $\mu(\beta)=\beta$ is confirmed, in the scaling window $\mu(\beta)$ behaves as a physical quantity with the dimension of inverse mass: \onelineequation{mu_phys}{\mu(\beta)f(\beta)=\mu_\mathrm{phys}\approx 0.0269(3),} where \onelineequation{fbeta}{f(\beta)=\left(\frac{6\pi^2\beta}{11}\right)^\frac{51}{121}\exp\left(-\frac{3\pi^2\beta}{11}\right).} \onefigure{mu_vs_beta_NAC_c}{0.5\textwidth}{Variation of $\mu$ with $\beta$, estimated from data for NAC configurations on $16^4$ and $20^4$ lattices.}{t!} For a particular set of abelian plane waves with the wavenumber $\textbf{\textit{n}}$ one can determine the slope $\omega(\textbf{\textit{n}})$ from the measured values of relative weights of individual plane waves by a fit of the form~(\ref{APWfit}).
The expected linear dependence was observed with all our data at all couplings, wave numbers, and parameter choices; for examples see Fig.~\ref{prob_k_a1_2p4_k010_l24_c_prob_k_a1_2p5_k015_l30_c}. \twofigures{prob_k_a1_2p4_k010_l24_c}{prob_k_a1_2p5_k015_l30_c}{0.48\textwidth}{$-\log(N^{(j)}_\textbf{\textit{n}}/N_\mathrm{tot})$ vs.\ $\oh(\alpha_\textbf{\textit{n}}+\gamma_\textbf{\textit{n}}j)$ for ${\mathcal{U}}_\mathrm{APW}$ [see Eq.~(\protect\ref{APW})]. }{b!} However, one could imagine that the dependence is linear only \textit{locally}, in a certain narrow window, and that the slope $\omega(\textbf{\textit{n}})$ could depend strongly on the choice of parameters $(\alpha_\textbf{\textit{n}},\gamma_\textbf{\textit{n}})$. This does not seem to be the case, as exemplified in Fig.~\ref{prob_for_more_data_sets}. \onefigure{prob_for_more_data_sets}{0.5\textwidth}{The slope determined from $-\log(N_j/N_\mathrm{tot})$ does not strongly depend on the choice of parameters $\alpha$ and $\gamma$ for abelian plane waves. Eight sets of configurations at a given $\beta$ and wave-number are superimposed here; the last configuration in one set had the same amplitude as the first configuration in the next set. The measured mean values of $-\log(N_j/N_\mathrm{tot})$ were renormalized so that the value for the last configuration in one set coincided with that of the first configuration in the next set. The straight-line fit shown in the figure comes from the first data set (red open squares). The slopes obtained from the other sets differ by at most 1\% (ranging from 1.523 to 1.539).}{t!} The dependence of $\omega(\textbf{\textit{n}})$ on $\textbf{\textit{n}}$ can now be compared with expectations based on the DR, GO, and KKN-inspired VWFs.
We performed the following fits: \onelineequation{fits}{{\omega(\textbf{\textit{n}})}=\left\{ \begin{array}{l c l} a+{b}k^2(\textit{\textbf{n}}) & \qquad\dots\qquad & \underline{\mbox{DR}},\\[2mm] {\displaystyle{c}\frac{k^2(\textit{\textbf{n}})}{\sqrt{k^2(\textit{\textbf{n}})+{m}^2}}} & \dots & \underline{\mbox{GO}},\\[2mm] {\displaystyle{c}\frac{k^2(\textit{\textbf{n}})}{\sqrt{k^2(\textit{\textbf{n}})+{m_1}^2}+{m_2}}} & \dots & \underline{\mbox{inspired by KKN}}, \end{array}\right.} where \onelineequation{momentum}{k^2(\textit{\textbf{n}})=2\sum_i\left(1-\cos\frac{2\pi n_i}{L}\right).} In the KKN-inspired fit we introduced two fit mass parameters, $m_1$ and $m_2$, instead of just $m$, \textit{cf.}\ Eq.~(\ref{KKN3}). We then performed a fit with both parameters free, and a constrained fit with $m_1=m_2$. It turned out that the former had a lower $\chi^2$ and the preferred value of $m_1$ was close to 0. Prototype plots for fits of the form (\ref{fits}) are displayed in Fig.~\ref{omega_2_5_l30_c_omega_2_5_l30_KKN_c} for the DR and GO forms (left panel), and for the KKN-inspired forms (right panel). All forms in Eq.~(\ref{fits}) describe the data reasonably at low plane-wave momenta, but none of them is satisfactory for larger momenta. \twofigures{omega_2_5_l30_c}{omega_2_5_l30_KKN_c}{0.48\textwidth}{$\omega(\textbf{\textit{n}})$ vs.\ $k(\textbf{\textit{n}})$ for ${\cal U}_\mathrm{APW}$ sets, with the DR and GO fits (left), and ``KKN-inspired'' fits (right).}{t!} The agreement with the data greatly improves at all couplings by adding another parameter $d$ to the GO form: \onelineequation{best}{{\omega(\textbf{\textit{n}})}={c}\frac{k^2(\textit{\textbf{n}})}{\sqrt{k^2(\textit{\textbf{n}})+{m}^2}}\left[1+{d}k(\textit{\textbf{n}})\right],} see Fig.~\ref{omega_2_5_l30_guess_c}.
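To illustrate how such fits are set up, the sketch below computes the lattice momentum of Eq.~(\ref{momentum}) and performs a closed-form least-squares fit of the (linear) DR form in Eq.~(\ref{fits}); the GO and KKN-inspired forms are nonlinear in $m$ and would require an iterative optimizer instead. The lattice size, wave numbers, and "measured" slopes here are synthetic stand-ins, not our actual data:

```python
import math

L = 30  # lattice extent (example value)

def k2(n, L=L):
    """Lattice momentum squared of Eq. (momentum) for a wave number n = (n1, n2, n3)."""
    return 2.0 * sum(1.0 - math.cos(2.0 * math.pi * ni / L) for ni in n)

def fit_dr(ksq, omega):
    """Closed-form least-squares fit of the DR form omega = a + b*k^2 of Eq. (fits)."""
    N = len(ksq)
    sx, sy = sum(ksq), sum(omega)
    sxx = sum(x * x for x in ksq)
    sxy = sum(x * y for x, y in zip(ksq, omega))
    b = (N * sxy - sx * sy) / (N * sxx - sx * sx)
    a = (sy - b * sx) / N
    return a, b

# Synthetic stand-in for the measured slopes omega(n) (real data are not reproduced here).
wave_numbers = [(1, 0, 0), (1, 1, 0), (2, 0, 0), (2, 1, 0), (2, 2, 0), (3, 0, 0)]
ksq_vals = [k2(n) for n in wave_numbers]
omega_vals = [0.02 + 0.75 * x for x in ksq_vals]  # exactly of DR form, for the demo
a_fit, b_fit = fit_dr(ksq_vals, omega_vals)
```

At small momenta $k^2(\textbf{\textit{n}})\approx(2\pi|\textbf{\textit{n}}|/L)^2$, so all three forms of Eq.~(\ref{fits}) become hard to distinguish, which is consistent with the low-momentum agreement seen in the fits.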
This would correspond in the continuum limit to the following choice of the kernel in~(\ref{GIVWF}): \onelineequation{guess-kernel}{{\cal{K}}^{ab}_{xy}[-{\cal{D}}^2]\propto\left(\frac{1}{\sqrt{-{\cal{D}}^2-\lambda_0+m_\mathrm{phys}^2}}+{d_\mathrm{phys}} \sqrt{\frac{-{\cal{D}}^2-\lambda_0}{{-{\cal{D}}^2-\lambda_0+m_\mathrm{phys}^2}}}\right)^{ab}_{xy}.} \onefigure{omega_2_5_l30_guess_c}{0.5\textwidth}{$\omega(\textbf{\textit{n}})$ vs.\ $k(\textbf{\textit{n}})$ for ${\cal U}_\mathrm{APW}$ sets, with the best fit of the form (\protect\ref{best}).}{b!} \begin{figure}[p] \centering \includegraphics[width=0.5\textwidth]{NAC_vs_APW_for_Mainz} \caption{The combination $(2c/m)f(\beta)$ of the best fit to the data, Eq.~(\protect\ref{best}). Also displayed is $\mu f(\beta)=0.0269(3)$ derived from non-abelian constant configurations.}\label{scaling1} \end{figure} \begin{figure}[p] \centering \begin{tabular}{c c} \includegraphics[width=0.48\textwidth]{c_vs_beta_c}&\includegraphics[width=0.48\textwidth]{m_vs_beta_c} \end{tabular} \caption{The parameter $c$ (left), and the rescaled parameter $m/f(\beta)$ (right) of the best fit, Eq.~(\protect\ref{best}), vs.~$\beta$.}\label{scaling2} \end{figure} \begin{figure}[p] \centering \includegraphics[width=0.5\textwidth]{d_vs_beta_c} \caption{The rescaled parameter $d f(\beta)$ of the best fit, Eq.~(\protect\ref{best}), vs.~$\beta$.}\label{scaling3} \end{figure} For small-amplitude constant configurations the forms of the VWF in Eqs.~(\ref{DR}) and (\ref{GO}) coincide. It is therefore an important consistency check whether the value of $\mu_\mathrm{NAC}$ determined from sets of non-abelian constant configurations agrees with the appropriate combination of parameters obtained for abelian plane waves. In particular, one expects: \onelineequation{consistency}{\mu_\mathrm{NAC}=\left(\frac{2c}{m}\right)_\mathrm{APW}.} As seen in Fig.~\ref{scaling1}, our results clearly pass this nontrivial check.
If the parameters of the best fit, Eq.~(\ref{best}), correspond to physical quantities in the continuum limit, they should scale correctly when multiplied by the appropriate power of the function $f(\beta)$, Eq.~(\ref{fbeta}). The behaviour of $[2c(\beta)/m(\beta)]f(\beta)$, $c(\beta)$, $m(\beta)/f(\beta)$, and $d(\beta)f(\beta)$ vs.\ the coupling~$\beta$ is displayed in Figs.~\ref{scaling1}, \ref{scaling2} and \ref{scaling3}. While the scaling of $(2c/m)$ is almost perfect (Fig.~\ref{scaling1}), it is not convincing for $c$ and $m$ separately (Fig.~\ref{scaling2}), though the variation over the range $\beta = 2.2\div2.5$ is not large. By contrast, $d(\beta)f(\beta)$ falls considerably over the same range (Fig.~\ref{scaling3}). The data thus indicate that the physical value of $d$ vanishes in the continuum limit. This suggests that the form of the VWF, Eq.~(\ref{GO}), proposed in Ref.~\cite{Greensite:2007ij}, might be recovered in the continuum limit. \section{All's Well That Ends Well (?)}\label{section5} \sectionquote{wherein some optimistic and pessimistic conclusions are formulated.} Let us group the messages of this work into two categories: \begin{center} \begin{tabular}{p{0.46\textwidth} p{0.0\textwidth} p{0.46\textwidth}} \centerline{\fbox{\textbf{\textit{Pluses}}}}&&\centerline{\fbox{\textbf{\textit{Minuses}}}}\\ There is a method to measure (on a lattice) relative probabilities of various gauge-field configurations in the Yang--Mills vacuum. && The method works reasonably well for configurations rather close in configuration space.\\[3mm] Both for nonabelian constant and for long-wavelength abelian plane-wave configurations the measured probabilities are consistent with the dimensional reduction form, and the coefficients $\mu$ for these sets agree.
&& Neither the dimensional-reduction form of the vacuum wave functional, nor our proposal, nor the forms inspired by the work of Karabali \textit{et al.\/}, describe the data satisfactorily for larger plane-wave momenta. \\[3mm] The data are nicely described by a modification of our proposal, and the correction term may vanish in the continuum limit.&& The configurations tested so far, both nonabelian constant and abelian plane-wave configurations, are rather atypical, not representative of true vacuum fields.\\[3mm] && One badly needs a method of generating configurations distributed according to the proposed vacuum wave functionals.\\[3mm] \end{tabular} \end{center} We have presented here only a selection of our results; for more details, consult Ref.~\cite{Greensite:2013zz}. Preliminary results were also presented at other conferences \cite{Greensite:2013nb}. \begin{acknowledgments} \sectionquote{wherein I thank all who should be thanked, sincerely hoping nobody is forgotten.} I am grateful to the organizers for arranging this most pleasant and inspiring workshop and for inviting me to participate and present this talk. I acknowledge cooperation with Hugo Reinhardt and Adam Szczepaniak which resulted in Ref.~\cite{Greensite:2011pj}. Pierre van Baal's sentence \textit{``Who thought so much can be said about nothing.''} in the concluding section of his lecture on the QCD vacuum at the Lattice'97 conference \cite{vanBaal:1997vi} inspired the subtitle of my talk. I was lucky that William Shakespeare had written enough comedies to choose my subtitle and section names from. This research was supported in part by the U.S.\ Department of Energy under Grant No.\ DE-FG03-92ER40711 (J.G.), by the Slovak Research and Development Agency under Contract No.\ APVV--0050--11, and by the Slovak Grant Agency for Science, Project VEGA No.\ 2/0072/13 (\v{S}.O.). In initial stages of this work, \v{S}.O.\ was also supported by ERDF OP R\&D, Project meta-QUTE ITMS 2624012002.
\end{acknowledgments}
\section{Introduction} Eclipsing binaries (EB) with a white dwarf or hot subdwarf component belong among the more curious stellar systems. Their typical light curves, with a deep and narrow primary minimum and a strong reflection effect, are a clear signature allowing simple identification of these unique objects. The difference between primary and secondary surface temperatures is typically 25~000 -- 30~000~K. Their short orbital periods, up to 0.5 days, are very sensitive to any changes caused by mass transfer between components, magnetic-field changes in a late-type secondary, or the presence of an unseen third body. Moreover, many low-mass stars often show phenomena associated with magnetic activity, such as flares and star spots. The small size of the binary components enables us to determine the eclipse times of this type of binary system with high precision (down to seconds). Therefore, very small amplitude variations in the orbital period can be detected by analyzing the observed-minus-calculated (O-C) diagram or eclipse-time-variation (ETV) curve. This makes them very promising targets in the search for circumbinary brown dwarfs or giant planets through analysis of the light-time effect (LITE). Several discoveries of planetary-mass companions orbiting post-common envelope binaries (PCEB) and cataclysmic variables (CV) were announced in the past; for example, RR~Cae \citep{2012MNRAS.422L..24Q}, DE CVn \citep{2018ApJ...868...53H}, HW~Vir \citep{2012A&A...543A.138B}, and HS~0705+6700 \citep{2012A&A...540A...8B, 2013MNRAS.436.1408Q}. The origin of dwarf binaries and their multiple systems is still an unresolved question in star formation theory. The discovery of circumbinary objects, planets, or brown dwarfs, as well as their identification and characterization, is also highly relevant to recent exoplanet studies in low-mass multiples \citep{2013MNRAS.429L..45P, 2014MNRAS.438..307H}.
Here, we report on a long-term mid-eclipse time campaign of three similar EBs containing a subdwarf or white dwarf primary component (sdB or WD). All these systems are relatively well-known northern hemisphere objects whose uninterrupted observations span almost 20 years. Their short orbital periods are up to four hours, and important spectroscopic observations have been published for all of them. This paper is a continuation of our previous period study of low-mass eclipsing binaries presented in \cite{2016A&A...587A..82W, 2018A&A...620A..72W}. \begin{table*} \caption{Observational log of selected eclipsing binaries.} \label{obs} \begin{tabular}{llcccccc} \hline\hline\noalign{\smallskip} System & Abbreviation & Type & Observed & Exposure & Filter & Number & Number \\ & used in paper & & since & time [s] & & of frames & of minima \\ \noalign{\smallskip}\hline \noalign{\smallskip} SDSS J143547.87+373338.5 & S1435 & PCEB & Apr 2012 & 30 & C & 1210 & 30 \\ NSVS 07826147 & N782 & sdB+M & Feb 2012 & 30 & R & 2810 & 50 \\ NSVS~14256825 & N1425 & PCEB & Jul 2009 & 30 & R & 2118 & 41 \\ \noalign{\smallskip}\hline \end{tabular} \end{table*} \section{Photometry of eclipses} Since 2009, we have accumulated over 6~000 photometric measurements, mostly during primary eclipses, and derived 121 new precise times of minimum light for all three systems. The CCD photometry was obtained primarily at the Ond\v{r}ejov\ Observatory, Czech Republic, using the Mayer 0.65-m ($f/3.6$) reflecting telescope with the CCD camera G2-3200 and a photometric R filter, or without a filter. Such a long-term monitoring campaign with identical equipment is not frequent in current photometric surveys (see Table~\ref{obs} for details of our observations). A standard calibration (dark frame, flat field) was applied to the obtained CCD frames. {\sc Aphot}, a synthetic aperture photometry and astrometry package, was routinely used for our time series.
Alternatively, {\sc C-Munipack}\footnote{\url{http://c-munipack.sourceforge.net/}.} was used by observers to reduce time series over several nights. Differential photometry was carried out using selected nearby comparison and check stars. Concerning the other photometric procedures, observational circumstances, and data handling (e.g., time synchronization during observation, accurate mid-eclipse time determination, conversion to barycentric Julian date in barycentric dynamical time (BJD$_{\rm TDB}$), and adopted weighting of individual times), we invite the reader to consult our last paper, \cite{2018A&A...620A..72W}. \section{Eclipse time variations} An orbiting circumbinary body in an eclipsing binary can be detected through the well-known light-time effect (LITE), which produces delays or advances in the times of minimum light. This effective tool was introduced by \cite{1952ApJ...116..211I, 1959AJ.....64..149I}, who also described a simple fitting procedure for the elements of the light-time orbit. Suitable equations for calculating the LITE were presented by \cite{1990BAICz..41..231M}. An interesting review of the many applications of popular O-C diagrams in various astrophysical contexts can be found in \citet{2005ASPC..335.....S}. There are seven independent variables to be determined in this procedure: the orbital period of the binary, $P_s$, the orbital period of the third body, $P_3$, the semi-amplitude of the LITE, $A$, the eccentricity of the outer orbit, $e_3$, and the periastron passage time of the third body, $T_3$. The zero epoch is given by $T_0$, and the corresponding position of the periastron of the third-body orbit is given by $\omega_3$.
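For concreteness, the LITE contribution to the eclipse timings can be sketched as follows, using Irwin's classical formulation \citep{1952ApJ...116..211I}. The parameter names mirror those in the text; the numerical details (e.g., the Newton iteration for Kepler's equation) are our own illustrative choices, not the fitting code used in this work:

```python
import math

AU_LIGHT_DAYS = 499.004784 / 86400.0  # light-travel time across 1 au, in days

def lite_delay(t, P3, T3, a12sini, e3, omega3_deg):
    """Light-time delay (days) of the eclipsing pair caused by a third body.

    P3, T3 ..... period and periastron passage time of the third body (days);
    a12sini .... projected semi-major axis of the binary's orbit around the
                 common barycenter (au);
    e3, omega3_deg ... eccentricity and argument of periastron of the outer orbit.
    """
    M = 2.0 * math.pi * (((t - T3) / P3) % 1.0)       # mean anomaly
    E = M
    for _ in range(50):                                # Newton iteration for Kepler's equation
        E -= (E - e3 * math.sin(E) - M) / (1.0 - e3 * math.cos(E))
    nu = 2.0 * math.atan2(math.sqrt(1.0 + e3) * math.sin(E / 2.0),
                          math.sqrt(1.0 - e3) * math.cos(E / 2.0))  # true anomaly
    w = math.radians(omega3_deg)
    # Irwin's delay: (a12 sin i / c) * [(1 - e^2)/(1 + e cos nu) * sin(nu + w) + e sin w],
    # where (1 - e^2)/(1 + e cos nu) = 1 - e cos E.
    return a12sini * AU_LIGHT_DAYS * ((1.0 - e3 * math.cos(E)) * math.sin(nu + w)
                                      + e3 * math.sin(w))
```

For a nearly circular outer orbit the semi-amplitude reduces to $a_{12}\sin i$ expressed in light-travel time; e.g., $a_{12}\sin i = 0.109$~au corresponds to about 54~s.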
\begin{table} \caption{New minima timings of S1435.} \label{m1435} \begin{tabular}{clcc} \hline\hline\noalign{\smallskip} BJD$_{\rm TDB}$ -- & Error & Epoch & Weight \\ 24 00000 & [day] & & \\ \noalign{\smallskip}\hline \noalign{\smallskip} 56026.386088 & 0.0001 & 14946.0 & 5 \\ 56052.517278 & 0.0001 & 15154.0 & 5 \\ 56055.406808 & 0.0001 & 15177.0 & 5 \\ 56141.338248 & 0.0001 & 15861.0 & 5 \\ 56398.379163 & 0.00001 & 17907.0 & 10 \\ 56436.445252 & 0.0001 & 18210.0 & 5 \\ 56642.731207 & 0.00001 & 19852.0 & 10 \\ 56711.451305 & 0.00001 & 20399.0 & 10 \\ 56718.486595 & 0.00001 & 20455.0 & 10 \\ 56827.408563 & 0.00001 & 21322.0 & 10 \\ 57467.498042 & 0.0001 & 26417.0 & 5 \\ 57482.573733 & 0.00001 & 26537.0 & 10 \\ 57516.368333 & 0.0001 & 26806.0 & 5 \\ 57531.444162 & 0.00001 & 26926.0 & 10 \\ 57725.669531 & 0.00001 & 28472.0 & 10 \\ 57780.570243 & 0.00001 & 28909.0 & 10 \\ 57980.323354 & 0.00001 & 30499.0 & 10 \\ 58171.533757 & 0.00001 & 32021.0 & 10 \\ 58270.405389 & 0.0001 & 32808.0 & 5 \\ 58337.366630 & 0.00001 & 33341.0 & 10 \\ 58529.456525 & 0.00001 & 34870.0 & 10 \\ 58530.712795 & 0.0001 & 34880.0 & 10 \\ 58532.471465 & 0.00001 & 34894.0 & 10 \\ 58550.688076 & 0.0001 & 35039.0 & 5 \\ 58627.574378 & 0.00001 & 35651.0 & 10 \\ 58663.504759 & 0.00001 & 35937.0 & 10 \\ 58933.360318 & 0.0001 & 38085.0 & 5 \\ 58976.451720 & 0.0001 & 38428.0 & 5 \\ 59074.318353 & 0.00002 & 39207.0 & 10 \\ 59215.527737 & 0.00002 & 40331.0 & 10 \\ \noalign{\smallskip}\hline \end{tabular} \end{table} Generally, the LITE method is more sensitive to companions on long-period orbits, and its semi-amplitude is proportional to the mass and period of the third body as $$ A \sim M_3\ P_3^{2/3}. $$ Moreover, low-mass binary components favor the detection of low-mass companions on short-period orbits \citep{2012AN....333..754P}.
On the other hand, in the case of a shorter orbital period of the third body (usually less than one year), small dynamical perturbations of the inner orbit can occur that also create additional changes in the observed times (see \cite{2011A&A...528A..53B, 2016MNRAS.455.4136B}). \subsection{SDSS J143547.87+373338.5} The detached eclipsing binary SDSS~J143547.87+373338.5 (also WD~1433+377, $G$ = 17\m0, Sp. DA+M4.5, GAIA parallax $5.45\pm 0.07$ mas) is a rather faint but relatively well-studied object with a short orbital period of about three hours. It belongs to the post-common envelope binary (PCEB) class and contains a white dwarf primary and a red dwarf secondary. As mentioned in the literature, S1435 is probably a pre-CV system lying just at the upper edge of the known period gap of cataclysmic variables. The short, eight-minute eclipses were discovered by \cite{2008ApJ...677L.113S}, who also estimated the first intervals of preliminary absolute parameters: $M_1$ = 0.35–0.58~M$_{\odot}$, $R_1$ = 0.0132–0.0178~R$_{\odot}$, $M_2$ = 0.15–0.35~M$_{\odot}$\ and $R_2$ = 0.17–0.32~R$_{\odot}$. The spectroscopic parameters were later improved by \cite{2009MNRAS.394..978P}, who found a consistent set of absolute elements: $M_1$ = 0.48–0.53~M$_{\odot}$, $R_1$ = 0.014–0.015~R$_{\odot}$, $M_2$ = 0.19–0.25~M$_{\odot}$\ and $R_2$ = 0.22–0.25~R$_{\odot}$. The last period study of S1435 was presented by \cite{2016ApJ...817..151Q}, who announced a rapid decrease of the orbital period at a rate of about $-8 \cdot 10^{-11}$ s s$^{-1}$. As an alternative scenario, they also proposed a LITE caused by an unseen brown dwarf orbiting the eclipsing pair with a period of 7.72~years. Our observations presented here cover a time span of about 25~000 epochs, which corresponds to 8.5~years.
Using our newly derived eclipse times listed in Table~\ref{m1435} together with those obtained by \cite{2008ApJ...677L.113S}, \cite{2009MNRAS.394..978P}, and \cite{2016ApJ...817..151Q}, we improved the LITE elements given in Table~\ref{t2}. The following linear light elements were used for epoch calculation: \begin{center} Pri.Min. = BJD 24 54148.70395(2) + 0.125630981(5) $\cdot\ E$. \end{center} \noindent A total of 48 accurate mid-eclipse times of primary minimum were included in our analysis. The computed LITE parameters and their internal errors of the least-squares fit are given in Table~\ref{t2}. The current O-C diagram is plotted in Fig.~\ref{s1435}, where the sinusoidal trend is clearly visible. The nonlinear prediction, corresponding to the fit parameters, is plotted as a continuous blue curve. One whole orbital period of the possible third body is now covered by CCD measurements. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{s1435.pdf} \caption[]{Current O-C diagram for the eclipse times of S1435. The blue sinusoidal curve represents the LITE with the short period of about 13~years and a well-defined semi-amplitude of 54~sec. The individual primary minima are denoted by circles. } \label{s1435} \end{figure} \begin{table*} \caption{LITE parameters for S1435 and N782 (with errors of the last digit in parentheses). 
} \begin{tabular}{cccc} \hline\hline\noalign{\smallskip} Element & Unit & S1435 & N782 \\ \noalign{\smallskip}\hline\noalign{\smallskip} $T_0$ & BJD-2400000 & 54148.70395(2) & 55611.92657(3) \\ $P_s$ & days & 0.125630981(5) & 0.161770446(2) \\ $P_3$ & days & 4765(85) & 3820(140) \\ $P_3$ & years & 13.0(0.3) & 10.5(0.4) \\ $e_3$ & -- & 0.05(4) & 0.0 \\ $A$ & days & 0.00063(2) & 0.000050(3) \\ $A$ & sec & 54.4(1.7) & 4.3(0.3) \\ $\omega_3$ & deg & 23.7(3.0) & 204.1(2.5) \\ $T_3$ & JD-2400000 & 54930(20) & 50155(20) \\ $a_{12}\sin i$ & au & 0.109 & 0.0087 \\ $\sum{w\ (O-C)^2}$ & day$^2$ & $ 3.4\cdot10^{-7}$ & $1.4\cdot10^{-7}$ \\ \noalign{\smallskip}\hline \end{tabular} \label{t2} \end{table*} \subsection{NSVS 07826147 = DD CrB} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{n782crb.pdf} \caption[]{Present O-C diagram for the eclipse times of N782. The sinusoidal curve represents the LITE with a short period of about ten years and a very small semi-amplitude of 4.3 sec. The individual primary minima are denoted by circles, the secondary by triangles. The first group of mid-eclipse times close to the cycle $-15~000$ derived by \cite{2014A&A...566A.128L} from the SuperWASP database was not taken into consideration due to the large scatter of these data.} \label{n782} \end{figure} \begin{table}[h!] \caption{New minima timings of N782.
} \label{m782} \begin{tabular}{llrc} \hline\hline\noalign{\smallskip} BJD$_{\rm TDB}$ -- & Error & Epoch & Weight \\ 24 00000 & [day] & & \\ \noalign{\smallskip}\hline \noalign{\smallskip} 55960.703707 & 0.00001 & 2156.0 & 10 \\ 55963.534637 & 0.0002 & 2173.5 & 0 \\ 55966.527447 & 0.00001 & 2192.0 & 10 \\ 56055.339417 & 0.00001 & 2741.0 & 10 \\ 56077.421107 & 0.0002 & 2877.5 & 0 \\ 56162.350527 & 0.0002 & 3402.5 & 0 \\ 56162.350487 & 0.0002 & 3402.5 & 0 \\ 56184.270457 & 0.00001 & 3538.0 & 10 \\ 56366.424034 & 0.00001 & 4664.0 & 5 \\ 56394.410294 & 0.00001 & 4837.0 & 10 \\ 56461.383233 & 0.00001 & 5251.0 & 10 \\ 56482.413413 & 0.00001 & 5381.0 & 10 \\ 56706.627268 & 0.00001 & 6767.0 & 10 \\ 56714.554008 & 0.00001 & 6816.0 & 10 \\ 56738.495997 & 0.00001 & 6964.0 & 10 \\ 56747.393407 & 0.00001 & 7019.0 & 10 \\ 56809.351486 & 0.00001 & 7402.0 & 10 \\ 56858.367895 & 0.00001 & 7705.0 & 10 \\ 56945.238653 & 0.00001 & 8242.0 & 10 \\ 57059.610360 & 0.00001 & 8949.0 & 10 \\ 57185.467748 * & 0.00001 & 9727.0 & 10 \\ 57503.346624 & 0.00001 & 11692.0 & 10 \\ 57764.605901 & 0.00001 & 13307.0 & 10 \\ 57853.417861 & 0.00001 & 13856.0 & 10 \\ 57892.404551 * & 0.00001 & 14097.0 & 10 \\ 57918.449511 & 0.00001 & 14258.0 & 10 \\ 57926.376312 & 0.00001 & 14307.0 & 10 \\ 57957.436203 & 0.00001 & 14499.0 & 10 \\ 57966.333591 & 0.00001 & 14554.0 & 10 \\ 57988.334321 & 0.00001 & 14690.0 & 10 \\ 57994.319881 & 0.00001 & 14727.0 & 10 \\ 58178.576424 & 0.00001 & 15866.0 & 10 \\ 58258.491005 & 0.00001 & 16360.0 & 10 \\ 58330.478886 & 0.00001 & 16805.0 & 10 \\ 58604.356211 & 0.00001 & 18498.0 & 10 \\ 58727.301773 & 0.00001 & 19258.0 & 10 \\ 58942.618229 & 0.00001 & 20589.0 & 10 \\ 58946.338959 & 0.00001 & 20612.0 & 10 \\ 58946.500699 & 0.00001 & 20613.0 & 10 \\ 58955.88340 \dag & 0.00001 & 20671.0 & 10 \\ 58956.04517 \dag & 0.00001 & 20672.0 & 10 \\ 58965.4278 ** & 0.0001 & 20730.0 & 5 \\ 58968.33972 \dag & 0.00001 & 20748.0 & 10 \\ 58969.31037 \dag & 0.00001 & 20754.0 & 10 \\ 58981.92845 \dag 
& 0.00001 & 20832.0 & 10 \\ 58982.25196 \dag & 0.00001 & 20834.0 & 10 \\ 59074.461113 & 0.00001 & 21404.0 & 10 \\ 59089.344004 & 0.00001 & 21496.0 & 10 \\ 59108.271134 & 0.00001 & 21613.0 & 10 \\ 59159.228956 \ddag & 0.00001 & 21928.0 & 10 \\ \noalign{\smallskip}\hline \end{tabular} \tablefoot{ * Vala\v{s}sk\'e Mezi\v{r}\'{\i}\v{c}\'{\i}, ** Veversk\'a B\'it\'y\v{s}ka, and \ddag\ MUO \\ observatories, Czech Republic, \dag\ \textit{TESS} photometry.} \end{table} The detached eclipsing binary NSVS~7826147 (also DD~CrB, FBS~1531 +381, 2MASS~J15334944 +3759282, CSS~6833, $V_{\rm max}$ = 13\m08, Sp. sdB+dM, GAIA parallax $1.9\pm 0.05$ mas) is a well-known northern and low-mass binary system with a very short orbital period ($P=0.16$ d). It was mentioned originally in the First Byurakan Spectral Sky Survey \citep{1990Afz....33..213A} as a blue stellar object. The eclipsing nature of the system was discovered by \cite{2007JSARA...1...13K} in the Northern Sky Variability Survey \citep[NSVS;][]{2004AJ....127.2436W}. \cite{2010Ap&SS.329..107L} measured 16 additional times of minimum and improved the orbital period of this binary ($P$ = 0\fd16177046). The first photometric and spectroscopic analysis of N782 was presented by \cite{2010ApJ...708..253F}, who derived the precise physical parameters of both components: the sdB primary mass is $M_1= 0.376\pm0.055$ M$_{\odot}$\ and its radius is $R_1= 0.166\pm0.007$ R$_{\odot}$, and the secondary has $M_2= 0.113\pm0.017$ M$_{\odot}$, $R_2= 0.152\pm0.005$ R$_{\odot}$, consistent with a main-sequence M5~star. \cite{2012A&A...538A..84B} determined a further seven times of minimum, and later \cite{2014A&A...566A.128L} provided additional timings from the SuperWASP database, extending the time span back to 2004. 
\cite{2015PKAS...30..289Z} were probably the first to announce the detection of a cyclical change in the period of this system; this was confirmed by \cite{2015AcPPP...2..183Z}, who reported that this periodic change could be caused by an unseen circumbinary object of mass greater than 4.7 M$_{\rm Jup}$\ with an orbital radius of 0.64~au, introducing a LITE effect of 0.00004 days (3.5 s). Neither publication states a period, but \cite{2015AcPPP...2..183Z} suggested 11~000 cycles, equivalent to 4.9~years. The most recent study of N782, based on long-term photometry and additional timings, was presented by \cite{2017ApJ...839...39L}. They also derived the precise absolute parameters of both components: $M_1$ = 0.442(12)~M$_{\odot}$, $R_1$ = 0.172(2)~R$_{\odot}$, $M_2$ = 0.124(5)~M$_{\odot}$\ and $R_2$ = 0.157(2)~R$_{\odot}$. They concluded that the orbital period of the system had remained constant for the past 12~years. Finally, this interesting PCEB object was also included in the {\sc MUCHFUSS} photometric campaign \citep{2018A&A...614A..77S} and in a new period study by \cite{2018A&A...611A..48P}, who also concluded that no significant variations are visible in the O-C diagram. The following linear light elements were derived in the last-mentioned paper: \begin{center} Pri.Min. = BJD 24 55611.92655(1) + 0\fd161770449(2) $\cdot\ E$. \end{center} \noindent All our new eclipse times are listed in Table~\ref{m782}. In addition, we used the high-precision data obtained by the Transiting Exoplanet Survey Satellite \citep[TESS,][]{2015JATIS...1a4003R}. Our target was observed in Sector~24 in 2-minute cadence mode during April and May 2020. We derived six new times from the beginning, middle, and end of this interval. They are also listed in Table~\ref{m782}. All TESS minima fit the $O-C$\ curve perfectly.
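Given linear light elements, the epoch and $O-C$ residual of an observed minimum follow directly; a minimal sketch using the elements quoted above (the first entry of Table~\ref{m782} serves as the example input):

```python
T0 = 2455611.92655   # reference primary minimum, BJD_TDB
P  = 0.161770449     # orbital period (days)

def epoch_and_oc(t_min):
    """Nearest-integer epoch E and O-C residual (days) for an observed minimum,
    relative to the linear ephemeris T0 + P*E."""
    E = round((t_min - T0) / P)
    return E, t_min - (T0 + E * P)

E, oc = epoch_and_oc(2455960.703707)  # first minimum of Table m782
```

The resulting residual is a few seconds, of the same order as the 4.3~s LITE semi-amplitude found for this system.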
Four additional mid-eclipse times of N782 were observed at Vala\v{s}sk\'e Mezi\v{r}\'{\i}\v{c}\'{\i}, Veversk\'a B\'it\'y\v{s}ka\ and Masaryk University observatories in the Czech Republic. Altogether, 284 reliable times of primary minimum light were included in our analysis; the shallow and less precise secondary eclipses were excluded due to their large scatter. The O-C diagram is shown in Fig.~\ref{n782}, and the computed LITE parameters and their internal errors of the least-squares fit are given in Table~\ref{t2}. The nonlinear prediction, corresponding to the fit parameters, is plotted as a continuous violet curve in Fig.~\ref{n782}. \subsection{NSVS 14256825 = V1828 Aql} The third eclipsing binary NSVS~14256825 (also V1828~Aql, 2MASS~J20200045+0437564, UCAC2~33483055, USNO-B1.0 0946-0525128, $G$ = 13\m2, Sp. sdOB+M, GAIA parallax $1.19\pm 0.06$ mas) is one of the well-known HW Vir-type systems with a short orbital period of 2.65 hours, containing a very hot subdwarf B or OB primary. Its light variability was found in the NSVS data \citep{2004AJ....127.2436W}. \cite{2007IBVS.5800....1W} performed multicolor CCD observations and derived the first mid-eclipse times of N1425. Later, \cite{2012MNRAS.423..478A} analyzed multicolor photometry and the radial velocity curve simultaneously using the Wilson-Devinney code, and they provided the following fundamental parameters of N1425: $M_1 = 0.419 \pm 0.070$ M$_{\odot}$, $M_2 = 0.109 \pm 0.023$ M$_{\odot}$, $R_1 = 0.188 \pm 0.010$ R$_{\odot}$, $R_2 = 0.162 \pm 0.008$ R$_{\odot}$, and $i = 82.5 \pm 0.3$ deg. They also classified N1425 as an sdOB + dM eclipsing binary. \cite{2010Ap&SS.329..113Q} and \cite{2011ASPC..451..155Z} found the first hints of a cyclic period change in this system. \cite{2012MNRAS.421.3238K} discovered that the orbital period of N1425 is rapidly increasing at a rate of about $12 \cdot 10^{-12}$ days per orbit.
On the other hand, \cite{2012A&A...540A...8B} reported that there may be a giant planet with a mass of roughly 12~M$_{\rm Jup}$\ in N1425. Moreover, \cite{2013ApJ...766...11A} revisited the O-C diagram of N1425 and explained the variations in the $O-C$\ curve by the presence of two circumbinary bodies with masses of 8.1 M$_{\rm Jup}$\ and 2.9 M$_{\rm Jup}$. \cite{2013MNRAS.431.2150W} presented a dynamical analysis of the orbital stability of the model suggested by \cite{2013ApJ...766...11A}. They found that the two-planet model in N1425 is unstable on timescales of less than a thousand years. Later, \cite{2014MNRAS.438..307H} concluded that the insufficient coverage of the timing data prevents reliable constraints. Next, \cite{2017AJ....153..137N} published times of minimum light and extended the time interval up to 2016. They ruled out the two-planet model and reported a cyclic change that was explained as the presence of a brown dwarf. However, their data still do not cover a full orbital cycle. Recently, \cite{2019RAA....19..134Z}, in their comprehensive period study based on numerous new mid-eclipse times, claimed that the cyclic change detected in N1425 could be explained by the LITE caused by the presence of a third body. The minimal mass was determined to be 14.15~M$_{\rm Jup}$, near the boundary between a giant planet and a brown dwarf. The long research history of this unique object is summarized in the last-mentioned paper. Table~\ref{m1425} contains new mid-eclipse times obtained mostly in Ond\v{r}ejov. Several earlier eclipse measurements were obtained at Masaryk University Observatory (MUO) during the summer of 2009. The 0.6-m reflecting telescope and the CCD camera SBIG ST-8 were used. One additional eclipse light curve of N1425 and a precise mid-eclipse time were obtained by PZ at San Pedro M\'artir Observatory, UNAM, Baja California, Mexico, with the 0.84-m telescope in August 2009 (JD 24~55050).
The following linear light elements were adopted for the calculation of epochs (the other columns of Table~\ref{m1425} are self-explanatory): \begin{center} Pri.Min. = BJD 24 54274.20917 + 0\fd110374106 $\cdot\ E$. \end{center} \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{n1425.pdf} \caption{Current O-C diagram for the eclipse times of N1425 (the last minimum was obtained in November~2020). The individual primary minima are denoted by circles, the secondary by triangles. The blue sinusoidal curve with a short period of about 14~years clearly fits all data up to epoch 40~000 (September~2019), but it does not follow the last mid-eclipse times measured after this date (red dashed curve). } \label{n1425} \vspace{5mm} \includegraphics[width=\columnwidth]{n1425-3.pdf} \caption[]{ O-C2 diagram for N1425 after subtraction of the sinusoidal term of the possible third body. The violet curve represents additional cyclic variations with a period of about 5 years and an amplitude of 4 sec. The present rapid period decrease is denoted by the red dashed curve.
} \label{n1425-3} \end{figure} \begin{table}[t] \caption{New minima timings of N1425.} \label{m1425} \begin{tabular}{llrc} \hline\hline\noalign{\smallskip} BJD$_{\rm TDB}$ -- & Error & Epoch & Weight \\ 24 00000 & [day] & & \\ \noalign{\smallskip}\hline \noalign{\smallskip} 55021.44152* & 0.0001 & 6770.0 & 5 \\ 55034.46565* & 0.0001 & 6888.0 & 5 \\ 55034.57601* & 0.0001 & 6889.0 & 5 \\ 55050.911367**& 0.00001 & 7037.0 & 10 \\ 55053.45005* & 0.0001 & 7060.0 & 5 \\ 55358.468971 & 0.0001 & 9823.5 & 0 \\ 55373.424740 & 0.00001 & 9959.0 & 10 \\ 55392.409097 & 0.00001 & 10131.0 & 10 \\ 55784.347812 & 0.00001 & 13682.0 & 10 \\ 55796.37871* & 0.00001 & 13791.0 & 5 \\ 55796.48924* & 0.00001 & 13792.0 & 5 \\ 56107.413167 & 0.00001 & 16609.0 & 10 \\ 56133.351109 & 0.00001 & 16844.0 & 10 \\ 56179.377131 & 0.00001 & 17261.0 & 10 \\ 56482.574878* & 0.00001 & 20008.0 & 10 \\ 56494.440068 & 0.0001 & 20115.5 & 0 \\ 56494.495258 & 0.00001 & 20116.0 & 10 \\ 56529.373489 & 0.00001 & 20432.0 & 10 \\ 56613.257809 & 0.00001 & 21192.0 & 10 \\ 56824.458511 & 0.0001 & 23105.5 & 0 \\ 56824.513711 & 0.00001 & 23106.0 & 10 \\ 56876.499861 & 0.00001 & 23577.0 & 10 \\ 57219.542209 & 0.00001 & 26685.0 & 10 \\ 57230.358889 & 0.00001 & 26783.0 & 10 \\ 57240.347829 & 0.0001 & 26873.5 & 0 \\ 57297.355838 & 0.00001 & 27390.0 & 10 \\ 57616.336669 & 0.00001 & 30280.0 & 10 \\ 57645.365088 & 0.00001 & 30543.0 & 10 \\ 57696.247496 & 0.00001 & 31004.0 & 10 \\ 58044.256934 & 0.00001 & 34157.0 & 10 \\ 58349.331172 & 0.00002 & 36921.0 & 10 \\ 58423.281879 & 0.00001 & 37591.0 & 10 \\ 58437.188979 & 0.00001 & 37717.0 & 10 \\ 58781.335569 & 0.00001 & 40835.0 & 10 \\ 59062.458435 & 0.00001 & 43382.0 & 10 \\ 59097.336585 & 0.00001 & 43698.0 & 10 \\ 59108.373995 & 0.00001 & 43798.0 & 10 \\ 59110.360725 & 0.00001 & 43816.0 & 10 \\ 59143.252174 & 0.0001 & 44114.0 & 5 \\ 59147.225634 & 0.00001 & 44150.0 & 10 \\ 59168.196694 & 0.00001 & 44340.0 & 10 \\ \noalign{\smallskip}\hline \end{tabular} \tablefoot{* MUO, Brno,
Czech Republic, ** San Pedro M\'artir Observatory, UNAM, Mexico. } \end{table} \noindent A total of 310 mid-eclipse times were included in our analysis. As in the previous analysis, the secondary minima were not included owing to their lower accuracy. The corresponding O-C diagram is plotted in Fig.~\ref{n1425}, where the cyclical change with a period of about 14 years is clearly visible. The best fit is plotted as a continuous blue curve. As one can see, the mid-eclipse times after epoch 40~000 (September 2019) do not follow the predicted sinusoidal trend, and a rapid period decrease is significant. The O-C2 diagram after subtraction of the previous sinusoidal term is plotted in Fig.~\ref{n1425-3}. Additional cyclic variations with a period of about 5 years and a small amplitude of about 4 sec are evident in these residuals. Thus, a system of multiple companions, or a more complicated process driven by as yet unknown effects, is the more likely explanation for the period changes of this PCEB binary. In any case, a single third body orbiting the eclipsing pair is not sufficient to describe the current shape of the O-C diagram.
\begin{table} \caption{Physical properties of S1435 and N782 and parameters of their possible third bodies.} \begin{center} \begin{tabular}{cccc} \hline\hline\noalign{\smallskip} Parameter & Unit & S1435 & N782 \\ \hline\noalign{\smallskip} $M_1$ & M$_{\odot}$ & 0.50(2) & 0.442(12) \\ $M_2$ & M$_{\odot}$ & 0.21(3) & 0.124(5) \\ $R_1$ & R$_{\odot}$ & 0.0145(2) & 0.172(2) \\ $R_2$ & R$_{\odot}$ & 0.23(2) & 0.157(2) \\ \hline\noalign{\smallskip} Source & & \cite{2009MNRAS.394..978P} & \cite{2017ApJ...839...39L} \\ \hline\noalign{\smallskip} $f(m_3)$ & M$_{\odot}$ & $7.6\cdot 10^{-6}$ & $6.0\cdot 10^{-9}$ \\ $M_{3,\rm min}$ & M$_{\odot}$ & 0.016 & 0.0013 \\ $M_{3,\rm min}$ & M$_{\rm Jup}$ & 16.7 & 1.36 \\ $K$ & km/s & 0.25 & 0.025 \\ $A_{\rm dyn}$ & day & $3.4\cdot 10^{-8}$ & $2.8 \cdot 10^{-6}$ \\ \hline\noalign{\smallskip} \end{tabular} \end{center} \label{t3} \end{table} \section{Discussion} The detection of the LITE also allows us to probe stellar multiplicity among dwarf binary stars. The derived third-body parameters enter the following equation for the mass function \citep{1990BAICz..41..231M}: \medskip \noindent $$ f(M) = \frac{M_3^3 \sin^3 i_3}{(M_1+M_2+M_3)^2} = \frac{1}{P^2_3} \, \left[ \frac {173.15 \, A} {\sqrt{1 - e_3^2 \cos^2 \omega_3}} \right] ^3, $$ \smallskip \noindent where $M_i$ are the masses of the components. The systemic radial velocity of the eclipsing pair has an amplitude (in km/s) of $$ K = \frac{A}{P_3} \frac{5156}{\sqrt{\left(1-e^2_3\right)\,\left(1-e^2_3 \cos^2 \omega_3\right)}}. $$ \smallskip \noindent Assuming a coplanar orbit of the third component ($i_3 \sim 90^{\circ}$), we can obtain a lower limit for its mass, $M_{3, \rm min}$. These quantities for the third body of each system are collected in Table~\ref{t3}.
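As a cross-check of the quantities in Table~\ref{t3}, the lower mass limit $M_{3,\rm min}$ can be recovered by numerically inverting the mass function above for an edge-on orbit ($i_3 = 90^{\circ}$). The following Python sketch is our own illustration (the bisection solver and variable names are not part of the original analysis); with $f(m_3) = 7.6\cdot10^{-6}$~M$_{\odot}$ and $M_1+M_2 = 0.71$~M$_{\odot}$ it reproduces $M_{3,\rm min} \approx 0.016$~M$_{\odot}$ ($\approx 17$~M$_{\rm Jup}$) for S1435:

```python
import math

M_SUN_IN_M_JUP = 1047.57  # approximate solar-to-Jupiter mass ratio

def m3_min(f_obs, m12, lo=1e-8, hi=1.0):
    """Solve f_obs = M3^3 / (M12 + M3)^2 for M3 (edge-on orbit, i3 = 90 deg)
    by simple bisection; all masses and f_obs in solar masses."""
    g = lambda m3: m3**3 / (m12 + m3)**2 - f_obs
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# S1435: f(m3) = 7.6e-6 M_sun, M1 + M2 = 0.50 + 0.21 = 0.71 M_sun
m3 = m3_min(7.6e-6, 0.71)
print(m3, m3 * M_SUN_IN_M_JUP)  # ~0.016 M_sun, ~16.6 M_Jup (cf. Table t3)
```

The monotonicity of $M_3^3/(M_1+M_2+M_3)^2$ in $M_3$ guarantees that the bisection bracket contains a unique root.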
The amplitude of the dynamical contribution of the third body, $A_{\rm dyn}$, is given by \citep{2016MNRAS.455.4136B} $$ A_{\rm dyn} = \frac{1}{2\pi} \frac{M_3}{M_1+M_2+M_3} \frac{P_s^2}{P_3} \, \left(1-e^2_3\right)^{-3/2} $$ \smallskip \noindent and is also listed in Table~\ref{t3}. The value of $A_{\rm dyn}$ is of the order of tenths of a second and is smaller than the precision of an individual mid-eclipse time. Another possible mechanism for cyclical period variation frequently discussed in the literature is a magnetic activity cycle for systems with a late-type secondary component \citep{1992ApJ...385..621A}. Recently, \cite{2018A&A...615A..81N} showed that the Applegate mechanism is energetically feasible in five PCEB systems. However, for N1425 they note that there are no solutions that could explain the eclipsing time variations entirely, but magnetic activity could at least induce relevant scatter in the observed variations. For N1425 and N782, we also used the publicly available Eclipsing time variation calculator\footnote{Applegate calculator: \\ \url{http://theory-starformation-group.cl/applegate/}.} based on the two-zone model by \cite{2016A&A...587A..34V}. For the updated parameters in Table~\ref{t2}, we find that the energy required to drive the Applegate mechanism is approximately $10^2 - 10^5$ times the energy available in the magnetically active secondary (solution for the finite-shell two-zone model). The newly derived LITE periods for the selected objects ($\sim$ 10~years) are too short for the magnetic cycle, and this mechanism cannot contribute significantly to the observed period changes in these systems. In the case of S1435, with its minute amplitude of variations, the above-mentioned calculator only gives a physical solution for the finite-shell constant density model ($\Delta E/E_{\rm sec} \approx 10^7$). A study of the long-term eclipse timings of white dwarf binaries with respect to a magnetic mechanism was presented by \cite{2016MNRAS.460.3873B}.
They found that all binaries with baselines exceeding 10~years, with secondaries of spectral type K2 -- M5.5, show variations in the eclipse arrival times that in most cases amount to several minutes. They conclude that the still relatively short observational baseline for many of the binaries does not yet allow firm conclusions about the cause of orbital period variations in white dwarf binaries. The stability of circumbinary companions in post-common envelope eclipsing binaries was discussed by \cite{2018A&A...611A..48P}. They conclude that the period variation cannot be modeled simply on the basis of a circumbinary object, and thus more complex processes may be taking place in some systems. In the case of S1435, we note that the unseen third body orbiting the eclipsing pair, with a mass of about 17 M$_{\rm Jup}$, lies in the transition regime between planets and brown dwarfs, but is well below the stellar-mass limit of 0.075 M$_{\odot}$. On the other hand, such an orbiting body could be confirmed spectroscopically using modern instruments attached to medium-sized telescopes. The derived amplitude of the systemic radial velocity of S1435 is about 250 m/s (see Table~\ref{t3}). Finally, in N782 we expect a giant planet of Jupiter mass. An abrupt change in the eclipse timings similar to that observed in N1425 also occurred in the well-known PCEB system HS~0705+6700 \citep[][]{2018A&A...611A..48P}. This system underwent a sudden increase of the period in February 2015 (see their Figs. 3 and 4). Thus, the previous single third-body hypothesis can no longer be valid. For such systems, we can introduce a two-satellite model, in which one body orbits the eclipsing pair on a nearly circular orbit, whilst the second companion has a highly eccentric, long-period orbit and has just passed through periastron. After this passage, the system should relax to its previous state with nearly the same periodicity.
On the other hand, two circumbinary brown dwarfs with orbital periods of about 8 and 13 years were recently proposed for HS~0705+6700 \citep{2020MNRAS.499.3071S}, with a relatively good fit to the eclipse times, but in a dynamically unstable configuration. \section{Conclusions} Our careful analysis of O-C diagrams has provided the identification or confirmation of two probable triple systems among known dwarf binaries, namely S1435 and N782. In both systems, the whole third-body orbital period is now covered by reliable mid-eclipse times. The cyclic variations of the orbital period are best explained by the LITE caused by a third body, most likely a brown dwarf with a mass of about 17~M$_{\rm Jup}$\ in S1435 or a giant planet of about 1.4 M$_{\rm Jup}$\ in N782. In the case of N1425, the previous LITE solution supported by many investigators is not confirmed by the current timings. We cannot confirm a single additional body with an orbital period of about 8.8~years, as was last announced by \cite{2019RAA....19..134Z}. For this system, we propose an explanation involving at least two additional bodies. A longer time span is required for an accurate multiple-satellite solution. Future observations of these interesting objects could offer a more concrete explanation for their period changes, which could be caused by a currently unknown or unexpected phenomenon connected with the internal structure of the components, an evolutionary effect, or circumbinary bodies. The sample of well-known PCEB or sdB binaries needs to be increased, and observations of additional systems would be very useful. \medskip \begin{acknowledgements} Useful suggestions and recommendations by an anonymous referee helped to improve the clarity of the paper and are greatly appreciated. M.W. was supported by the Czech Science Foundation grant GA19-01995S. The research of M.W. and P.Z. was also supported by the project Progress Q47 {\sc Physics} of the Charles University in Prague. H.K.
and K.H. were supported by the project RVO: 67985815. The authors would also like to thank Lenka Kotkov\'a, Ond\v{r}ejov\ observatory, Jan Vra\v{s}til, Charles University in Prague, Ji\v{r}\'i Li\v{s}ka and Marek Chrastina, Masaryk University Brno, Ladislav \v{S}melcer, Vala\v{s}sk\'e Mezi\v{r}\'{\i}\v{c}\'{\i}\ observatory, Reinhold Friedrich Auer, S-M-O Veversk\'a B\'it\'y\v{s}ka\ observatory, all from the Czech Republic, for their important contribution to photometric observations. This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program. The following internet-based resources were used in research for this paper: the SIMBAD database and the VizieR service operated at CDS, Strasbourg, France; the NASA's Astrophysics Data System Bibliographic Services. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. This research is part of an ongoing collaboration between professional astronomers and the Czech Astronomical Society, Variable Star and Exoplanet Section. \end{acknowledgements} \bibliographystyle{aa.bst}
\section{Introduction} Hot Jupiters --- planets with masses exceeding 0.1~\ensuremath{M_\mathrm{J}}\xspace and orbital periods shorter than 10 days --- are rare. The probability that a Sun-like star has a hot Jupiter has been variously estimated as 0.4\% -- 1.2\% \citep{Mayor2011,Wright2012,Fressin2013,Petigura2018,Masuda2017}. Despite this low probability, many of the earliest discovered planets were hot Jupiters, including 51\,Pegasi\,b \citep{Mayor1995}. Today, hot Jupiters constitute more than $10\%$ of the sample of confirmed exoplanets. This is because hot Jupiters are relatively easy to detect, producing strong radial-velocity (RV) signals and having conveniently short orbital periods. They also have high transit probabilities, and when they do transit, the fractional loss of light is relatively large. The discovery of hot Jupiters was unexpected because giant planets were supposed to form in orbits wider than a few AU, according to the prevailing core-accretion theory for giant planet formation (e.g. \citealt{Lissauer1993}, although see also \citealt{Struve1952}). There are three categories of responses to this theoretical challenge, as reviewed by \cite{Dawson2018} and \cite{Fortney2021}. One scenario is that hot Jupiters did form in very close orbits, i.e., a sufficiently massive core was able to form within the inner disk despite the low surface density of solids \citep{Rafikov2006,Lee2016,Batygin2016}. In the other two scenarios, giant planets form in wide orbits and then move inward. The inward migration might be caused by torques from the protoplanetary disk \citep{Goldreich1980,Lin1996,Baruteau2014}, or by eccentricity excitation followed by tidal circularization \citep{Rasio1996,Fabrycky2007}. It might be possible to figure out how often each of these processes takes place by examining the properties and statistics of hot Jupiters. 
For example, migration through a disk would lead to an inner boundary for the orbital distances of hot Jupiters corresponding to the inner edge of the disk, while for high-eccentricity migration the inner boundary should occur at roughly twice the Roche radius. Indeed, a possible pile-up of hot Jupiters at an orbital period of $\approx3$~days was first noted by \cite{Cumming1999} and \cite{Udry2003}. \cite{Nelson2017} examined the semimajor axis distribution of the hot Jupiters discovered by the HAT and WASP surveys, arguing that it showed evidence for multiple populations, with 85\% of hot Jupiters residing in a component consistent with having formed through high-eccentricity migration, while the remaining 15\% were consistent with disk migration. Other trends in the population of known hot Jupiters include an increasing occurrence rate with stellar metallicity \citep{Santos2004,Valenti2005}, and possibly also with stellar mass \citep{Johnson2010}, although \cite{Obermeier2016} argued that the current sample size is too small to draw a robust conclusion. When the planet radius/period distribution is examined for periods shorter than 10 days, there appears to be a deficit of planets with radii between about 2 and 9 Earth radii, with the lower limit increasing with orbital period. This is the so-called ``hot Neptune desert'' separating hot Jupiters from smaller and mainly rocky planets \citep{Szabo2011,Mazeh2016}. Despite these advances, our knowledge of the demographics of hot Jupiters remains fuzzy. Paradoxically, we have better knowledge of the demographics of smaller planets with periods ranging out to several months, because of the large and well-understood sample of thousands of such planets that resulted from NASA's Kepler mission \citep{Borucki+2016}. In contrast, the \textit{Kepler}\xspace sample includes only 40 confirmed and 19 candidate hot Jupiters orbiting FGK stars. 
Radial-velocity surveys have discovered 44 hot Jupiters, also a relatively small sample, and it is difficult to interpret the results because different surveys used different selection procedures, such as a preference for high-metallicity stars, and different detection biases. Two of the most statistically well-understood radial-velocity samples are those analyzed by \cite{Cumming2008} and \cite{Rosenthal+2021}, which contained only 12 and 14 hot Jupiters, respectively. More than three quarters of the total observed sample of $\approx$500 hot Jupiters comes from a heterogeneous collection of wide-field ground-based photometric surveys such as WASP \citep{Pollacco2006}, HAT \citep{Bakos2004,Bakos2013}, KELT \citep{Pepper2007}, and NGTS \citep{Wheatley2018}. While these surveys have been very successful in detecting hot Jupiters and have made important contributions to the field, they suffered from complex and severe biases due to irregular data sampling, non-stationary noise properties, limited knowledge of the properties of the stars that were searched, and inability to perform the necessary follow-up observations of many transit candidates. Thus, despite the large sample size, it has proven difficult to use the results of the wide-field surveys for demographics. The \textit{Kepler}\xspace sample of hot Jupiters is probably the most homogeneous sample that is currently available. The \textit{Kepler}\xspace telescope observed about $2 \times 10^5$ stars for 4 years, with a sensitivity high enough to be nearly 100\% complete for hot Jupiters. The results from \textit{Kepler}\xspace provided many new insights, such as a clearer view of the connection between radius inflation and stellar irradiation \citep{Demory2011}, constraints on the frequency of nearby companion planets \citep{Steffen2012}, and the finding that the period pile-up is only evident when the sample is restricted to metal-rich stars \citep{Dawson2013}. 
New questions were also raised, such as the reason for the apparent factor-of-two discrepancy between the occurrence rate of hot Jupiters from different surveys \citep{Wright2012}. However, as noted above, the \textit{Kepler}\xspace sample includes only 40 confirmed and 19 candidate hot Jupiters orbiting FGK stars, limiting the power of statistical studies. The work reported in this paper was undertaken to gauge the completeness of the existing samples of hot Jupiters in the solar neighborhood, and estimate the number and stellar host properties of the hot Jupiters that would need to be detected in order to enlarge the statistically useful sample from 40 to 400. We focused on transit detection, rather than radial-velocity detection, to take advantage of the homogeneity and completeness of the \textit{Kepler}\xspace sample of transiting planets and to set expectations for the large number of hot Jupiters that are detectable using data from the NASA Transiting Exoplanet Survey Satellite (TESS) mission \citep{Ricker+2015}. Prior work by \cite{Beatty2008} had a similar goal, to predict the number of transiting hot Jupiters that would be discovered in wide-field transit surveys as a function of magnitude and galactic latitude. Our work incorporates the knowledge we have gained since then about hot Jupiters and their properties. We used the \textit{Kepler}\xspace sample to estimate the number of transiting hot Jupiters we should expect in a magnitude-limited sample of FGK stars (Section \ref{sec:method}), and compared this to the known sample (Section \ref{sec:results}). Because our results suggested that the current sample is reasonably complete down to a Gaia magnitude of 10.5, we took the opportunity to compare this subsample of hot Jupiters with the independent {\it Kepler} sample (Section \ref{sec:discussion}) and assess the level of agreement in their observed properties. 
\section{Completeness of the Sample of Nearby Transiting Hot Jupiters \label{sec:method}} To gauge the completeness of the current sample of transiting hot Jupiters (HJ), we wanted to estimate how many would have been detected in a magnitude-limited survey of nearby FGK stars. To set expectations for more complicated models, let us start with a simple model. In an idealized magnitude-limited transit survey of a population of identical Sun-like stars isotropically and uniformly distributed in space, the expected total number of detections is \begin{equation} \label{eq:simple} N(m_{\rm lim}) = nfp_{\rm tra} \times \frac{4\pi}{3} d_{\rm ref}^3 \times 10^{0.6(m_{\rm lim} - m_{\rm ref})}, \end{equation} where $m_{\rm lim}$ is the limiting apparent magnitude, $n$ is the number density of stars, $f$ is the fraction of stars with hot Jupiters, $p_{\rm tra}$ is the average geometric transit probability, and $m_{\rm ref}$ is the apparent magnitude of a Sun-like star at an arbitrarily chosen reference distance $d_{\rm ref}$. Based on the Gaia Catalog of Nearby Stars \citep{GCNS}, a reasonable estimate for the number density of stars with absolute magnitudes between 3.5 and 6.5 (roughly the spectral types F6 through K4) is $n = 0.007$ stars/pc$^3$. For $f$, we adopt a hot Jupiter occurrence rate of 0.6\% based on the analysis by \cite{Petigura2018} of the \textit{Kepler}\xspace sample. A reasonable estimate of $p_{\rm tra}$ is 0.1, corresponding to an orbital distance of about 0.05 AU around a Sun-like star. Further choosing $d_{\rm ref} = 116$\,pc and $m_{\rm ref} = 10$ as appropriate for the Sun, we obtain \begin{equation} N(m_{\rm lim}) \approx 25 \times 10^{0.6(m_{\rm lim} - 10)}. \end{equation} The current sample of hot Jupiters includes 19 with host stars brighter than $G=10$, suggesting that it may be $\approx$\,75\% complete down to that magnitude.
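Equation~(\ref{eq:simple}) can be evaluated directly; the short Python sketch below is our own illustration using the parameter values quoted above. Evaluating with these exact inputs gives a prefactor of $\approx$\,27, consistent with the $\approx$\,25 quoted above given the precision of the inputs:

```python
import math

n = 0.007          # stars per pc^3 (F6-K4 dwarfs, Gaia Catalog of Nearby Stars)
f = 0.006          # hot Jupiter occurrence rate (Petigura et al. 2018)
p_tra = 0.1        # mean geometric transit probability
d_ref, m_ref = 116.0, 10.0  # a Sun-like star has G = 10 at 116 pc

def n_expected(m_lim):
    """Expected number of transiting hot Jupiters brighter than m_lim (Eq. above)."""
    return n * f * p_tra * (4 * math.pi / 3) * d_ref**3 * 10**(0.6 * (m_lim - m_ref))

print(n_expected(10.0))   # ~27, compatible with the ~25 prefactor in the text
print(n_expected(8.0))    # ~1.7
# limiting magnitude needed for a magnitude-limited sample of 400 hot Jupiters:
print(m_ref + math.log10(400.0 / n_expected(m_ref)) / 0.6)  # ~11.9
```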
For $m_{\rm lim} = 8$, this formula predicts $N\approx 1.6$, and there are 2 known hot Jupiters (HD\,209458b and KELT-11b) with host stars with $G<8$ and spectral types between F6 and K4. This formula also predicts that to obtain a magnitude-limited sample of 400 hot Jupiters, the required limiting magnitude is 12.0. We wanted to go beyond this crude approximation in order to: \begin{enumerate} \item Take into account the uncertainties arising from Poisson fluctuations and the uncertainty in the hot Jupiter occurrence rate. \item Investigate completeness as a function of galactic latitude, to see if the practical problems associated with detecting and confirming planets in crowded star fields have led to lower completeness near the galactic plane. \item Account for stars of different stellar types and possible variation in hot Jupiter occurrence rate with stellar mass. \item Use the statistics of \textit{Kepler}\xspace transit detections directly, instead of relying on an inferred occurrence rate and a typical transit probability. \end{enumerate} The last goal is complicated by the fact that \textit{Kepler}\xspace was not a magnitude-limited survey. The stars for which data are available were selected based on various criteria related to planet detectability, which depended on stellar effective temperature, radius, and the surface density of nearby stars on the sky \citep{Batalha2010}. Our chosen method builds on similar work by \cite{Masuda2017}, who calculated the expected number of detections of transiting hot Jupiters in the globular cluster 47 Tucanae. In short, we constructed a sample of \textit{Kepler}\xspace stars for which any transiting hot Jupiters would have been detected.
We then used the \textit{Gaia}\xspace catalog to construct a magnitude-limited sample of stars spanning the same range of colors and luminosities as the \textit{Kepler}\xspace sample --- the ``local'' sample --- and matched each local star with a star of similar color and luminosity in the \textit{Kepler}\xspace sample. Whenever a local star was matched to a \textit{Kepler}\xspace star that hosts a transiting hot Jupiter, we assigned the local star a planet with the same properties. We then counted the total number of transiting planets in this ``matched'' catalog, and compared it to the number of hot Jupiters actually detected in the ``local'' sample. By repeating this process many times, we derived the statistical uncertainty in this estimate arising from Poisson fluctuations and the limited number of hot Jupiters detected by \textit{Kepler}\xspace. The underlying premise of this method is that planet occurrence in the solar neighborhood is the same as in the \textit{Kepler}\xspace sample. As we noted in the introduction, \textit{Kepler}\xspace hot Jupiter statistics appear to differ from those found by radial-velocity surveys by 2--3$\sigma$, which may be due to differences in the underlying stellar distribution. Our matching procedure attempts to correct for differences in stellar population in color-magnitude space, but not directly for other factors that may affect the hot Jupiter occurrence rate, such as stellar metallicity, multiplicity, and age. We discuss the validity of our assumptions later in Section \ref{ssec:prev_occurrence}. In the rest of this section, we outline our data selection and matching procedure in greater detail. \subsection{Target Selection \label{ssec:selection}} \begin{figure*} \epsscale{1.15} \plotone{cmd_selection.png} \caption{\label{fig:cmd} Color-magnitude diagrams for the stars in the \textit{Kepler}\xspace sample (left) and the \textit{Gaia}\xspace magnitude-limited ($G < 12.5$) sample (right). 
The black boundary encloses the stars considered in our calculations, as described in Section \ref{ssec:selection}. All magnitudes and colors have been corrected for extinction. Red points indicate the hosts of confirmed and candidate hot Jupiters observed by \textit{Kepler}\xspace, and confirmed hot Jupiters in the magnitude-limited sample. } \end{figure*} \subsubsection{Planet Properties} The occurrence rates reported in the literature are sometimes based on different definitions of hot Jupiters. For our work, we consider a planet to be a hot Jupiter when it has a radius between 0.8 and 2.5 times that of Jupiter, and an orbital period shorter than 10 days. Using this criterion, we found $\approx$\,500 confirmed and candidate hot Jupiters when querying the NASA Exoplanet Archive\footnote{\url{https://doi.org/10.26133/NEA12}}; however, not all of these planets orbit main-sequence FGK stars. Below, we describe our stellar selection process. \subsubsection{Kepler Stars} To select the stars in the \textit{Kepler}\xspace sample, we used the Gaia-Kepler Stellar Properties Catalog \citep{Berger2020}, a homogeneous set of stellar properties derived using an isochrone analysis with \textit{Gaia}\xspace parallaxes and broadband photometry. This catalog includes almost all of the $\approx$\,200{,}000 stars observed by \textit{Kepler}\xspace, subject to cuts based on the quality of parallax measurements and photometry from the 2MASS survey to exclude nearly equal-brightness binaries. We obtained \textit{Gaia}\xspace EDR3 photometry \citep{GaiaEDR3,GaiaEDR3Photometry} for all the members of the catalog. To correct the \textit{Gaia}\xspace photometry for reddening and extinction, we used a standard extinction law $R_V = 3.1$ and the extinction coefficients for \textit{Gaia}\xspace filters derived by \cite{Casagrande2018} (Table 2), together with the $A_V$ extinctions derived by \cite{Berger2020}. 
Figure \ref{fig:cmd} shows the extinction-corrected \ensuremath{M_\mathrm{G}}\xspace, \ensuremath{G_\mathrm{BP} - G_\mathrm{RP}}\xspace color-magnitude diagram for this sample. We then selected FGK main sequence stars using synthetic photometry from the MESA Isochrones and Stellar Tracks \citep[MIST;][]{Choi2016,Dotter2016}. We defined a region in the \textit{Gaia}\xspace color-magnitude diagram bounded by the zero-age main sequence (ZAMS) and terminal-age main sequence (TAMS) isochrones in MIST for stars with masses between 0.7 and 1.2~\ensuremath{M_\odot}\xspace. We used isochrones spanning initial [Fe/H] metallicities from $-0.2$ to $+0.2$, and used the union of the regions bounded by these isochrones as our final stellar selection criterion. This cut on stellar colors and absolute magnitudes led to a sample of 112{,}203 \textit{Kepler}\xspace targets. We also wanted to restrict our \textit{Kepler}\xspace sample to stars around which any transiting hot Jupiters would have been detected. For each star we calculated the Multiple Event Statistic (MES), \begin{equation} \label{eq:mes} \mathrm{MES} = \sqrt{\frac{T_\mathrm{obs}}{P_\mathrm{orb}}} \left( \frac{R_p}{R_\star} \right)^2 \frac{1}{\sigma_\mathrm{CDPP}(T_\mathrm{tra})}, \end{equation} where $T_{\rm obs}$ is the total timespan of observations for a given star, $P_{\rm orb}$ is the orbital period, $R_p$ is the planetary radius, $R_\star$ is the stellar radius, and $\sigma_\mathrm{CDPP}$ is the robust root-mean-squared Combined Differential Photometric Precision \citep{Christiansen2012} on a timescale equal to the maximum possible duration of a transit for a planet on a circular orbit, \begin{equation} \label{eq:transit_duration} T_\mathrm{tra,\,max} \approx 13\,\mathrm{hr}\left(\frac{P_\mathrm{orb}}{1\,\mathrm{yr}}\right)^{1/3} \left(\frac{\rho_\star}{\rho_\odot}\right)^{-1/3}, \end{equation} where $\rho_\star$ is the mean density of the star. 
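To illustrate Equations~(\ref{eq:mes}) and (\ref{eq:transit_duration}), the following Python sketch (our own; the example CDPP value of 100~ppm and the radius ratio are illustrative assumptions, not values from the analysis) computes the maximum transit duration and the MES for the hardest case under our definition, $R_p = 0.8\,R_{\rm Jup}$ and $P_{\rm orb} = 10$~days around a Sun-like star:

```python
import math

R_JUP_OVER_R_SUN = 0.1028  # approximate Jupiter-to-Sun radius ratio

def t_tra_max_hr(p_orb_days, rho_over_rho_sun=1.0):
    """Maximum (central-crossing) transit duration in hours, Eq. above."""
    return 13.0 * (p_orb_days / 365.25)**(1/3) * rho_over_rho_sun**(-1/3)

def mes(t_obs_days, p_orb_days, rp_over_rs, cdpp_ppm):
    """Multiple Event Statistic, Eq. above; sigma_CDPP given in ppm."""
    depth_ppm = 1e6 * rp_over_rs**2
    return math.sqrt(t_obs_days / p_orb_days) * depth_ppm / cdpp_ppm

t_dur = t_tra_max_hr(10.0)          # ~3.9 hr for P = 10 d around a Sun-like star
rp_rs = 0.8 * R_JUP_OVER_R_SUN      # 0.8 R_Jup transiting a solar radius
print(t_dur, mes(4 * 365.25, 10.0, rp_rs, 100.0))
```

Even with a pessimistic CDPP, the resulting MES is orders of magnitude above the detection threshold, consistent with the near-complete recoverability of hot Jupiters discussed below.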
The final version of the NASA \textit{Kepler}\xspace pipeline used a minimum MES threshold of $7.1$ to identify Threshold Crossing Events \citep{Twicken2016,Thompson2018}. We computed the MES for each \textit{Kepler}\xspace target assuming $R_p=0.8\,R_{\rm Jup}$ and $P=10$\,days, the most difficult hot Jupiter to detect according to our definition of hot Jupiters. We used the stellar properties of \cite{Berger2020} and set the transit duration $T_{\rm tra} = (\pi/4)\,T_{\rm tra,\,max}$ to account for averaging over possible transit impact parameters. The value of $\sigma_\mathrm{CDPP}$ was computed for various fixed timescales based on \textit{Kepler}\xspace DR25 \citep{Twicken2016}; we used linear interpolation to compute $\sigma_\mathrm{CDPP}$ for our desired transit duration. Given the calculated MES values for the detection of a hot Jupiter around each target, we excluded those for which the MES is less than 7.1. We also excluded stars for which the observation duration $T_\mathrm{obs}$ was less than 30~days, given that at least three transits needed to be observed for a secure detection. Only 13 of the stars in our \textit{Kepler}\xspace sample were excluded by these criteria. Increasing the MES threshold from 7.1 to $10$ or $17$ changed the number of stars in our final sample by less than one percent. Thus, this exercise served to confirm that \textit{Kepler}\xspace could have detected a transiting hot Jupiter around essentially all of the stars it observed during its 4-year primary mission. This reinforces our notion that the hot Jupiters detected by \textit{Kepler}\xspace represent the most complete and well-understood sample of such planets currently available. Around the stars meeting our selection criteria, \textit{Kepler}\xspace detected a total of 40 confirmed hot Jupiters, and 19 candidate transit signals for which the reported light-curve properties are consistent with hot Jupiters.
We excluded planets with grazing transits (impact parameters $>$\,0.9) because of their lower detectability, larger uncertainties in planet radius and other parameters, and higher likelihood of being false positives. This left us with a sample of 36 confirmed and 6 candidate hot Jupiters. Each of the 6 candidates was assigned a False Positive Probability (FPP) by \cite{Morton2016}. By assigning each of the confirmed planets a weight of 1.0, and each of the candidates a weight of $1 - $~FPP, we arrived at an effective sample size of 41 transiting hot Jupiters drawn from a sample of 112{,}203 stars. We will refer to this \textit{Kepler}\xspace sample by the symbol $\mathcal{S}_K$. \subsubsection{Magnitude-Limited Sample} We then constructed a magnitude-limited sample using data from \textit{Gaia}\xspace EDR3 \citep{GaiaEDR3}. We queried the \textit{Gaia}\xspace archive for all stars with $G < 12.5$ to obtain the photometric and astrometric observations, using a standard quality cut on the parallax ($\varpi / \sigma_\varpi > 5$) to remove suspect data. We also used the geometric distances from \cite{Bailer-Jones2021} to compute absolute $G$-band magnitudes. Although most of these bright stars are relatively nearby and do not suffer significant dust extinction, we nonetheless corrected their \textit{Gaia}\xspace $G$, $G_\mathrm{BP}$, and $G_\mathrm{RP}$ magnitudes using the \texttt{mwdust} Python package \citep{Bovy2016}. In particular, we used the \texttt{Combined19} dust map, which combines the maps from \cite{Green2019}, \cite{Marshall2006}, and \cite{Drimmel2003} to provide full sky coverage. The majority of our stellar sample received only small corrections, with $90\%$ of stars having $E(G_\mathrm{BP} - G_\mathrm{RP}) < 0.15$.
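The absolute-magnitude computation described above amounts to the standard distance-modulus relation with an extinction term. A minimal sketch of our own follows (the function name is illustrative, and $A_G$ is taken as a given input rather than derived from a dust map):

```python
import math

def abs_mag_g(g_app, dist_pc, a_g=0.0):
    """Extinction-corrected absolute G magnitude:
    M_G = G - 5 log10(d / 10 pc) - A_G."""
    return g_app - 5.0 * math.log10(dist_pc / 10.0) - a_g

# Sanity check: an unreddened Sun-like star with G = 10 at 116 pc
# recovers M_G close to the solar value of ~4.7.
print(abs_mag_g(10.0, 116.0))  # ~4.68
```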
To select main sequence FGK stars, we applied the same cut in the extinction-corrected color-magnitude diagram (\ensuremath{M_\mathrm{G}}\xspace versus \ensuremath{G_\mathrm{BP} - G_\mathrm{RP}}\xspace) as we did for the \textit{Kepler}\xspace sample. We did not make any further cuts on astrometric fit quality, out of concern that hot Jupiters may be preferentially associated with stars with wide-orbiting companions \citep[e.g.,][]{Ngo2016} which would affect the quality of the \textit{Gaia}\xspace astrometric fits (\citealt{Belokurov2020}, although see also \citealt{Moe2020}). This procedure yielded 1{,}073{,}225 stars in our magnitude-limited ``local'' sample, which we denote by the symbol $\mathcal{S}$. According to the NASA Exoplanet Archive, there are 154 transiting hot Jupiters known to exist around the stars in this sample. The right panel of Figure \ref{fig:cmd} shows the color-magnitude diagram for the stars and hot Jupiters in this sample. The two stellar populations are different: \textit{Kepler}\xspace target stars were chosen to maximize the number of small planets that could be detected \citep{Batalha2010}, leading to a dominance by G-dwarfs; meanwhile, the Gaia magnitude-limited sample is dominated by early F stars because they are more luminous and can be seen to a greater distance at fixed apparent magnitude (Malmquist bias). If the hot Jupiter occurrence rate varies according to stellar type, then our matching procedure should account for these differences. This was the motivation for the process described in the following section. \subsection{Matching Procedure \label{ssec:matching}} \begin{figure} \epsscale{1.15} \plotone{matched_histograms} \caption{\label{fig:matched_histogram} Absolute magnitude and color distributions of the local sample $\mathcal{S}$, the \textit{Kepler}\xspace sample $\mathcal{S}_K$, and an example of a matched sample. 
Our nearest-neighbors matching procedure ensures that the matched sample has stellar properties similar to the stars in the local sample. } \end{figure} We performed our matching procedure to generate a synthetic catalog of transiting hot Jupiters around the stars in $\mathcal{S}$. First, we drew stars, with replacement, from $\mathcal{S}_K$, to generate a new sample $\tilde{\mathcal{S}}_K$ that has the same number of members as $\mathcal{S}$. This step accounted for the Poisson fluctuations in the number of planets in the \textit{Kepler}\xspace sample. We then associated each star in $\mathcal{S}$ with a star in the resampled set, $\tilde{\mathcal{S}}_K$. To account for the differences in the underlying stellar populations of the two samples, and any possible variation in hot Jupiter occurrence with stellar type, we wanted to match the stars in $\mathcal{S}$ with stars of similar spectral types in $\tilde{\mathcal{S}}_K$. We defined a metric in color-magnitude space, \begin{equation} d = \sqrt{ \left( \frac{\Delta{\rm mag}}{\sigma_{\rm mag}} \right)^2 + \left( \frac{\Delta{\rm color}}{\sigma_{\rm color}} \right)^2 }, \end{equation} where $\Delta$mag is the difference in absolute magnitude, $\Delta$color is the difference in the \ensuremath{G_\mathrm{BP} - G_\mathrm{RP}}\xspace color, and $\sigma_{\rm mag}$ and $\sigma_{\rm color}$ are the standard deviations of the distributions of absolute magnitude and color of the entire sample. Then, for each star in $\mathcal{S}$, we drew a proposed match in $\tilde{\mathcal{S}}_K$, and accepted this proposal with probability $P(d) \propto e^{-3d}$. Whenever a proposed match was rejected, we proposed a new matching star from $\tilde{\mathcal{S}}_K$ until all stars in $\mathcal{S}$ were successfully matched. 
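The metric and rejection step just described can be sketched in a few lines. The (magnitude, color) tuples, pool, and function names below are illustrative toy inputs, not our actual pipeline:

```python
import math
import random

def cmd_distance(star_a, star_b, sigma_mag, sigma_color):
    """Distance in color-magnitude space: quadrature sum of the
    normalized differences in absolute magnitude and color."""
    dmag = (star_a[0] - star_b[0]) / sigma_mag
    dcolor = (star_a[1] - star_b[1]) / sigma_color
    return math.hypot(dmag, dcolor)

def draw_match(star, pool, sigma_mag, sigma_color, rng, beta=3.0):
    """Propose random counterparts from `pool` until one is accepted
    with probability exp(-beta * d)."""
    while True:
        candidate = rng.choice(pool)
        d = cmd_distance(star, candidate, sigma_mag, sigma_color)
        if rng.random() < math.exp(-beta * d):
            return candidate

rng = random.Random(42)
# Toy (absolute magnitude, color) values standing in for a local star
# and the resampled Kepler pool.
local_star = (4.5, 0.85)
kepler_pool = [(4.4, 0.82), (4.6, 0.88), (6.0, 1.40), (3.0, 0.40)]
match = draw_match(local_star, kepler_pool, sigma_mag=1.0, sigma_color=0.3, rng=rng)
```

Because $e^{-3d} \le 1$ for $d \ge 0$, the exponential can be used directly as an acceptance probability, and nearby stars in color-magnitude space are strongly favored.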
The choice of 3 as the factor in the exponent of $P(d)$ was made after some experimentation to balance two effects: if set too small, the resulting distribution would not resemble the target distribution; if set too high, the sparsity of hot Jupiter hosts in the \textit{Kepler}\xspace sample would mean some stars in $\mathcal{S}$ would never be matched with a planet-hosting star in $\mathcal{S}_K$. We arrived at 3 by computing the distances between each \textit{Kepler}\xspace hot Jupiter host and its five closest counterparts, and fitting the resulting distribution of distances with an exponential distribution. In this way, all stars in $\mathcal{S}$ had a nonzero probability of being matched with a hot Jupiter host in $\mathcal{S}_K$. This process generated a ``matched'' catalog of stars, denoted $\mathcal{S}_M$, which comprises real \textit{Kepler}\xspace stars with a distribution in color-magnitude space that is similar to that of the local magnitude-limited sample. Figure \ref{fig:matched_histogram} compares a single realization of a matched catalog to the distribution of stars in the Gaia sample $\mathcal{S}$, illustrating the desired agreement in stellar properties. We then assigned each transiting hot Jupiter around a star in $\mathcal{S}_M$ to its matched counterpart in $\mathcal{S}$. Given that each star is only matched to a similar star, any corrections for changes in transit probability due to differing stellar densities are minimized, and were neglected in our subsequent calculations. The end result was a synthetic catalog of transiting hot Jupiters around a magnitude-limited sample of nearby stars from \textit{Gaia}\xspace, based on their occurrence statistics in the \textit{Kepler}\xspace sample. We repeated this process to generate 1000 realizations of this catalog in order to estimate sampling uncertainties in the results.
\section{Results} \label{sec:results} \begin{figure} \epsscale{1.15} \plotone{hj_mag_distribution_cumulative} \caption{\label{fig:hj_mag_dist} {\bf Top}: Cumulative number of non-grazing transiting hot Jupiters as a function of limiting apparent magnitude, based on simulated \textit{Kepler}\xspace-matched samples of nearby stars (blue) and the actual collection of confirmed hot Jupiters around the same stars (orange). Error bars represent sampling uncertainties. Shaded blue regions represent the expected contribution of undiscovered hot Jupiters located within 10$^\circ$ of the galactic plane. Based on the simulations, a complete survey of $G < 12.5$ FGK stars should contain $\approx$\,400 transiting hot Jupiters. \\ {\bf Bottom}: Estimated completeness of surveys for nearby hot Jupiters. The black histogram is for all stars, and the red histogram is for stars more than 10$^\circ$ from the galactic plane. Based on this comparison, the known sample of hot Jupiters is about 75\% complete down to $G < 10.5$, falling to $50\%$ down to $G < 12.5$. } \end{figure} The results from this matching process are shown in Figure \ref{fig:hj_mag_dist}. Based on these results, we expect that a complete survey of nearby FGK stars brighter than $G < 12.5$ will contain $424^{+98}_{-83}$ transiting hot Jupiters with impact parameters $b < 0.9$ --- assuming that the \textit{Kepler}\xspace hot Jupiters are representative of the nearby population of hot Jupiters. For comparison, 154 hot Jupiters have actually been confirmed around such stars, leaving several hundred more hot Jupiters undiscovered around nearby stars. Figure \ref{fig:hj_mag_dist} also shows that the apparent magnitude distribution of known hot Jupiter hosts is consistent with being 75\% complete down to a limiting \textit{Gaia}\xspace $G$-band magnitude of 10.5. For $G<12.5$, the estimated completeness falls to $36^{+8}_{-7}\%$. 
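The completeness estimate quoted above follows from simple division of the two counts given in the text:

```python
expected_hot_jupiters = 424  # simulated non-grazing hot Jupiters, G < 12.5
known_hot_jupiters = 154     # confirmed hot Jupiters around the same stars

completeness = known_hot_jupiters / expected_hot_jupiters  # ~0.36
```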
Recall that the simpler calculation presented at the beginning of Section \ref{sec:method} predicted that 400 hot Jupiters could be found down to a limiting magnitude of $G < 12$. Our sample-matching procedure suggests that the actual limiting magnitude needs to be 12.4 to reasonably expect to detect 400 transiting hot Jupiters. The differences in the results are due to the simplifying assumptions that were made in the earlier calculation as well as our restriction in the sample-matching procedure that the planets need to show non-grazing transits. We can correct for the latter effect by multiplying the expected number of hot Jupiters in our simulated samples by a factor of $1/0.9$, as would be appropriate for isotropically oriented planetary orbits. This simple correction leads to an expectation of $472^{+110}_{-92}$ transiting hot Jupiters for an apparent magnitude limit of $G=12.5$. \begin{figure} \epsscale{1.15} \plotone{hj_glat_distribution} \caption{Galactic latitude distribution of the expected number of hot Jupiters (blue), and the currently known hot Jupiters (orange), for two different apparent magnitude limits: $G=12.5$ (top) and $G=10.5$ (bottom). Ground-based transit surveys have tended to avoid the galactic plane due to the difficulty of achieving precise photometry in crowded star fields. \label{fig:glat_distribution}} \end{figure} \begin{figure} \epsscale{1.15} \plotone{hj_cont_ratio} \caption{The fraction of hot Jupiter hosts with flux contamination ratios (a measure of the crowdedness of the star field) larger than the value of the x-axis, for the stars in our synthetic catalogs (blue) and confirmed hot Jupiter hosts (orange), in both cases with an apparent magnitude limit of $G=12.5$. The confirmed hot Jupiter hosts are biased toward lower flux contamination ratios and lower crowdedness. If we exclude stars close to the galactic plane (green line), the distribution for the synthetic catalogs is more similar to the confirmed population. 
\label{fig:cont_ratio}} \end{figure} One of the simplifying assumptions that was made in the basic calculation leading to Equation~\ref{eq:simple} was that stars are distributed isotropically in space around the Sun. In reality, the number density of stars varies strongly as a function of galactic latitude, and the number density of stars also affects the detectability of transiting planets. The crowded star fields near the galactic plane lead to various practical difficulties. Having multiple stars within a photometric aperture reduces the amplitude of the transit signal and increases the noise level due to Poisson fluctuations and variations in the point-spread function. A separate issue is that the incidence of ``false positive'' signals is higher in crowded fields, due to the higher density of background eclipsing binaries whose eclipses can be mistaken for planetary transits. Furthermore, the radial-velocity observations and other spectroscopic follow-up observations that are necessary for planet confirmation are made more difficult in the presence of nearby bright stars. Figure \ref{fig:glat_distribution} shows that for a limiting magnitude of 10.5, most of the ``missing'' hot Jupiters are probably to be found within $10^\circ$ of the galactic plane. This strip at low galactic latitudes subtends 17\% of the sky and is home to $21\pm 2\%$ of hot Jupiters in our simulated samples, but it only contains 6.5\% (10 out of 154) of the sample of known hot Jupiters. If we restrict ourselves only to stars with $|b| > 10^\circ$, then the estimated completeness of the $G < 10.5$ magnitude-limited sample of hot Jupiters rises to $99^{+41}_{-33}\%$. Our simulations also suggest that there are a handful (1--4) of hot Jupiters orbiting stars brighter than 10th magnitude waiting to be discovered by those without fear of treading within the galactic plane. 
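The 17\% sky fraction quoted above is a quick solid-angle calculation: the strip $|b| < 10^\circ$ covers a fraction $\sin 10^\circ$ of the celestial sphere.

```python
import math

b_cut_deg = 10.0
# The solid angle of the strip |b| < b_cut on the unit sphere is
# 4*pi*sin(b_cut); dividing by the full 4*pi leaves just sin(b_cut).
sky_fraction = math.sin(math.radians(b_cut_deg))  # ~0.174
```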
To explore the practical effects of crowdedness on finding and confirming planets, we examined the TESS Candidate Target List \citep[CTL;][]{Stassun2018,Stassun2019}. This catalog includes for each star an estimate of the ``flux contamination ratio,'' defined as the estimated fraction of the total flux within 3.5$'$ (10 TESS pixels) that comes from neighboring stars. While the search radius of 3.5$'$ is larger than the typical size of photometric apertures used in ground-based surveys, we expect the TESS flux contamination ratios are representative of the degree of crowding. We also note that the CTL flux contamination ratio was only computed based on nearby stars resolved by Gaia DR2, and so will not include the dilution from companions closer than $\lesssim 1"$. Figure \ref{fig:cont_ratio} shows the distribution of flux contamination ratios of the known hot Jupiter hosts and for our synthetic catalogs with an apparent magnitude limit of 12.5. As expected, the comparison shows that the known hot Jupiter population is biased toward lower flux contamination ratios (reduced crowding). While $< 5\%$ of confirmed hot Jupiter hosts have contamination ratios of 0.5 and greater, almost three times that fraction of hosts in our synthetic catalogs have such large ratios. When we exclude the synthetic catalog stars close to the galactic plane ($|b| < 10^{\circ}$), their distribution more closely matches that of the confirmed planets. Nonetheless, most stars have relatively low flux contamination of $< 0.2$, which seems too small for the signal dilution and increased Poisson noise to cause much damage to hot Jupiter detection. Instead, we suspect that the main factors are the higher incidence of false positives and the difficulty of follow-up observations. 
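To make the flux contamination ratio concrete, here is a sketch of how such a ratio could be computed from apparent magnitudes, using the standard flux-magnitude relation $f \propto 10^{-0.4\,m}$. This illustrates the definition only; it is not the actual CTL computation, which also accounts for the TESS pixel response.

```python
def flux(mag):
    """Relative flux corresponding to an apparent magnitude
    (arbitrary zero point, which cancels in the ratio)."""
    return 10.0 ** (-0.4 * mag)

def contamination_ratio(target_mag, neighbor_mags):
    """Fraction of the total in-aperture flux contributed by neighbors."""
    f_target = flux(target_mag)
    f_neighbors = sum(flux(m) for m in neighbor_mags)
    return f_neighbors / (f_target + f_neighbors)

# A hypothetical 10th-magnitude target with two faint neighbors
# falling inside the photometric aperture.
ratio = contamination_ratio(10.0, [12.5, 13.0])  # ~0.14
```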
Assuming that these practical problems are not easily solved, and therefore excluding all stars within 10$^\circ$ of the galactic plane from all the samples, we estimate that a complete sample of FGK stars with $G < 12.5$ would contain $334^{+82}_{-70}$ transiting hot Jupiters, as compared to the 144 known hot Jupiters in the currently known sample. The earlier work of \cite{Beatty2008} presented an analytic framework for predicting the yield of transit surveys, including the effects of galactic structure and the window function for ground-based surveys. They estimated $\approx 80$ HJs with orbital period $P < 5$~days would be found in a magnitude-limited survey of Sun-like stars down to $V \leq 12$. Indeed, when we restrict our synthetic catalogs to $P < 5$~days and $V \leq 12$, we obtain an expected yield of $89^{+19}_{-18}$ planets. This good agreement is in part due to \cite{Beatty2008}'s use of occurrence rates from the OGLE-III survey \citep{Gould2006}, which found a hot Jupiter occurrence rate of $0.45\%$, similar to the low occurrence rate found by \textit{Kepler}\xspace (see Section \ref{ssec:prev_occurrence}). Our work differs from their analytic study by performing a numerical matching of stars in the all-sky and \textit{Kepler}\xspace catalogs, accounting for possible variation of occurrence rates with stellar and planet properties, and producing synthetic catalogs of transiting planets. \section{Discussion \label{sec:discussion}} \subsection{Comparison of Possibly Complete Samples \label{ssec:complete_samples}} \begin{figure} \epsscale{1.15} \plotone{hj_period_dist} \plotone{hj_rad_dist} \caption{Period and radius distributions of transiting hot Jupiters in the \textit{Kepler}\xspace (blue) and magnitude-limited sample (orange). The two populations appear to have differing properties, but this discrepancy is reduced when we resample the transiting HJ population to match the stellar distribution of \textit{Kepler}\xspace targets (green). 
The error bars on the green histogram reflect the 1-$\sigma$ widths of the distribution based on 1000 resampled catalogs. \label{fig:period_rad_comparison}} \end{figure} \begin{figure} \epsscale{1.15} \plotone{hj_irradiation} \caption{Radius versus stellar irradiation flux, for the hot Jupiters in the \textit{Kepler}\xspace and magnitude-limited samples. The planets in the magnitude-limited sample tend to orbit hotter stars and have larger sizes than the planets in the \textit{Kepler}\xspace sample. \label{fig:hj_insolation}} \end{figure} Based on the comparison between the apparent magnitude distribution of the hosts of known hot Jupiters and the distribution one would expect based on the incidence of transiting hot Jupiters in the \textit{Kepler}\xspace survey, we are currently aware of about 40\% of the hot Jupiters with host stars brighter than $G=12.5$. However, for a magnitude limit of $G = 10.5$, the total number of known hot Jupiters is consistent with the expected number, once we exclude the region of the sky within 10$^\circ$ of the galactic plane. Our \textit{Kepler}\xspace-matching procedure predicts $33\pm7$ transiting hot Jupiters, in agreement with the 33 transiting hot Jupiters that have been found. This suggests that we might already be in possession of a nearly complete magnitude-limited sample of hot Jupiters down to $G=10.5$. Thus, we took the opportunity to compare this ``possibly complete'' subset of hot Jupiters --- which should be less biased in planet properties than the known hot Jupiter population as a whole --- with the sample of 42 \textit{Kepler}\xspace hot Jupiters. Figure \ref{fig:period_rad_comparison} compares the period and radius distributions of both of these real samples of hot Jupiters. There are some hints of possible differences in these distributions, despite the fact that we expect both samples to be nearly complete. 
The period distribution of \textit{Kepler}\xspace hot Jupiters exhibits a pile-up at $\approx\,3$~days, as first noted by \cite{Latham2011} in an early catalog of Kepler Objects of Interest. Any such peak does not seem as pronounced in the magnitude-limited sample of hot Jupiters. However, visual inspection can be misleading. A two-sided Kolmogorov-Smirnov (K-S; \citealt{Kolmogorov1933,Smirnov1948}) test cannot reject the hypothesis that the two distributions are drawn from the same distribution ($p=0.3$). The relatively small number of planets in both samples (33 and 42 for the magnitude-limited and \textit{Kepler}\xspace samples respectively) limits the statistical power of the comparison. We also compared the radius distributions of the two populations, and this time, the K-S test does reject the null hypothesis ($p= 2\times10^{-5}$). The magnitude-limited sample contains a higher proportion of planets larger than 1.2\,$\Rjup$ than the \textit{Kepler}\xspace sample. At least part of this discrepancy may arise from the different distributions of stellar types within the two samples. The \textit{Kepler}\xspace sample has a higher fraction of G-type stars and a lower fraction of F-type stars than the magnitude-limited sample (Figure \ref{fig:cmd}). Thus, the hot Jupiters in the magnitude-limited sample are subject to higher levels of stellar irradiation (Figure \ref{fig:hj_insolation}), which has been shown to be associated with larger planetary radii \citep[see, e.g.,][]{Fortney2007,Demory2011}. To investigate if this is indeed the case, we performed a resampling procedure to try to correct for the differences in stellar properties. Our source population was the 154 hot Jupiters in the $G < 12.5$ sample described in Section \ref{ssec:selection}.
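The two-sample K-S statistic used in these comparisons is simply the largest gap between the two empirical CDFs. A minimal self-contained sketch (with toy radius values, not our actual samples; in practice one would use a library routine such as \texttt{scipy.stats.ks\_2samp}, which also supplies the $p$-value):

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the two empirical CDFs."""
    a = sorted(sample_a)
    b = sorted(sample_b)
    d_max = 0.0
    for v in sorted(set(a) | set(b)):
        cdf_a = sum(1 for x in a if x <= v) / len(a)
        cdf_b = sum(1 for x in b if x <= v) / len(b)
        d_max = max(d_max, abs(cdf_a - cdf_b))
    return d_max

# Toy "radius" samples in Jupiter radii.
d_stat = ks_statistic([1.0, 1.05, 1.3, 1.4], [1.2, 1.3, 1.5, 1.6])
```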
From this source population, we randomly drew planets to create sets of 41 hot Jupiter hosts (the same number as in the $G < 10.5$ sample) using a rejection sampling procedure, wherein planet hosts were accepted with a probability proportional to the density of \textit{Kepler}\xspace targets in color-magnitude space. We generated 1000 resampled catalogs of HJ host stars, each of which had a distribution in color-magnitude space that was statistically indistinguishable from that of the \textit{Kepler}\xspace hot Jupiter hosts. Figure \ref{fig:period_rad_comparison} shows that the properties of the resampled catalogs resemble those of the \textit{Kepler}\xspace sample more closely than the actual magnitude-limited sample. K-S tests comparing the radius distribution of the resampled catalog with the \textit{Kepler}\xspace HJs gave a median $p$-value of 0.13, indicating that the null hypothesis of identical parent distributions can no longer be rejected. This suggests that the apparent difference in the planet radii of the $G < 10.5$ and \textit{Kepler}\xspace samples, as discussed above, is at least in part due to differences in the distribution of host star properties. Nonetheless, these results must be interpreted with caution, due to the small sizes of the two samples. Our resampling process showed that due to the small number of planets, features in the period and radius distributions cannot be identified conclusively, especially when considering possible differences in stellar type. Furthermore, our claim that the $G < 10.5$ magnitude-limited sample is ``possibly complete'' is based only on the total number of detections as a function of apparent magnitude. To check if this is really the case, one would need to understand and model the selection function for the surveys for nearby transiting hot Jupiters, a much larger effort which we did not attempt.
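The rejection-sampling step described above can be sketched as follows. The one-dimensional "color" stand-in and the density function below are hypothetical; the actual procedure used the two-dimensional color-magnitude density of \textit{Kepler}\xspace targets:

```python
import random

def resample_by_density(hosts, target_density, rng, n_draws):
    """Rejection-sample hosts so that their distribution follows
    `target_density` (values in [0, 1], proportional to the target
    population's density at each host's position)."""
    accepted = []
    while len(accepted) < n_draws:
        host = rng.choice(hosts)
        if rng.random() < target_density(host):
            accepted.append(host)
    return accepted

rng = random.Random(0)
# Toy host "colors"; the hypothetical density favors redder stars,
# standing in for the Kepler sample's higher G-dwarf fraction.
hosts = [0.6, 0.7, 0.8, 0.9, 1.0]
density = lambda color: min(1.0, color)  # hypothetical, bounded by 1
matched = resample_by_density(hosts, density, rng, n_draws=41)
```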
\subsection{Comparison with Previous Occurrence Rates \label{ssec:prev_occurrence}} As a check on the matching procedure described in Section \ref{sec:method}, we used this same procedure to estimate the occurrence rate of hot Jupiters in the \textit{Kepler}\xspace sample. We did so by simply matching $\mathcal{S}_K$ with itself. This yielded many realizations of catalogs of transiting hot Jupiters that we used to estimate the sampling uncertainty. To find the total occurrence rate of hot Jupiters, rather than the rate of transiting hot Jupiters, we weighted each transiting planet by the product of $a/\Rstar$ (the inverse transit probability) and the factor $1/0.9$ that accounts for our restriction on the transit impact parameter. The result was an estimated occurrence rate of $0.38 \pm 0.06\%$. This is in line with the estimate of $0.43^{+0.07}_{-0.06}\%$ by \cite{Masuda2017}, who used a similar procedure. Other studies of the \textit{Kepler}\xspace hot Jupiter population have also arrived at similar results. \cite{Santerne2016} found an occurrence rate of $0.47\pm0.08\%$, and \cite{Petigura2018} found a rate of $0.57^{+0.14}_{-0.12}\%$ based on a more limited subset of the \textit{Kepler}\xspace stars. These \textit{Kepler}\xspace occurrence rates are a factor of $\sim$2 lower than those derived from other studies, albeit with modest statistical significance. \cite{Cumming2008} found a hot Jupiter occurrence rate of $1.5 \pm 0.6$\% from the Keck radial-velocity surveys, although the parent stellar sample was not constructed blindly with regard to planet host status. \cite{Wright2012} used a cleaner sample from the Lick and Keck planet searches, finding an occurrence rate of $1.2 \pm 0.4\%$. This is consistent with the result of $0.89\pm0.36\%$ reported by \cite{Mayor2011} based on the HARPS and CORALIE radial-velocity searches. This discrepancy is not limited to comparisons between transit and radial-velocity surveys.
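The de-biasing arithmetic just described can be sketched as follows: each detected transiting planet is weighted by $a/\Rstar$ (the inverse of the transit probability $\Rstar/a$ for a circular orbit) and by $1/0.9$ to undo the impact-parameter cut. The $a/\Rstar$ values and star count below are hypothetical:

```python
def occurrence_rate(a_over_rstar_values, n_stars, b_max=0.9):
    """Total occurrence rate from transiting detections: each planet is
    weighted by a/R* (the inverse transit probability) and by 1/b_max
    to undo the impact-parameter restriction."""
    total = sum(a_r / b_max for a_r in a_over_rstar_values)
    return total / n_stars

# Hypothetical a/R* values for a handful of detected hot Jupiters.
rate = occurrence_rate([6.0, 7.5, 8.0, 9.0], n_stars=10_000)
```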
Based on the CoRoT transit survey, \cite{Deleuil2018} reported a hot Jupiter occurrence rate of $0.98\pm0.26\%$, higher than the \textit{Kepler}\xspace results and consistent with the radial-velocity results. Many authors have investigated possible reasons for the discrepancies in occurrence rates. \begin{itemize} \item \cite{Wang2015} argued that $12.5\pm0.2\%$ of hot Jupiters in the \textit{Kepler}\xspace sample were likely misidentified as smaller planets due to flux contamination by nearby stars or errors in the estimated stellar radius, but this would be insufficient to account for a factor-of-two difference in measured occurrence rates. \item \cite{Guo2017} investigated whether the strong association between stellar metallicity and hot Jupiter occurrence could be responsible for the discrepancies. Through spectroscopic observations, they found that \textit{Kepler}\xspace stars are more metal-poor by about $0.04$~dex than the stars in the Lick and Keck radial-velocity surveys, whereas a full resolution of the discrepancy between the point estimates of the hot Jupiter occurrence rates would have required a metallicity difference of 0.2--0.3~dex. \item \cite{Bouma2018} used simple analytic models to argue that the observational biases arising from unrecognized binaries in the \textit{Kepler}\xspace sample are not large enough to resolve this discrepancy. \item \cite{Moe2020} advanced a hypothesis that binarity is responsible for the discrepancies after all, due to an astrophysical effect. Hot Jupiters may not be able to form around a star with a close stellar companion ($a < 100$~AU). Because close binaries are systematically excluded from radial-velocity surveys, the inferred occurrence rate of hot Jupiters would be higher than in transit surveys that do not exclude close binaries. This would still leave unexplained the relatively high occurrence rate derived from CoRoT survey \citep{Deleuil2018}. 
\end{itemize} If the true occurrence rate of hot Jupiters around FGK stars is higher than has been inferred from the \textit{Kepler}\xspace sample, then the matching procedure we used in this study would underestimate the number of hot Jupiters that we expect to find around nearby bright stars. In that case, the ``possibly complete'' sample that we discussed in Section \ref{ssec:complete_samples} would be farther from complete than it originally appeared. \section{Summary and Conclusions \label{sec:concusion}} To better understand the origins of hot Jupiters, it would help to have a better census of their properties: the distributions and correlations between their periods, radii, masses, orbital parameters, host star parameters, and occurrence of companion planets and companion stars. Currently, the census that is easiest to interpret statistically is the sample of about 40 hot Jupiters found by the \textit{Kepler}\xspace mission, which was capable of detecting transiting hot Jupiters with $\gtrsim\,99\%$ probability around more than one hundred thousand FGK stars. The total number of known hot Jupiters is an order of magnitude larger than the number in the \textit{Kepler}\xspace sample, but demographic studies cannot yet take full advantage of this much larger sample size because of the unknown and undoubtedly complex selection functions of the many surveys that have found hot Jupiters. We did not attempt to model these selection functions. Instead, based on the simpler considerations of the distribution of apparent magnitudes and the occurrence of transiting hot Jupiters in the \textit{Kepler}\xspace survey, we have shown that the current sample of transiting hot Jupiters is consistent with being complete down to a limiting apparent magnitude of 10.5, if we exclude the region of the sky within $10^\circ$ of the galactic plane.
We examined this subsample of hot Jupiter hosts alongside the \textit{Kepler}\xspace sample to compare the distributions of orbital periods and planet radii, which are indistinguishable, at this stage, after accounting for differences in the color-magnitude distribution of the host stars. If there are any differences, or if the longstanding discrepancies between the hot Jupiter occurrence rates measured in different surveys turn out to have interesting astrophysical origins, then only larger samples of planets will reveal them. We showed that our current knowledge of hot Jupiter demographics is far from complete. We quantified the limiting magnitude and galactic latitudes of stars around which we need to search for hot Jupiters, finding that many planets remain to be found even around relatively bright stars, which should be easily detectable by TESS, and which are also nearby and bright enough to appear in many other all-sky surveys, such as the \textit{Gaia}\xspace astrometric survey and the APOGEE spectroscopic surveys. The overlap of these surveys may allow us to discover new connections between hot Jupiters, the properties of their host stars, and the presence of wide-orbiting companions. To increase the size of the statistically useful sample of hot Jupiters by an order of magnitude, we should aim for a complete sample of hot Jupiters down to a limiting apparent magnitude of about $G = 12.5$. Around such stars, we expect $424^{+98}_{-83}$ transiting (and non-grazing) hot Jupiters, of which 154 are already known, leaving approximately 250 to be discovered and confirmed. This would represent a significant effort but also a major advance in our understanding of hot Jupiter origins.
TESS presents an immediate opportunity to perform this task, thanks to its nearly all-sky coverage, 27-day observing baseline, and high photometric precision, enabling the construction of a nearly complete sample of HJs down to 12th magnitude \citep{Sullivan2015,Zhou2019}, while the homogeneous and continuous data will make it possible to understand the selection function much better than those of previous ground-based transit surveys. \acknowledgements This research made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. The work also made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. Work by SWY and JNW was funded by the Heising-Simons Foundation and the TESS project (NASA contract NNG14FC03C). JH acknowledges support from the TESS GI Program, programs G011103 and G022117, through NASA grants 80NSSC19K0386 and 80NSSC19K1728. \facility{Exoplanet Archive} \software{ \texttt{astropy} \citep{Astropy13,Astropy18}; \texttt{numpy} \citep{Numpy}; \texttt{scipy} \citep{Scipy}; \texttt{matplotlib} \citep{Matplotlib}.}
\section{Introduction} The package \pkg{hhsmm}, developed in the \proglang{R} language \citep{r10}, provides new tools for modeling multivariate and multi-sample time series by hidden hybrid Markov/semi-Markov models, introduced by \cite{g05}. A hidden hybrid Markov/semi-Markov model (HHSMM) is a model with both Markovian and semi-Markovian states. This package is available from the Comprehensive R Archive Network (CRAN) at \url{https://cran.r-project.org/package=hhsmm}. Hidden hybrid Markov/semi-Markov models have many applications in situations where the model contains absorbing states or macro-states. These flexible models decrease the time complexity of hidden semi-Markov models while preserving their predictive power. Another important application of hidden hybrid Markov/semi-Markov models is in genetics, where the aim is to analyze DNA sequences containing long intergenic zones. Several packages are available for modeling hidden Markov and semi-Markov models. Some of the packages developed in the \proglang{R} language are \pkg{depmixS4} \citep{vs10}, \pkg{HiddenMarkov} \citep{h06}, \pkg{msm} \citep{j07}, \pkg{hsmm} \citep{bea10} and \pkg{mhsmm} \citep{oh11}. The packages \pkg{depmixS4}, \pkg{HiddenMarkov} and \pkg{msm} only consider hidden Markov models (HMM), while the two packages \pkg{hsmm} and \pkg{mhsmm} focus on hidden Markov and hidden semi-Markov (HSMM) models for single and multiple sequences, respectively. These packages do not include hidden hybrid Markov/semi-Markov models, which are included in the \pkg{hhsmm} package. The \pkg{mhsmm} package has tools for fitting HMM and HSMM models to multiple sequences, while the \pkg{hsmm} package lacks such a capability. This capability is preserved in the \pkg{hhsmm} package. Furthermore, the \pkg{mhsmm} package is equipped with the ability to define new emission distributions, by using the \code{mstep} functions, which is also preserved in the \pkg{hhsmm} package.
In addition to all these differences, the \pkg{hhsmm} package is distinguished from the \pkg{hsmm} and \pkg{mhsmm} packages by the following features: \begin{itemize} \item Initialization tools are developed for initial clustering, parameter estimation, and model initialization; \item Left-to-right models, { which are models in which the process moves from one state to the next and never returns to a previous state, such as the health states of a system or the states of a phoneme in speech recognition,} are considered; \item { The ability to initialize, fit, and predict models from data sets containing missing values, using the EM algorithm and imputation methods, is included; } \item { The regime Markov/semi-Markov switching linear and additive regression models, as well as the auto-regressive HHSMM, are included; } \item { Nonparametric estimation of the emission distribution using penalized B-splines is added;} \item Prediction of future states is included; \item Estimation of the residual useful lifetime (RUL), { which is the remaining time to the failure of a system (the last state of the left-to-right model describing the health of the system),} is developed for the left-to-right models used in reliability applications; \item Continuous sojourn time distributions are considered in their correct form; \item The Commercial Modular Aero-Propulsion System Simulation (\code{CMAPSS}) data set is included in the package. \end{itemize} There are also tools for modeling HMMs in other languages. For instance, the \pkg{hmmlearn} library in \proglang{Python} and the \code{hmmtrain} and \code{hmmestimate} functions in the Statistics and Machine Learning Toolbox of \proglang{Matlab} are available for modeling HMMs, but none of them is suitable for modeling HSMMs or HHSMMs. The remainder of the paper is organized as follows.
In Section \ref{s2}, we introduce the hidden hybrid Markov/semi-Markov models (HHSMM), proposed by \cite{g05}. Section \ref{s3} presents a simple example of the HHSMM model and the \pkg{hhsmm} package. { Section \ref{s5} presents special features of the \pkg{hhsmm} package, including tools for handling missing values, initialization tools, the nonparametric mixture of B-splines for estimation of the emission distribution, regime (Markov/semi-Markov) switching regression, the auto-regressive HHSMM, prediction of the future state sequence, residual useful lifetime (RUL) estimation for reliability applications, continuous-time sojourn distributions, and some other features of the \pkg{hhsmm} package.} Finally, the analysis of two real data sets is considered in Section \ref{rdas}, to illustrate the application of the \pkg{hhsmm} package. \section{Hidden hybrid Markov/semi-Markov models }\label{s2} { Consider a sequence of observations $\{X_t\}$, which is observed for $t = 1,\ldots, T$. Assume that the distribution of $X_t$ depends on an unobserved (latent) variable $S_t$, called the \emph{state}. If the sequence $\{S_t\}$ is a Markov chain of order 1, and for any $t\geq 1$, $X_t$ and $X_{t+1}$ are conditionally independent, given $S_t$, then the sequence $\{(S_t, X_t)\}$ forms a hidden Markov model (HMM). A graphical representation of the dependence structure of the HMM is shown in Figure \ref{hmmg}. \begin{figure} \centerline{\includegraphics[scale=0.5]{hmmg.png}} \caption{A graphical representation of the dependence structure of the HMM.}\label{hmmg} \end{figure} The parameters of an HMM are the \emph{initial probabilities} of the states, the \emph{transition probability} matrix of states, and the parameters of the conditional distribution of observations given states, which is called the \emph{emission} distribution. The time spent in a state is called the sojourn time. In the HMM, the sojourn time distribution is easily shown to be geometric.
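As a quick numerical check of this claim (sketched in Python for illustration; the package itself is written in R), a Markovian state with self-transition probability $\tilde{p}_{jj}$ has sojourn-time law $d_j(u) = (1-\tilde{p}_{jj})\tilde{p}_{jj}^{u-1}$, which sums to one and has mean $1/(1-\tilde{p}_{jj})$:

```python
def geometric_sojourn_pmf(p_stay, u_max):
    """Sojourn-time pmf of a Markovian state with self-transition
    probability p_stay: d(u) = (1 - p_stay) * p_stay**(u - 1)."""
    return [(1.0 - p_stay) * p_stay ** (u - 1) for u in range(1, u_max + 1)]

p_stay = 0.8
pmf = geometric_sojourn_pmf(p_stay, u_max=200)
mean_sojourn = sum(u * d for u, d in enumerate(pmf, start=1))
# The pmf sums to ~1 (truncation error 0.8**200 is negligible) and the
# mean sojourn is ~1 / (1 - 0.8) = 5 time steps.
```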
The hidden semi-Markov model (HSMM) is similar to the HMM, except that the sojourn time distribution can be any distribution with positive support, discrete or continuous, such as the Poisson, negative binomial, logarithmic, gamma, Weibull, and log-normal distributions, or a nonparametric distribution. } { The hidden hybrid Markov/semi-Markov model (HHSMM), introduced by \cite{g05}, is a combination of the HMM and HSMM models.} It is defined, for $t=0,\ldots,\tau-1$ and $j=1,\ldots,J$, by the following parameters: \begin{enumerate} \item initial probabilities $\pi_j = P(S_0 = j),\; \sum_{j}\pi_j = 1$, \item transition probabilities, which are \begin{itemize} \item for a semi-Markovian state $j$, $$p_{jk} = P(S_{t+1} =k|S_{t+1} \neq j,S_t=j), \; \forall k\neq j;\; \sum_{k\neq j}p_{jk} = 1 ;\; p_{jj} = 0$$ \item for a Markovian state $j$, $$ \tilde{p}_{jk} = P(S_{t+1} = k|S_t = j);\; \sum_{k}\tilde{p}_{jk} = 1$$ \end{itemize} By the above definition, { any \emph{absorbing state}, which is a state $j$ with $\tilde{p}_{jj} = 1$, is Markovian. This means that if we want to include an absorbing state along with some semi-Markovian states in the model, we need to use the HHSMM model. } \item emission distribution parameters, $\theta$, for the following distribution function $$f_j(x_t) = f(x_t | S_t = j; \; \theta)$$ \item the sojourn time distribution, defined for a semi-Markovian state $j$ as follows $$d_j(u) = P(S_{t+u+1} \neq j,\;S_{t+u-\nu} = j, \;\nu = 0, \ldots,u-2|S_{t+1} = j ,\; S_t \neq j), \quad u = 1,\ldots,M_j,$$ where $M_j$ stands for an upper bound on the time spent in state $j$. Also, the survival function of the sojourn time distribution is defined as $D_j(u) = \sum_{\nu\geq u}d_j(\nu)$.
\end{enumerate} For a Markovian state $j$, the sojourn time distribution is the geometric distribution with the following probability mass function $$d_j(u) = (1-\tilde{p}_{jj})\tilde{p}_{jj}^{u-1},\quad u = 1,2,\cdots $$ { The parameter estimation of the model is performed via the \emph{EM algorithm} \citep{dea77}. The EM algorithm consists of two steps. In the first step, called the \emph{E-step}, the conditional expectations of the unobserved variables (states) given the observations, called the \emph{E-step probabilities}, are computed. This step utilizes the \emph{forward-backward} algorithm to calculate the E-step probabilities. The second step is the maximization step (\emph{M-step}). In this step, the parameters of the model are updated by maximizing the conditional expectation of the logarithm of the joint probability density/mass function of the observed and unobserved data. A brief description of the EM and forward-backward algorithms, as well as the Viterbi algorithm, is given in the Appendix. The Viterbi algorithm obtains the most likely state sequence for the HHSMM model. } \subsection{Examples of hidden hybrid Markov/semi-Markov models} Some examples of HHSMM models are as follows: \begin{itemize} \item {\bf Models with macro-states}: Macro-states are series or parallel networks of states with a common emission distribution. A fully semi-Markovian model cannot be used for macro-states, and a hybrid Markov/semi-Markov model is a good choice in such situations \citep[see][]{cr86, dea98, g05}. \item {\bf Models with absorbing states:} An absorbing state is Markovian by definition. Thus, a model with an absorbing state cannot be fully semi-Markovian. \item {\bf Left-to-right models}: Left-to-right models are useful tools in the reliability analysis of systems subject to failure. Another application of these models is in speech recognition, where the feature sequence extracted from a voice signal is modeled by a left-to-right model of states.
The transition matrix of a left-to-right model is an upper triangular matrix with its final diagonal element equal to one, since the last state of a left-to-right model is absorbing. Thus, a hidden hybrid Markov/semi-Markov model, rather than a fully hidden semi-Markov model, should be used in such cases. \item {\bf Analysis of DNA sequences}: It is observed that the lengths of some intergenic zones in DNA sequences are approximately geometrically distributed, while the lengths of other zones may deviate from the geometric distribution \citep{g05}. \end{itemize} \section{A simple example}\label{s3} To illustrate the application of the \pkg{hhsmm} package for initializing, fitting, and predicting a hidden hybrid Markov/semi-Markov model, we first present a simple example. We emphasize that the aim of this example is not to compare different models; rather, it shows how the flexible options of the \pkg{hhsmm} package can be used to initialize and fit different models. To do this, we define a model with two Markovian states and one semi-Markovian state, and 2, 3, and 2 mixture components in states 1--3, respectively, as follows. The sojourn time distribution of the semi-Markovian state is taken to be the gamma distribution (see Section \ref{cts}). The Boolean vector \code{semi} is used to specify the Markovian and semi-Markovian states. Also, the mixture component proportions are defined using the parameter list \code{mix.p}.
\begin{lstlisting} |\bf \color{lightgray} R$>$| J <- 3 |\bf \color{lightgray} R$>$| initial <- c(1, 0, 0) |\bf \color{lightgray} R$>$| semi <- c(FALSE, TRUE, FALSE) |\bf \color{lightgray} R$>$| P <- matrix(c(0.8, 0.1, 0.1, 0.5, 0, 0.5, 0.1, 0.2, 0.7), |\bf \color{lightgray} +| nrow = J, byrow = TRUE) |\bf \color{lightgray} R$>$| par <- list(mu = list(list(7, 8), list(10, 9, 11), |\bf \color{lightgray} +| list(12, 14)), sigma = list(list(3.8, 4.9), |\bf \color{lightgray} +| list(4.3, 4.2, 5.4), list(4.5, 6.1)), |\bf \color{lightgray} +| mix.p = list(c(0.3, 0.7), c(0.2, 0.3, 0.5), c(0.5, 0.5))) |\bf \color{lightgray} R$>$| sojourn <- list(shape = c(0, 3, 0), scale = c(0, 10, 0), |\bf \color{lightgray} +| type = "gamma") |\bf \color{lightgray} R$>$| model <- hhsmmspec(init = initial, transition = P, |\bf \color{lightgray} +| parms.emis = par, dens.emis = dmixmvnorm, |\bf \color{lightgray} +| sojourn = sojourn, semi = semi) \end{lstlisting} Now, we simulate the \code{train} and \code{test} data sets, using the \code{simulate} function. The \code{remission} argument is set to \code{rmixmvnorm}, a function that generates random samples from a mixture of multivariate normal distributions. The data sets are plotted using the \code{plot} function. The plots of the \code{train} and \code{test} data sets are presented in Figures \ref{1} and \ref{2}, respectively. Different states are distinguished by different colors along the horizontal axis.
\begin{lstlisting} |\bf \color{lightgray} R$>$| train <- simulate(model, nsim = c(50, 40, 30, 70), seed = 1234, |\bf \color{lightgray} +| remission = rmixmvnorm) |\bf \color{lightgray} R$>$| test <- simulate(model, nsim = c(80, 45, 20, 35), seed = 1234, |\bf \color{lightgray} +| remission = rmixmvnorm) |\bf \color{lightgray} R$>$| plot(train) |\bf \color{lightgray} R$>$| plot(test) \end{lstlisting} \begin{figure} \centerline{\includegraphics[scale=0.5]{simple-example1.png}} \caption{The plots of the 4 sequences of the \code{train} data set.}\label{1} \end{figure} \begin{figure} \centerline{\includegraphics[scale=0.5]{simple-example1-2.png}} \caption{The plots of the 4 sequences of the \code{test} data set.}\label{2} \end{figure} In order to initialize the parameters of the HHSMM model, we first obtain an initial clustering of the \code{train} data set, using the \code{initial\_cluster} function. The \code{nstate} argument is set to 3, and the number of mixture components in the three states is set to \code{c(2, 2, 2)}. The \code{ltr} and \code{final.absorb} arguments are set to \code{FALSE}, which means that the model is not left-to-right and the final element of each sequence is not assumed to be in an absorbing state. Thus, the \code{kmeans} algorithm \citep{ll82} is used for the initial clustering. \begin{lstlisting} |\bf \color{lightgray} R$>$| clus = initial_cluster(train, nstate = 3, nmix = c(2, 2, 2), |\bf \color{lightgray} +| ltr = FALSE, final.absorb = FALSE, verbose = FALSE) \end{lstlisting} Now, we initialize the model parameters using the \code{initialize\_model} function. The initial clustering output \code{clus} is used for the estimation of the parameters. The sojourn time distribution is set to the \code{"gamma"} distribution. First, we use the true value of the \code{semi} vector for modeling. Thus, the initialized model is a hidden hybrid Markov/semi-Markov model.
\begin{lstlisting} |\bf \color{lightgray} R$>$| semi <- c(FALSE, TRUE, FALSE) |\bf \color{lightgray} R$>$| initmodel1 = initialize_model(clus = clus, sojourn = "gamma", |\bf \color{lightgray} +| M = max(train$N), semi = semi) \end{lstlisting} The model is then fitted using the \code{hhsmmfit} function as follows. The initialized model \code{initmodel1} is used as the start value. \begin{lstlisting} |\bf \color{lightgray} R$>$| fit1 = hhsmmfit(x = train, model = initmodel1, M = max(train$N), |\bf \color{lightgray} +| par = list(verbose = FALSE)) \end{lstlisting} The log-likelihood trend can also be extracted and plotted as follows. This plot is presented in Figure \ref{3}. \begin{lstlisting} |\bf \color{lightgray} R$>$| plot(fit1$loglik[-1], type = "b", ylab = "Log-likelihood", |\bf \color{lightgray} +| xlab = "Iteration") \end{lstlisting} \begin{figure} \centerline{\includegraphics[width=7cm]{simple-example2.png}} \caption{The log-likelihood trend during the model fitting. }\label{3} \end{figure} {One can observe, for instance, the estimated initial probabilities, transition matrix, and the estimated parameters of the sojourn distribution as follows. \begin{lstlisting} |\bf \color{lightgray} R$>$| fit1$model$init [1] 0.6261528 0.1144543 0.2593930 |\bf \color{lightgray} R$>$| fit1$model$transition [,1] [,2] [,3] [1,] 9.638477e-01 0.02294345 0.01320884 [2,] 4.571514e-01 0.00000000 0.54284857 [3,] 3.460872e-10 0.09880954 0.90119046 |\bf \color{lightgray} R$>$| fit1$model$sojourn $shape [1] 0.0000000 0.9714843 0.0000000 $scale [1] 0.0000 21.2017 0.0000 $type [1] "gamma" \end{lstlisting}} The state sequence is now predicted using the default method \code{"viterbi"} of the \code{predict} function for the \code{test} data set. Because the labels of the predicted states may be displaced relative to the true ones (label switching), the homogeneity of the predicted states is computed using the \code{homogeneity} function for the three states.
{ Since the states are essentially clusters, the homogeneity measures used in cluster analysis are useful for measuring the agreement of two state sequences. The homogeneity of a specified cluster (state) in two sequences is defined as the percentage of the members of that cluster (state) that are assigned to the same cluster (state) in both sequences.} The output of the \code{homogeneity} function shows the {homogeneity} percentage for the two sequences of states. \begin{lstlisting} |\bf \color{lightgray} R$>$| yhat1 <- predict(fit1, test) |\bf \color{lightgray} R$>$| homogeneity(yhat1$s , test$s) [1] 0.9191686 0.8564920 0.7553957 \end{lstlisting} Now, we initialize and fit a fully Markovian model (HMM) by setting \code{semi} to \code{c(FALSE, FALSE, FALSE)}. The same clustering output \code{clus} can be used here. \begin{lstlisting} |\bf \color{lightgray} R$>$| semi <- c(FALSE, FALSE, FALSE) |\bf \color{lightgray} R$>$| initmodel2 = initialize_model(clus = clus, M = max(train$N), |\bf \color{lightgray} +| semi = semi) \end{lstlisting} The model is again fitted using the \code{hhsmmfit} function. \begin{lstlisting} |\bf \color{lightgray} R$>$| fit2 = hhsmmfit(x = train, model = initmodel2, M = max(train$N), |\bf \color{lightgray} +| par = list(lock.init = TRUE, verbose = FALSE)) \end{lstlisting} We can compare some of the estimated parameters of this model with those of the previous one. \begin{lstlisting} |\bf \color{lightgray} R$>$| fit2$model$init [1] 0.3333333 0.3333333 0.3333333 |\bf \color{lightgray} R$>$| fit2$model$transition [,1] [,2] [,3] [1,] 9.681322e-01 0.01670787 0.01515991 [2,] 1.695894e-02 0.96554352 0.01749754 [3,] 4.716631e-16 0.08631537 0.91368463 \end{lstlisting} Now, we predict the state sequence of the fitted model and compute its homogeneity with the true state sequence.
\begin{lstlisting} |\bf \color{lightgray} R$>$| yhat2 <- predict(fit2, test) |\bf \color{lightgray} R$>$| homogeneity(yhat2$s , test$s) [1] 0.9237875 0.8609272 0.8400000 \end{lstlisting} Finally, we initialize and fit a fully semi-Markovian model (HSMM) to the \code{train} data set, by setting \code{semi} to \code{c(TRUE, TRUE, TRUE)}. The \code{"gamma"} distribution is considered as the sojourn time distribution for all states. \begin{lstlisting} |\bf \color{lightgray} R$>$| semi <- c(TRUE, TRUE, TRUE) |\bf \color{lightgray} R$>$| initmodel3 = initialize_model(clus = clus, sojourn = "gamma", |\bf \color{lightgray} +| M = max(train$N), semi = semi) |\bf \color{lightgray} R$>$| fit3 = hhsmmfit(x = train, model = initmodel3, M = max(train$N), |\bf \color{lightgray} +| par = list(verbose = FALSE)) |\bf \color{lightgray} R$>$| fit3$model$transition [,1] [,2] [,3] [1,] 0.000000e+00 0.02704853 0.9729515 [2,] 2.242357e-01 0.00000000 0.7757643 [3,] 4.037597e-06 0.99999596 0.0000000 |\bf \color{lightgray} R$>$| fit3$model$sojourn $shape [1] 4.2375333 3.4556658 0.1567049 $scale [1] 8.802259 4.408482 31.021809 $type [1] "gamma" \end{lstlisting} The prediction and homogeneity computation for this model are performed as follows. \begin{lstlisting} |\bf \color{lightgray} R$>$| yhat3 <- predict(fit3, test) |\bf \color{lightgray} R$>$| homogeneity(yhat3$s , test$s) [1] 0.9232737 0.8069414 0.7358491 \end{lstlisting} \section{Special features of the package}\label{s5} The \pkg{hhsmm} package has several special features, which are described in the following subsections. {\subsection{Handling missing values} The \pkg{hhsmm} package is equipped with tools for handling data sets with missing values. A special imputation algorithm is used in the \code{initial\_cluster} function.
This algorithm imputes a completely missing row of the data with the average of its previous and next rows, while if only some columns are missing, the predictive mean matching method of the function \code{mice} from the package \pkg{mice} \citep{mice}, with $m=1$, is used to obtain the initial imputation of the missing values. After the initial clustering and the initial estimation of the model parameters, the \code{miss\_mixmvnorm\_mstep} function is used as the M-step function of the EM algorithm for initializing and fitting the model. In each iteration of the EM algorithm, this function computes the conditional means and conditional second moments of the missing values given the observed values, and updates the parameters of the Gaussian mixture emission distribution using the \code{cov.miss.mix.wt} function. Furthermore, in each iteration, the mixture component weights are approximated using the observed values and the conditional means of the missing values given the observed values. The values of the emission density function used in the E-step of the EM algorithm are computed by replacing the missing values with their conditional means given the observed values. Here, we provide a simple example to examine the performance of this method. First, we define a model with three states and two variables.
\begin{lstlisting} |\bf \color{lightgray} R$>$| J <- 3 |\bf \color{lightgray} R$>$| initial <- c(1, 0, 0) |\bf \color{lightgray} R$>$| semi <- c(FALSE, TRUE, FALSE) |\bf \color{lightgray} R$>$| P <- matrix(c(0.8, 0.1, 0.1, 0.5, 0, 0.5, 0.1, 0.2, 0.7), |\bf \color{lightgray} +| nrow = J, byrow = TRUE) |\bf \color{lightgray} R$>$| par <- list(mu = list(list(c(7, 17), c(8, 18)), |\bf \color{lightgray} +| list(c(15, 25), c(14, 24), c(16, 16)), |\bf \color{lightgray} +| list(c(0, 10), c(2, 12))), |\bf \color{lightgray} +| sigma = list(list(diag(c(2.8, 4.8)), diag(c(3.9, 5.9))), |\bf \color{lightgray} +| list(diag(c(3.3, 5.3)), diag(c(3.2, 5.2)), |\bf \color{lightgray} +| diag(c(4.4, 6.4))), list(diag(c(3.5, 5.5)), diag(c(5.1, 7.1)))), |\bf \color{lightgray} +| mix.p = list(c(0.3, 0.7), c(0.2, 0.3, 0.5), c(0.5, 0.5))) |\bf \color{lightgray} R$>$| sojourn <- list(shape = c(0, 3, 0), scale = c(0, 10, 0), |\bf \color{lightgray} +| type = "gamma") |\bf \color{lightgray} R$>$| model <- hhsmmspec(init = initial, transition = P, |\bf \color{lightgray} +| parms.emis = par, dens.emis = dmixmvnorm, |\bf \color{lightgray} +| sojourn = sojourn, semi = semi) \end{lstlisting} Now, we simulate the complete \code{train} and \code{test} data sets. \begin{lstlisting} |\bf \color{lightgray} R$>$| train <- simulate(model, nsim = c(10, 8, 8, 18), seed = 1234, |\bf \color{lightgray} +| remission = rmixmvnorm) |\bf \color{lightgray} R$>$| test <- simulate(model, nsim = c(8, 6, 6, 15), seed = 1234, |\bf \color{lightgray} +| remission = rmixmvnorm) \end{lstlisting} First, we initialize and fit the model using the complete data sets. To do this, we use the \code{initial\_cluster} function to provide an initial clustering of the \code{train} data set. \begin{lstlisting} |\bf \color{lightgray} R$>$| clus = initial_cluster(train, nstate = 3, nmix = c(2, 2, 2), |\bf \color{lightgray} +| ltr = FALSE, final.absorb = FALSE, verbose = FALSE) \end{lstlisting} Now, we initialize and fit the model.
\begin{lstlisting} |\bf \color{lightgray} R$>$| semi <- c(FALSE, TRUE, FALSE) |\bf \color{lightgray} R$>$| initmodel1 = initialize_model(clus = clus, sojourn = "gamma", |\bf \color{lightgray} +| M = max(train$N), semi = semi) |\bf \color{lightgray} R$>$| fit1 = hhsmmfit(x = train, model = initmodel1, M = max(train$N), |\bf \color{lightgray} +| par = list(verbose = FALSE)) \end{lstlisting} Finally, we predict the state sequence of the test data set, using the \code{predict.hhsmm} function and the default \code{"viterbi"} method. \begin{lstlisting} |\bf \color{lightgray} R$>$| yhat1 <- predict(fit1, test) \end{lstlisting} To examine the tools for modeling the data sets with missing values, we randomly select some elements of the \code{train} and \code{test} data sets and replace them with \code{NA}, as follows. \begin{lstlisting} |\bf \color{lightgray} R$>$| p = ncol(train$x) |\bf \color{lightgray} R$>$| n = nrow(train$x) |\bf \color{lightgray} R$>$| sammissless = sample(1:n, trunc(n / 10)) |\bf \color{lightgray} R$>$| sammissall = sample(1:n, trunc(n / 20)) |\bf \color{lightgray} R$>$| misrat = matrix(rbinom(trunc(n / 10) * p, 1, 0.2), |\bf \color{lightgray} +| trunc(n / 10), p) |\bf \color{lightgray} R$>$| train$x[sammissall, ] <- NA |\bf \color{lightgray} R$>$| for(i in 1:trunc(n / 10)) |\bf \color{lightgray} +| train$x[sammissless[i], misrat[i,] == 1] <- NA |\bf \color{lightgray} R$>$| nt = nrow(test$x) |\bf \color{lightgray} R$>$| sammissless = sample(1:nt, trunc(nt / 12)) |\bf \color{lightgray} R$>$| sammissall = sample(1:nt, trunc(nt / 25)) |\bf \color{lightgray} R$>$| misrat = matrix(rbinom(trunc(nt / 12)*p, 1, 0.15), |\bf \color{lightgray} +| trunc(nt / 12), p) |\bf \color{lightgray} R$>$| test$x[sammissall,] <- NA |\bf \color{lightgray} R$>$| for(i in 1:trunc(nt/12)) |\bf \color{lightgray} +| test$x[sammissless[i], misrat[i, ] == 1] <- NA \end{lstlisting} Now, we provide the initial clustering of the incomplete \code{train} data set using the 
\code{initial\_cluster} function. \begin{lstlisting} |\bf \color{lightgray} R$>$| clus = initial_cluster(train, nstate = 3, nmix = c(2, 2, 2), |\bf \color{lightgray} +| ltr = FALSE, final.absorb = FALSE, verbose = FALSE) \end{lstlisting} We can observe that the output of the \code{initial\_cluster} function contains a flag that indicates the presence of missing values in the data set. \begin{lstlisting} |\bf \color{lightgray} R$>$| clus$miss TRUE \end{lstlisting} Now, we initialize and fit the model for the incomplete data set. \begin{lstlisting} |\bf \color{lightgray} R$>$| semi <- c(FALSE, TRUE, FALSE) |\bf \color{lightgray} R$>$| initmodel2 = initialize_model(clus = clus, sojourn = "gamma", |\bf \color{lightgray} +| M = max(train$N), semi = semi) |\bf \color{lightgray} R$>$| fit2 = hhsmmfit(x = train, model = initmodel2, |\bf \color{lightgray} +| M = max(train$N), par = list(lock.init = TRUE, verbose = FALSE)) \end{lstlisting} Similarly, we predict the state sequence of the incomplete test data set, using the \code{predict.hhsmm} function. \begin{lstlisting} |\bf \color{lightgray} R$>$| yhat2 <- predict(fit2, test) \end{lstlisting} We can observe that the homogeneities of the predictions for the complete and incomplete data sets are very close to each other. \begin{lstlisting} |\bf \color{lightgray} R$>$| homogeneity(yhat1$s, test$s) [1] 0.8487395 0.9793814 0.0000000 |\bf \color{lightgray} R$>$| homogeneity(yhat2$s, test$s) [1] 0.9830508 0.8595041 0.0000000 \end{lstlisting}} \subsection{Tools and methods for initializing the model}\label{initsec} To initialize the HHSMM model, we need to obtain an initial clustering of the train data set. For a left-to-right model (option \code{ltr = TRUE} of the \code{initial\_cluster} function), we propose Algorithm \ref{alg2}, which uses Algorithm \ref{alg1} to obtain a left-to-right initial clustering; both algorithms are implemented in the function \code{ltr\_clus} of the \pkg{hhsmm} package.
These algorithms use Hotelling's T-squared test statistic as the distance measure for clustering. Simulations and real data analyses show that the starting values obtained by the proposed algorithm perform well for a left-to-right model (see Section \ref{rdas} for a real data application). If the model is not a left-to-right model, then the usual K-means algorithm is used for clustering. Furthermore, the K-means algorithm is used within each initial state to cluster the data into mixture components. The number of mixture components can be determined automatically, using the option \code{nmix = "auto"}, by analyzing the within-cluster sums of squares obtained from the \code{kmeans} function. The number of random starts of \code{kmeans} is set to 10 for the stability of the results. The initial clustering is performed using the \code{initial\_cluster} function. \begin{algorithm} \caption{The left-to-right clustering algorithm for two clusters.} \label{alg1} \begin{algorithmic} \item For $s = 2,\ldots,k-2$, consider the partitions $\{ 1, \ldots ,k\} = \{ 1, \ldots, s\} \cup \{ s+1, \ldots, k\} $ and compute the means $$\bar{X}_{1s} = \frac{1}{s} \sum_{i=1}^s X_i, \quad \bar{X}_{2s} = \frac{1}{k-s} \sum_{i=s+1}^k X_i,$$ the variance-covariance matrices $$\Sigma_{1s} = \frac{1}{s-1} \sum_{i=1}^s (X_i-\bar{X}_{1s})(X_i-\bar{X}_{1s})^\top , \Sigma_{2s} = \frac{1}{k-s-1} \sum_{i=s+1}^k (X_i-\bar{X}_{2s})(X_i-\bar{X}_{2s})^\top,$$ and the standardized distances (Hotelling's T-squared test statistic) $$d_s = \frac{(s(k-s)/k)(k-p-1)}{(k-2)p}(\bar{X}_{1s}-\bar{X}_{2s})^\top\Sigma_{ps}^{-1} (\bar{X}_{1s}-\bar{X}_{2s}),$$ where $$\Sigma_{ps}=\frac{(s-1)\Sigma_{1s}+(k-s-1)\Sigma_{2s}}{k-2}.$$ \item Let $s^* = \arg\max_s d_s$.
\item If $d_{s^*} > F_{(0.05;p,k-1-p)}$, the clusters are $\{ 1, \ldots, s^*\} $ and $ \{ s^*+1, \ldots, k\}$; otherwise, no split is made. Here, $F_{(0.05;p,k-1-p)}$ stands for the 95th percentile of the F distribution with $p$ and $k-1-p$ degrees of freedom. \end{algorithmic} \end{algorithm} After obtaining the initial clustering, the initial estimates of the parameters of the mixture of multivariate normal emission distributions are obtained. Furthermore, the parameters of the sojourn time distributions are obtained by running method-of-moments estimation algorithms on the time-duration observations of the initial clustering of each state. If we set \code{sojourn = "auto"} in the \code{initialize\_model} function, the best sojourn time distribution is selected from the list of available sojourn time distributions, using a chi-square goodness-of-fit test on the initial cluster data of all states. \begin{algorithm} \caption{The left-to-right clustering algorithm for $K>2$ clusters.} \label{alg2} \begin{algorithmic} \item Let Nclust $= 1$. While Nclust $< K$ and the clusters change, do \begin{itemize} \item for each cluster, run Algorithm \ref{alg1} to split it into two clusters. \end{itemize} \item If Nclust $> K$, then while Nclust $> K$, do \begin{itemize} \item merge the cluster with the minimum $d_{s^*}$ value with its closest neighbour on its right or left. \end{itemize} \end{algorithmic} \end{algorithm} { \subsection{Nonparametric mixture of B-splines emission} Usually, the emission distribution belongs to a parametric family of distributions. Although the mixture of multivariate normals has proved to be a good choice in many practical situations, there are also examples in which this class of emission distributions fails to capture the skewness and tail weight of the data set. Furthermore, the choice of the number of components of the mixture distribution in each state is a challenge when using a mixture of multivariate normals as the emission distribution.
As an alternative to parametric emission distributions, HMMs and HSMMs with nonparametric estimates of the state-dependent distributions are shown to be more parsimonious in terms of the number of states, easier to interpret, and better fitted to the data \citep{let15,aea19}. The nonparametric estimation approach proposed by \cite{let15} and \cite{aea19} is based on the idea of representing the densities of the emission distributions as linear combinations of B-spline basis functions, and on adding a smoothing penalty term to the quasi-log-likelihood function. In this model, the emission distribution is defined as follows \begin{equation}\label{nmbse} f_j(x) = \sum_{k=-K}^{K} a_{j,k} \phi_k(x), \quad j=1,\ldots,J, \end{equation} where $\{\phi_{-K}(\cdot),\ldots,\phi_{K}(\cdot)\}$ is a sequence of B-spline basis functions and $ \{a_{j,k}\}$ are the unknown coefficients to be estimated. These parameters are estimated in the M-step of the EM algorithm, by maximizing the following penalized quasi-log-likelihood function \begin{equation}\label{pqllf} \ell_P^{\rm HHSMM}(\theta,\lambda) = \log(L^{\rm HHSMM}(\theta)) - \frac{1}{2}\sum_{j=1}^J \lambda_j \sum_{k=-K+2}^{K} (\Delta^2 a_{j,k})^2, \end{equation} in which $L^{\rm HHSMM}(\theta)$ is the quasi-likelihood of the HHSMM model, $\theta$ denotes the parameters of the model, $\Delta a_k = a_k - a_{k-1}$, $\Delta^2 a_k = \Delta (\Delta a_k)$, and $\lambda_1,\ldots,\lambda_J$ are the smoothing parameters, which are estimated as follows \citep{sk12} $$ \hat{\lambda}_j = \frac{{\rm df}(\hat{\lambda}_j) - p}{\sum_{k=-K+2}^{K}(\Delta^2 \hat{a}_{j,k})^2}, $$ where $p$ is the dimension of the data, $${\rm df}(\hat{\lambda}_j)={\rm tr}\left(H^{-1}(\hat{a}_j;\lambda_j = \hat{\lambda}_j) H(\hat{a}_j;\lambda_j = 0)\right),$$ and $H(\hat{a};\lambda)$ is the Hessian matrix of the log-quasi-likelihood at $\hat{a}$ for the specified value of $\lambda$.
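Concretely, the penalty in \eqref{pqllf} is a weighted sum of squared second differences of the B-spline coefficients, so a coefficient sequence that is linear in $k$ incurs no penalty, while a wiggly sequence is penalized. A minimal Python/NumPy sketch of this computation (the function name is ours, not part of the package):

```python
import numpy as np

def second_diff_penalty(a, lam):
    """(lam / 2) * sum_k (D2 a_k)^2, where D2 a_k = a_k - 2 a_{k-1} + a_{k-2}."""
    d2 = np.diff(a, n=2)              # vector of second differences
    return 0.5 * lam * np.sum(d2 ** 2)

smooth = np.arange(7.0)               # linear in k: second differences vanish
wiggly = np.array([0.0, 1.0, 0.0, 1.0, 0.0])
print(second_diff_penalty(smooth, lam=2.0))  # 0.0
print(second_diff_penalty(wiggly, lam=2.0))  # 12.0
```

Larger $\lambda_j$ thus pulls the estimated emission density in state $j$ toward a smoother shape, at the cost of fidelity to the data.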
To illustrate the application of the \pkg{hhsmm} package with a nonparametric mixture of B-splines emission distribution, we present a simple simulated data example. To do this, we first simulate data from an HHSMM model with a mixture of multivariate normals as the emission distribution, as follows. \begin{lstlisting} |\bf \color{lightgray} R$>$| J <- 3 |\bf \color{lightgray} R$>$| initial <- c(1,0,0) |\bf \color{lightgray} R$>$| semi <- c(FALSE,TRUE,FALSE) |\bf \color{lightgray} R$>$| P <- matrix(c(0.8, 0.1, 0.1, 0.5, 0, 0.5, 0.1, 0.2, 0.7), |\bf \color{lightgray} +| nrow = J, byrow = TRUE) |\bf \color{lightgray} R$>$| par <- list(mu = list( |\bf \color{lightgray} +| list(c(7, 17), c(8, 18)), |\bf \color{lightgray} +| list(c(15, 25), c(14, 24), c(16, 16)), |\bf \color{lightgray} +| list(c(0, 10), c(2, 12))), |\bf \color{lightgray} +| sigma = list(list(diag(c(2.8, 4.8)), diag(c(3.9, 5.9))), |\bf \color{lightgray} +| list(diag(c(3.3, 5.3)), diag(c(3.2, 5.2)), diag(c(4.4, 6.4))), |\bf \color{lightgray} +| list(diag(c(3.5, 5.5)), diag(c(5.1, 7.1)))), |\bf \color{lightgray} +| mix.p = list(c(0.3, 0.7), |\bf \color{lightgray} +| c(0.2, 0.3, 0.5),c(0.5, 0.5))) |\bf \color{lightgray} R$>$| sojourn <- list(shape = c(0,3,0), scale = c(0,10,0), |\bf \color{lightgray} +| type = "gamma") |\bf \color{lightgray} R$>$| model <- hhsmmspec(init = initial, transition = P, |\bf \color{lightgray} +| parms.emis = par, dens.emis = dmixmvnorm, |\bf \color{lightgray} +| sojourn = sojourn, semi = semi) |\bf \color{lightgray} R$>$| train <- simulate(model, nsim = c(10, 8, 8, 18), seed = 1234, |\bf \color{lightgray} +| remission = rmixmvnorm) |\bf \color{lightgray} R$>$| test <- simulate(model, nsim = c(8, 6, 6, 15), seed = 1234, |\bf \color{lightgray} +| remission = rmixmvnorm) \end{lstlisting} Now, we obtain an initial clustering of the data set using the \code{initial\_cluster} function. 
Note that for a nonparametric emission distribution, there are no mixture components, and we should use the option \code{nmix = NULL}. \begin{lstlisting} |\bf \color{lightgray} R$>$| clus = initial_cluster(train, nstate = 3, nmix = NULL, |\bf \color{lightgray} +| ltr = FALSE, final.absorb = FALSE, verbose = FALSE) \end{lstlisting} In order to initialize an HHSMM with nonparametric estimates of the emission distributions, we use the \code{initialize\_model} function with the arguments \code{mstep = nonpar\_mstep} and \code{dens.emission = dnonpar}, as follows. \begin{lstlisting} |\bf \color{lightgray} R$>$| semi <- c(FALSE, TRUE, FALSE) |\bf \color{lightgray} R$>$| initmodel1 = initialize_model(clus = clus, mstep = nonpar_mstep, |\bf \color{lightgray} +| dens.emission = dnonpar, sojourn = "gamma", M = max(train$N), |\bf \color{lightgray} +| semi = semi) \end{lstlisting} Now, we can use the \code{hhsmmfit} function to fit the model. \begin{lstlisting} |\bf \color{lightgray} R$>$| fit1 = hhsmmfit(x = train, model = initmodel1, M = max(train$N), |\bf \color{lightgray} +| par = list(verbose = FALSE)) \end{lstlisting} Finally, we predict the state sequence of the test data and compute the homogeneity of the predicted sequence and the real sequence as follows. \begin{lstlisting} |\bf \color{lightgray} R$>$| yhat1 <- predict(fit1, test) |\bf \color{lightgray} R$>$| homogeneity(yhat1$s, test$s) [1] 0.9210526 0.8508772 0.8750000 \end{lstlisting} As one can see from the output of the \code{homogeneity} function, the fitted model predicts the state sequence of the new data set with high accuracy.
{ \subsection{Regime (Markov/semi-Markov) switching regression model} \cite{kea08} considered the following Gaussian regime-switching model \begin{equation}\label{rsrm} y_{t} = x_{t}^T \beta_{s_t} + \sigma_{s_t}\epsilon_t, \end{equation} where $y_{t}$ is the response variable, $x_{t}$ is a vector of covariates, which may include lagged values of $y_{t}$ (the auto-regressive HHSMM; see the next subsection), $s_t$ is the state, and $\epsilon_t$ is the regression error, which is assumed to follow the standard normal distribution, for $t=1,\ldots,T$. Model \eqref{rsrm} can easily be extended to the case of multivariate responses and to mixtures of multivariate normals. The difference between the regime-switching model \eqref{rsrm} and the HHSMM model is that, instead of using the density of $y_{t}$ given $s_t$ in the likelihood function, we use the conditional density of $y_{t}$ given $x_{t}$ and $s_t$. A graphical representation of the regime-switching model is presented in Figure \ref{msregg}. The parameters of the regime-switching regression model can be estimated using the EM algorithm. \begin{figure} \centerline{\includegraphics[scale=0.35]{msregg.png}} \caption{Graphical representation of the regime-switching model.}\label{msregg} \end{figure} \cite{let18} considered an extension of model \eqref{rsrm} to the following additive regime-switching model \begin{equation}\label{rsrm2} y_{t} = \mu_{s_t} + \sum_{j=1}^p f_{j,s_t}(x_{j,t}) + \sigma_{s_t}\epsilon_t, \end{equation} where $f_{j,s_t}(\cdot),\; j=1,\ldots,p$, are unknown regression functions for the $p$ covariates. They utilized penalized B-splines for the estimation of the regression functions.
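To make model \eqref{rsrm} concrete, the following Python sketch (ours; the two regimes and all parameter values are invented for illustration) generates data from two regression regimes and recovers the state-specific coefficients by per-state least squares, which is what the M-step update reduces to when the E-step probabilities are degenerate (zero/one).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-state parameters: y_t = x_t' beta_{s_t} + sigma_{s_t} * eps_t
beta = {0: np.array([3.0, -1.0]),    # intercept, slope in state 0
        1: np.array([-10.0, 1.0])}   # intercept, slope in state 1
sigma = {0: 0.5, 1: 1.0}

# A known state sequence; in the EM fit the states are latent and each
# observation is instead weighted by its E-step probability.
s = np.repeat([0, 1, 0], [200, 300, 200])
x = rng.normal(size=s.size)
X = np.column_stack([np.ones_like(x), x])
B = np.stack([beta[j] for j in s])
y = np.einsum('ij,ij->i', X, B) + np.array([sigma[j] for j in s]) * rng.normal(size=s.size)

# Per-state least squares = the M-step with degenerate (0/1) weights.
est = {j: np.linalg.lstsq(X[s == j], y[s == j], rcond=None)[0] for j in (0, 1)}
print(np.round(est[0], 1), np.round(est[1], 1))
```

The estimates land close to the generating coefficients, illustrating why the conditional density of $y_t$ given $x_t$ and $s_t$, rather than the marginal emission density, enters the likelihood.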
The estimation of extensions of models \eqref{rsrm} and \eqref{rsrm2} is considered in the \pkg{hhsmm} package: the \code{mixlm\_mstep} and \code{additive\_reg\_mstep} functions provide the M-step estimation, while the \code{dmixlm} and \code{dnorm\_additive\_reg} functions define, respectively, a mixture of multivariate normals and a multivariate normal density as the conditional density of the responses given the covariates. The response variables are determined using the argument \code{resp.ind} in all of these functions; its default value is one, which means that the first column of the input matrix \code{x} is the univariate response variable. To illustrate the usage of these functions in the \pkg{hhsmm} package, we present the following simple simulated data example. First, we simulate data using the function \code{simulate.hhsmmspec} with the arguments \code{remission = rmixlm} and \code{covar = list(mean = 0, cov = 1)}. The argument \code{covar} is in fact an argument of the \code{rmixlm} function: it is either a function that generates the covariate vector or a list containing the mean vector and the variance-covariance matrix of the covariates, which are then generated from a multivariate normal distribution. The \code{rmixlm} function generates data from a mixture of linear models for each state. The list of parameters of this emission distribution consists of the following items: \begin{itemize} \item \code{intercept}, a list of the intercepts of the regression models for each state and each mixture component, \item \code{coefficient}, a list of the coefficient vectors/matrices of the regression models for each state and each mixture component, \item \code{csigma}, a list of the conditional variances/variance-covariance matrices of the response for each state and each mixture component, \item \code{mix.p}, a list of mixture component probabilities for each state.
\end{itemize} First, we define the model parameters and simulate the data as follows. \begin{lstlisting} |\bf \color{lightgray} R$>$| J <- 3 |\bf \color{lightgray} R$>$| initial <- c(1, 0, 0) |\bf \color{lightgray} R$>$| semi <- rep(FALSE, 3) |\bf \color{lightgray} R$>$| P <- matrix(c(0.5, 0.2, 0.3, 0.2, 0.5, 0.3, 0.1, 0.4, 0.5), |\bf \color{lightgray} +| nrow = J, byrow = TRUE) |\bf \color{lightgray} R$>$| par <- list(intercept = list(3, list(-10, -1), 14), |\bf \color{lightgray} +| coefficient = list(-1, list(1, 5), -7), |\bf \color{lightgray} +| csigma = list(1.2, list(2.3, 3.4), 1.1), |\bf \color{lightgray} +| mix.p = list(1, c(0.4, 0.6), 1)) |\bf \color{lightgray} R$>$| model <- hhsmmspec(init = initial, transition = P, |\bf \color{lightgray} +| parms.emis = par, dens.emis = dmixlm, semi = semi) |\bf \color{lightgray} R$>$| train <- simulate(model, nsim = c(20, 30, 42, 50), seed = 1234, |\bf \color{lightgray} +| remission = rmixlm, covar = list(mean = 0, cov = 1)) \end{lstlisting} Now, we obtain an initial clustering of the data using the \code{initial\_cluster} function, with the argument \code{regress = TRUE}, which is essential for estimating the parameters of regime-switching regression models. By setting \code{regress = TRUE} and \code{ltr = FALSE}, the \code{initial\_cluster} function uses an algorithm similar to the K-means algorithm of \cite{ll82}, fitting linear regression models instead of computing simple means in each iteration. When \code{regress = TRUE} and \code{ltr = TRUE} are used, an algorithm similar to that described in Section \ref{initsec} is used for left-to-right clustering, with regression coefficients in place of mean vectors, together with the associated Hotelling's T-squared statistic.
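The idea of the regression-based clustering can be sketched as follows: a Lloyd-type iteration that fits a regression line per cluster and reassigns points by squared residual. This is a simplified pure-Python illustration; the package's actual implementation may differ in seeding and convergence details.

```python
def fit_line(pts):
    # ordinary least squares for y = a + b * x on (x, y) pairs
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return (sy - b * sx) / n, b

def regression_kmeans(pts, k, iters=20):
    """Lloyd-type clustering with per-cluster linear fits (sketch)."""
    pts = sorted(pts)                       # seed with contiguous blocks
    size = (len(pts) + k - 1) // k
    lines = [fit_line(pts[i * size:(i + 1) * size]) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x, y in pts:
            # assign each point to the line with the smallest squared residual
            j = min(range(k), key=lambda j: (y - lines[j][0] - lines[j][1] * x) ** 2)
            clusters[j].append((x, y))
        # refit each line; keep the old one if a cluster degenerates
        lines = [fit_line(c) if len(c) > 1 else lines[j]
                 for j, c in enumerate(clusters)]
    return lines

# two noiseless regimes: y = 2x on x in [0,4], y = 20 - x on x in [5,9]
pts = [(x, 2 * x) for x in range(5)] + [(x, 20 - x) for x in range(5, 10)]
lines = regression_kmeans(pts, 2)   # recovered (intercept, slope) pairs
```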
\begin{lstlisting} |\bf \color{lightgray} R$>$| clus = initial_cluster(train = train, nstate = 3, |\bf \color{lightgray} +| nmix = 2, ltr = FALSE, final.absorb = FALSE, |\bf \color{lightgray} +| verbose = FALSE, regress = TRUE) \end{lstlisting} We initialize the model, using the \code{initialize\_model} function, with arguments \code{mstep = mixlm\_mstep}, which performs the M-step estimation of the EM algorithm in the regime-switching regression model, and \code{dens.emission = dmixlm}, which computes the probability density function of the mixture Gaussian linear model for a specified observation vector, state, and set of model parameters. Next, we fit the model, by using the \code{hhsmmfit} function. \begin{lstlisting} |\bf \color{lightgray} R$>$| initmodel = initialize_model(clus = clus, mstep = mixlm_mstep, |\bf \color{lightgray} +| dens.emission = dmixlm, sojourn = NULL, semi = rep(FALSE, 3), |\bf \color{lightgray} +| M = max(train$N), verbose = FALSE) |\bf \color{lightgray} R$>$| fit1 = hhsmmfit(x = train, model = initmodel, mstep = mixlm_mstep, |\bf \color{lightgray} +| M = max(train$N), par = list(lock.init = TRUE, verbose = FALSE)) \end{lstlisting} The clustered data and the estimated regime-switching regression lines are then plotted as follows. The resulting plot is shown in Figure \ref{regress}.
\begin{lstlisting} |\bf \color{lightgray} R$>$| plot(train$x[,1] ~ train$x[,2], col = train$s, pch = 16, |\bf \color{lightgray} +| xlab = "x", ylab = "y") |\bf \color{lightgray} R$>$| abline(fit1$model$parms.emission$intercept[[1]][[1]], |\bf \color{lightgray} +| fit1$model$parms.emission$coefficient[[1]][[1]], col = 1) |\bf \color{lightgray} R$>$| abline(fit1$model$parms.emission$intercept[[1]][[2]], |\bf \color{lightgray} +| fit1$model$parms.emission$coefficient[[1]][[2]], col = 1) |\bf \color{lightgray} R$>$| abline(fit1$model$parms.emission$intercept[[2]][[1]], |\bf \color{lightgray} +| fit1$model$parms.emission$coefficient[[2]][[1]], col = 2) |\bf \color{lightgray} R$>$| abline(fit1$model$parms.emission$intercept[[2]][[2]], |\bf \color{lightgray} +| fit1$model$parms.emission$coefficient[[2]][[2]], col = 2) |\bf \color{lightgray} R$>$| abline(fit1$model$parms.emission$intercept[[3]][[1]], |\bf \color{lightgray} +| fit1$model$parms.emission$coefficient[[3]][[1]], col = 3) |\bf \color{lightgray} R$>$| abline(fit1$model$parms.emission$intercept[[3]][[2]], |\bf \color{lightgray} +| fit1$model$parms.emission$coefficient[[3]][[2]], col = 3) \end{lstlisting} \begin{figure} \centerline{\includegraphics[scale=0.6]{regress.png}} \caption{The Markov regime-switching regression example}\label{regress} \end{figure} To fit the regime-switching additive regression model to the \code{train} data, we make an initial clustering of the data, using the \code{initial\_cluster} function, by letting \code{nstate = 3}, \code{nmix = NULL} and \code{regress = TRUE}. Using the argument \code{nmix = NULL} is essential in this case, since the parameters of the regime-switching additive regression model do not involve mixture components.
\begin{lstlisting} |\bf \color{lightgray} R$>$| clus = initial_cluster(train = train, nstate = 3, nmix = NULL, |\bf \color{lightgray} +| verbose = FALSE, regress = TRUE) \end{lstlisting} Now, we initialize the model using the function \code{initialize\_model}, with arguments \code{mstep = additive\_reg\_mstep} and \code{dens.emission = dnorm\_additive\_reg}. Note that here, we only consider a fully Markovian model and thus we let \code{semi = rep(FALSE, 3)} and \code{sojourn = NULL}, while one can also consider HSMM or HHSMM models by specifying different \code{semi} and \code{sojourn} arguments. \begin{lstlisting} |\bf \color{lightgray} R$>$| initmodel = initialize_model(clus = clus, mstep = additive_reg_mstep, |\bf \color{lightgray} +| dens.emission = dnorm_additive_reg, sojourn = NULL, semi = rep(FALSE, 3), |\bf \color{lightgray} +| M = max(train$N), verbose = FALSE) \end{lstlisting} Next, we fit the model by calling the \code{hhsmmfit} function. \begin{lstlisting} |\bf \color{lightgray} R$>$| fit1 = hhsmmfit(x = train, model = initmodel, mstep = additive_reg_mstep, |\bf \color{lightgray} +| M = max(train$N), par = list(verbose = FALSE)) \end{lstlisting} Again, we plot the data, to which the fitted curves will be added. The colors of the points show the true states, while the plotting characters show the predicted states. \begin{lstlisting} |\bf \color{lightgray} R$>$| plot(train$x[, 1] ~ train$x[, 2], col = train$s, pch = fit1$yhat, |\bf \color{lightgray} +| xlab = "x", ylab = "y") |\bf \color{lightgray} R$>$| text(0, 30, "colors are real states",col="red") |\bf \color{lightgray} R$>$| text(0, 28, "characters are predicted states") \end{lstlisting} To obtain the predicted values of the response variable, we use the \code{addreg\_hhsmm\_predict} function as follows.
\begin{lstlisting} |\bf \color{lightgray} R$>$| pred <- addreg_hhsmm_predict(fit1, train$x[, 2], 5) |\bf \color{lightgray} R$>$| yhat1 <- pred[[1]] |\bf \color{lightgray} R$>$| yhat2 <- pred[[2]] |\bf \color{lightgray} R$>$| yhat3 <- pred[[3]] \end{lstlisting} We add the predicted curves to the plot. The resulting plot is shown in Figure \ref{addregress}. \begin{lstlisting} |\bf \color{lightgray} R$>$| lines(yhat1[order(train$x[, 2])]~sort(train$x[, 2]),col = 2) |\bf \color{lightgray} R$>$| lines(yhat2[order(train$x[, 2])]~sort(train$x[, 2]),col = 1) |\bf \color{lightgray} R$>$| lines(yhat3[order(train$x[, 2])]~sort(train$x[, 2]),col = 3) \end{lstlisting} As one can see from Figure \ref{addregress}, the curves fit the data points well. \begin{figure} \centerline{\includegraphics[scale=0.6]{addregress.png}} \caption{The Markov regime-switching additive regression fit.}\label{addregress} \end{figure} \subsection{Auto-regressive HHSMM} A special case of the regime-switching regression models \eqref{rsrm} and \eqref{rsrm2} is the auto-regressive HHSMM model, in which we take $x_t = (y_{t-1},\ldots,y_{t-\ell})$, for a specified lag $\ell \geq 1$. Here, we present a simulated data example to illustrate this special case. The model specification of the auto-regressive HHSMM is similar to that of the regime-switching regression model, noting that the dimension of $x_t$ is always $\ell$ times the dimension of $y_t$. So, we specify the model as follows.
\begin{lstlisting} |\bf \color{lightgray} R$>$| J <- 2 |\bf \color{lightgray} R$>$| initial <- c(1, 0) |\bf \color{lightgray} R$>$| semi <- rep(FALSE, 2) |\bf \color{lightgray} R$>$| P <- matrix(c(0.2, 0.8, 0.1, 0.9), |\bf \color{lightgray} +| nrow = J, byrow = TRUE) |\bf \color{lightgray} R$>$| par <- list(intercept = list(0.5, -0.8), |\bf \color{lightgray} +| coefficient = list(-0.8, 0.7), |\bf \color{lightgray} +| csigma = list(0.5, 0.2), mix.p = list(1, 1)) |\bf \color{lightgray} R$>$| model <- hhsmmspec(init = initial, transition = P, |\bf \color{lightgray} +| parms.emis = par, dens.emis = dmixlm, semi = semi) \end{lstlisting} To simulate the data using the \code{simulate.hhsmm} function, we have to use the argument \code{emission.control = list(autoregress = TRUE)}. We then plot the simulated data using \code{plot.hhsmmdata} as follows. The resulting plot is shown in Figure \ref{areg}. \begin{lstlisting} |\bf \color{lightgray} R$>$| train <- simulate(model, nsim = c(50, 60, 84, 100), |\bf \color{lightgray} +| seed = 1234, emission.control = list(autoregress = TRUE)) |\bf \color{lightgray} R$>$| plot(train) \end{lstlisting} \begin{figure} \centerline{\includegraphics[scale=0.5]{autoregress.png}} \caption{The simulated train data set of the AR-HMM example}\label{areg} \end{figure} To prepare the data for fitting the regime-switching regression model, we should first construct the lagged data matrix using the function \code{lagdata} as follows. The parameter \code{lags} of this function, which specifies the number of lags to be computed, defaults to 1. \begin{lstlisting} |\bf \color{lightgray} R$>$| train2 = lagdata(train) \end{lstlisting} The resulting lagged data set is then used to obtain the initial clustering, using the arguments \code{regress = TRUE} and \code{resp.ind = 2} in the \code{initial\_cluster} function as follows.
\begin{lstlisting} |\bf \color{lightgray} R$>$| clus = initial_cluster(train = train2, nstate = 2, nmix = 1, |\bf \color{lightgray} +| ltr = FALSE, final.absorb = FALSE, verbose = FALSE, |\bf \color{lightgray} +| regress = TRUE, resp.ind = 2) \end{lstlisting} Now, we initialize and fit the model as before. Note that we should use the argument \code{resp.ind = 2} in place of \code{...} in both functions (see the manual of the \pkg{hhsmm} package, \url{https://cran.r-project.org/web/packages/hhsmm/hhsmm.pdf}). \begin{lstlisting} |\bf \color{lightgray} R$>$| initmodel = initialize_model(clus = clus, mstep = mixlm_mstep, |\bf \color{lightgray} +| dens.emission = dmixlm, sojourn = NULL, semi=rep(FALSE, 2), |\bf \color{lightgray} +| M = max(train$N), verbose = FALSE, resp.ind = 2) |\bf \color{lightgray} R$>$| fit1 = hhsmmfit(x = train2, model = initmodel, mstep = mixlm_mstep, |\bf \color{lightgray} +| resp.ind = 2, M = max(train$N), par = list(verbose = FALSE)) \end{lstlisting} To test the performance of the fitted model for prediction of the future time series, we need to simulate a test data set and then right-trim it, using the \code{train\_test\_split} function with \code{train.ratio = 1}, \code{trim = TRUE} and \code{trim.ratio = 0.9}, as follows. As one can see, the lengths of the sequences in \code{trimmed\_test\$trimmed} are $90\%$ of the associated lengths in the \code{test} data set.
\begin{lstlisting} |\bf \color{lightgray} R$>$| test <- simulate(model, nsim = c(100, 95), seed = 1234, |\bf \color{lightgray} +| emission.control = list(autoregress = TRUE)) |\bf \color{lightgray} R$>$| trimmed_test = train_test_split(test, train.ratio = 1, |\bf \color{lightgray} +| trim = TRUE, trim.ratio = 0.9) |\bf \color{lightgray} R$>$| test$N [1] 100 95 |\bf \color{lightgray} R$>$| trimmed_test$trimmed$N [1] 90 85 |\bf \color{lightgray} R$>$| trimmed = trimmed_test$trimmed |\bf \color{lightgray} R$>$| tc = trimmed_test$trimmed.count \end{lstlisting} The option \code{train.ratio = 1} means that we do not wish to split the test samples into new train and test subsets and we only need to right-trim the sequences. Now, we have both the trimmed sequences in the \code{trimmed} object and the complete test samples in the \code{test} data set, so that we can compare the true and predicted states. The object \code{tc} contains the number of trimmed items of each sequence, which are the items to be predicted. Now, we use the estimated parameters of the AR-HMM to predict the future values of the sequence. To do this, we predict the state sequence of the lagged trimmed test data set using the \code{predict.hhsmm} function and then we obtain the linear predictors for the future values as follows.
\begin{lstlisting} |\bf \color{lightgray} R$>$| lag_trimmed = lagdata(trimmed) |\bf \color{lightgray} R$>$| n <- length(tc) |\bf \color{lightgray} R$>$| Ncl <- cumsum(c(0, lag_trimmed$N)) |\bf \color{lightgray} R$>$| Nc2 <- cumsum(c(0, test$N)) |\bf \color{lightgray} R$>$| Nc <- cumsum(c(0, trimmed$N)) |\bf \color{lightgray} R$>$| par(mfrow = c(2,1)) |\bf \color{lightgray} R$>$| for (i in 1:n) { |\bf \color{lightgray} +| new_lag_data = list(x = lag_trimmed$x[(Ncl[i] + 1):Ncl[i + 1], ], |\bf \color{lightgray} +| N = lag_trimmed$N[i]) |\bf \color{lightgray} +| new_data = list(x = as.matrix(trimmed$x[(Nc[i] + 1):Nc[i + 1], ]), |\bf \color{lightgray} +| N = trimmed$N[i]) |\bf \color{lightgray} +| yhat1 <- predict(fit1, new_lag_data, future = tc[i]) |\bf \color{lightgray} +| fstates <- yhat1$s[((test$N[i] - tc[i])):(test$N[i] - 1)] |\bf \color{lightgray} +| intercept = coefficients = csigma = c() |\bf \color{lightgray} +| xcurrent = as.vector(new_data$x[new_data$N, ]) |\bf \color{lightgray} +| pred <- xcurrent |\bf \color{lightgray} +| for(j in 1:tc[i]){ |\bf \color{lightgray} +| intercept[j] <- |\bf \color{lightgray} +| as.vector(fit1$model$parms.emission$intercept[[fstates[j]]]) |\bf \color{lightgray} +| coefficients[j] <- |\bf \color{lightgray} +| as.vector(fit1$model$parms.emission$coefficients[[fstates[j]]]) |\bf \color{lightgray} +| predicted <- intercept[j] + xcurrent * coefficients[j] |\bf \color{lightgray} +| xcurrent <- predicted |\bf \color{lightgray} +| pred <- c(pred, predicted) |\bf \color{lightgray} +| } |\bf \color{lightgray} +| tr_time = ((test$N[i] - tc[i] - 1)):(test$N[i] - 1) + 1 |\bf \color{lightgray} +| plot(test$x[(Nc2[i] + 1):Nc2[i + 1], ], type = "l", xlab = "time", |\bf \color{lightgray} +| ylab = "x", main = paste("sequence", i)) |\bf \color{lightgray} +| lines(pred ~ tr_time, lwd = 3, col = 2) |\bf \color{lightgray} +| } \end{lstlisting} The resulting plot is presented in Figure \ref{pred}. The colored lines are the predicted values. 
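The inner loop above iterates the one-step regime-switching AR(1) map $x_{t+1} = a_{s} + b_{s}x_t$ along the predicted state path. As a minimal illustration, the following Python sketch condenses this iteration, using the same per-state intercepts and coefficients as the simulated example above.

```python
def forecast_arhmm(x_last, states, intercept, slope):
    """Iterate x_{t+1} = a_s + b_s * x_t along a predicted state path."""
    path = [x_last]
    for s in states:
        path.append(intercept[s] + slope[s] * path[-1])
    return path

# per-state parameters, as in the simulated AR-HMM example above
intercept = {1: 0.5, 2: -0.8}
slope = {1: -0.8, 2: 0.7}
# forecast three steps ahead along a hypothetical predicted state path
pred = forecast_arhmm(1.0, [2, 2, 1], intercept, slope)
```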
\begin{figure} \centerline{\includegraphics[scale=0.5]{pred.png}} \caption{Trimmed test data set and the predicted values}\label{pred} \end{figure} Next, we apply the regime-switching additive regression model to fit the AR-HMM. Again, the only difference in the initial clustering using the \code{initial\_cluster} function is setting \code{nmix = NULL}. \begin{lstlisting} |\bf \color{lightgray} R$>$| clus = initial_cluster(train = train2, nstate = 2, nmix = NULL, |\bf \color{lightgray} +| verbose = FALSE, regress = TRUE, resp.ind = 2) \end{lstlisting} We initialize the model using the function \code{initialize\_model} by setting \code{mstep = additive\_reg\_mstep} and \code{dens.emission = dnorm\_additive\_reg}. The difference here is that we pass the parameters of these functions to the \code{initialize\_model} function through the argument \code{control}. In the following, we pass the response indicator and the degree of the B-splines by setting the argument \code{control = list(resp.ind = 2, K = 7)}. \begin{lstlisting} |\bf \color{lightgray} R$>$| initmodel = initialize_model(clus = clus, mstep = |\bf \color{lightgray} +| additive_reg_mstep, dens.emission = dnorm_additive_reg, |\bf \color{lightgray} +| sojourn = NULL, semi = rep(FALSE, 2), M = max(train$N), |\bf \color{lightgray} +| verbose = FALSE, control = list(resp.ind = 2, K = 7)) \end{lstlisting} Now, we fit the model by calling the \code{hhsmmfit} function, as follows. \begin{lstlisting} |\bf \color{lightgray} R$>$| fit1 = hhsmmfit(x = train2, model = initmodel, |\bf \color{lightgray} +| mstep = additive_reg_mstep, M = max(train$N), |\bf \color{lightgray} +| control = list(resp.ind = 2, K = 7), |\bf \color{lightgray} +| par = list(verbose = FALSE)) \end{lstlisting} Finally, we produce the prediction plots, using the following code.
\begin{lstlisting} |\bf \color{lightgray} R$>$| par(mfrow = c(2,1)) |\bf \color{lightgray} R$>$| for (i in 1:n) { |\bf \color{lightgray} +| new_lag_data = list(x = lag_trimmed$x[(Ncl[i] + 1):Ncl[i + 1], ], |\bf \color{lightgray} +| N = lag_trimmed$N[i]) |\bf \color{lightgray} +| new_data = list(x = as.matrix(trimmed$x[(Nc[i] + 1):Nc[i + 1], ]), |\bf \color{lightgray} +| N = trimmed$N[i]) |\bf \color{lightgray} +| yhat1 <- predict(fit1, new_lag_data, future = tc[i]) |\bf \color{lightgray} +| fstates <- yhat1$s[((test$N[i] - tc[i])):(test$N[i] - 1)] |\bf \color{lightgray} +| tr_time = ((test$N[i] - tc[i] - 1)):(test$N[i] - 1) + 1 |\bf \color{lightgray} +| intercept = c() |\bf \color{lightgray} +| coefficients = list() |\bf \color{lightgray} +| xcurrent = as.vector(new_data$x[new_data$N, ]) |\bf \color{lightgray} +| pred <- xcurrent |\bf \color{lightgray} +| for(j in 1:tc[i]){ |\bf \color{lightgray} +| preds <- addreg_hhsmm_predict(fit1, c(xcurrent,train$x), 7) |\bf \color{lightgray} +| predicted <- preds[[fstates[j]]][1] |\bf \color{lightgray} +| xcurrent <- predicted |\bf \color{lightgray} +| pred <- c(pred, predicted) |\bf \color{lightgray} +| } |\bf \color{lightgray} +| plot(test$x[(Nc2[i] + 1):Nc2[i + 1], ], type = "l", xlab = "time", |\bf \color{lightgray} +| ylab = "x", main = paste("sequence", i)) |\bf \color{lightgray} +| lines(pred ~ tr_time, lwd = 3, col = 2) |\bf \color{lightgray} +| } \end{lstlisting} The resulting plots are presented in Figure \ref{pred2}. The colored lines are the predicted values. By comparing Figures \ref{pred} and \ref{pred2}, one can see that the regime-switching additive regression model results in more accurate predictions, especially for the second sequence of the \code{test} data set.
\begin{figure} \centerline{\includegraphics[scale=0.5]{pred2.png}} \caption{The predicted values of the AR-HMM, using the regime-switching additive regression}\label{pred2} \end{figure} }} {\subsection{Prediction of the future state sequence} To predict the future state sequence at times $T+1,\ldots,T+h$, we first use the Viterbi (smoothing) algorithm (see the Appendix) to estimate the probabilities of the most likely path $\alpha_j(t)$ ($L_j(t)$) for $j=1,\ldots,J$ and $t=0,\ldots,\tau-1$, as well as the current most likely state $\hat{s}_t^* =\arg\max_{1\leq j\leq J} \alpha_j(t)$ ($\hat{s}_t^* =\arg\max_{1\leq j\leq J} L_j(t)$). Also, we might compute the probabilities \begin{equation}\label{deltabar} \bar{\delta}_t(j)=\frac{\alpha_{j}(t)}{\sum_{k=1}^J\alpha_{k}(t)}\quad (\bar{\delta}_t(j)=\frac{L_{j}(t)}{\sum_{k=1}^JL_{k}(t)}). \end{equation} Next, at each of the $h$ future time points, we compute the probability of the next state by multiplying the transposed transition matrix by the current state probability vector as follows \begin{equation}\label{52__eq2} \bar{\delta}_{next} = \Big(P\Big)^T\bar{\delta}_{current} \end{equation} Then, the next future state is predicted as \begin{equation}\label{53__eq2} \hat{s}_{next}^*=\arg\max_{1\leq j \leq J}\bar{\delta}_{next}(j) \end{equation} This process continues until the required time $T+h$. The prediction of the future state sequence is performed using the \code{predict.hhsmm} function in the \pkg{hhsmm} package, by specifying the argument \code{future}, which is equal to zero by default. To examine this ability, we study a simple example. First, we define a simple model, just like the model in Section 3, and simulate train and test samples from this model, as follows.
\begin{lstlisting} |\bf \color{lightgray} R$>$| J <- 3 |\bf \color{lightgray} R$>$| initial <- c(1, 0, 0) |\bf \color{lightgray} R$>$| semi <- c(FALSE, TRUE, FALSE) |\bf \color{lightgray} R$>$| P <- matrix(c(0.8, 0.1, 0.1, 0.5, 0, 0.5, 0.1, 0.2, 0.7), |\bf \color{lightgray} +| nrow = J, byrow = TRUE) |\bf \color{lightgray} R$>$| par <- list(mu = list(list(7, 8),list(10, 9, 11),list(12, 14)), |\bf \color{lightgray} +| sigma = list(list(3.8, 4.9),list(4.3, 4.2, 5.4),list(4.5, 6.1)), |\bf \color{lightgray} +| mix.p = list(c(0.3, 0.7),c(0.2, 0.3, 0.5),c(0.5, 0.5))) |\bf \color{lightgray} R$>$| sojourn <- list(shape = c(0, 3, 0), scale = c(0, 10, 0), |\bf \color{lightgray} +| type = "gamma") |\bf \color{lightgray} R$>$| model <- hhsmmspec(init = initial, transition = P, |\bf \color{lightgray} +| parms.emis = par, dens.emis = dmixmvnorm, |\bf \color{lightgray} +| sojourn = sojourn, semi = semi) |\bf \color{lightgray} R$>$| train <- simulate(model, nsim = c(50, 40, 30, 70), seed = 1234, |\bf \color{lightgray} +| remission = rmixmvnorm) |\bf \color{lightgray} R$>$| test <- simulate(model, nsim = c(80, 45, 20, 35), seed = 1234, |\bf \color{lightgray} +| remission = rmixmvnorm) \end{lstlisting} To examine the prediction performance of the model, we split the test sample from the right, using the \code{train\_test\_split} function and a trim ratio equal to 0.9, as follows. \begin{lstlisting} |\bf \color{lightgray} R$>$| tt = train_test_split(test, train.ratio = 1, trim = TRUE, |\bf \color{lightgray} +| trim.ratio = 0.9) |\bf \color{lightgray} R$>$| trimmed = tt$trimmed |\bf \color{lightgray} R$>$| tc = tt$trimmed.count \end{lstlisting} As in Section \ref{s3}, we initialize and fit an HHSMM model to the \code{train} data set, as follows.
\begin{lstlisting} |\bf \color{lightgray} R$>$| clus = initial_cluster(train, nstate=3, nmix=c(2, 2, 2), |\bf \color{lightgray} +| ltr = FALSE, final.absorb = FALSE, verbose = FALSE) |\bf \color{lightgray} R$>$| semi <- c(FALSE, TRUE, FALSE) |\bf \color{lightgray} R$>$| initmodel1 = initialize_model(clus = clus, sojourn = "gamma", |\bf \color{lightgray} +| M = max(train$N), semi = semi, verbose = FALSE) |\bf \color{lightgray} R$>$| fit1 = hhsmmfit(x = train, model = initmodel1, M = max(train$N), |\bf \color{lightgray} +| par = list(verbose = FALSE)) \end{lstlisting} Now, we predict the future states of each sequence of the test data set, separately, using the option \code{future = tc[i]}. Then, we print the homogeneity of real and predicted state sequences, by using the \code{homogeneity} function, as follows. \begin{lstlisting} |\bf \color{lightgray} R$>$| n <- length(tc) |\bf \color{lightgray} R$>$| Nc <- cumsum(c(0, trimmed$N)) |\bf \color{lightgray} R$>$| Nc2 <- cumsum(c(0, test$N)) |\bf \color{lightgray} R$>$| for(i in 1:n){ |\bf \color{lightgray} +| newdata = list(x = trimmed$x[(Nc[i] + 1):Nc[i + 1], ], |\bf \color{lightgray} +| N = trimmed$N[i]) |\bf \color{lightgray} +| yhat1 <- predict(fit1, newdata, future = tc[i]) |\bf \color{lightgray} +| yhat2 <- predict(fit1, newdata, future = 0) |\bf \color{lightgray} +| cat("homogeneity with future sequence") |\bf \color{lightgray} +| print(homogeneity(yhat1$s , test$s[(Nc2[i] + 1):Nc2[i + 1]])) |\bf \color{lightgray} +| cat("homogeneity without future sequence") |\bf \color{lightgray} +| print(homogeneity(yhat2$s , trimmed$s[(Nc[i] + 1):Nc[i + 1]])) |\bf \color{lightgray} +| } homogeneity with future sequence[1] 0.8965517 0.8066667 0.6097561 homogeneity without future sequence[1] 0.9079755 0.8066667 0.6097561 homogeneity with future sequence[1] 0.8205128 0.9000000 0.0000000 homogeneity without future sequence[1] 0.9140625 0.9000000 0.0000000 homogeneity with future sequence[1] 1 1 1 homogeneity without future 
sequence[1] 1 1 1 homogeneity with future sequence[1] 0.9333333 0.8306452 0.6578947 homogeneity without future sequence[1] 0.8750000 0.8306452 0.6578947 \end{lstlisting} As one can see from the above homogeneities, the predictions are quite good. } \subsection{Residual useful lifetime (RUL) estimation, for reliability applications}\label{s4} The residual useful lifetime (RUL) is defined as the remaining lifetime of a system at a specified time point. If we analyze a system with a hidden Markov or semi-Markov model for reliability purposes, a suitable choice would be a left-to-right model, with the final state as the failure state. The RUL of such a model is defined at time $t$ as \begin{eqnarray}\label{47__eq} \mbox{RUL}_t = \tilde{D} : S_{t+\tilde{D}} = J,\; S_{t+\tilde{D}-1} = i;\hspace{1cm}1\leq i < J. \end{eqnarray} We describe a method of RUL estimation \citep[see][]{cea15}, which is used in the \pkg{hhsmm} package. First, we should compute the probabilities in \eqref{deltabar}, using the Viterbi or smoothing algorithm. Two different methods are used to obtain point and interval estimates of the RUL in the \pkg{hhsmm} package. The first method (the option \code{confidence = "mean"} in the \code{predict} function of the \pkg{hhsmm} package) is based on the method described in \cite{cea15}. This method computes the average remaining time in the current state as follows \begin{eqnarray}\label{49__eq} \tilde{d}_{avg}(\hat{s}_t^*) = \sum_{j=1}^{J}\Big(\mu_{d_j}-\hat{d}_t(j)\Big)\bar{\delta}_t(j), \end{eqnarray} where $\mu_{d_j}=\sum_{u=1}^{M_j} u\, d_j(u)$ is the expected value of the duration variable in state $j$, and $\hat{d}_t(j)$ is the estimated state duration, computed as follows \citep{a04} $$\hat{d}_t(j) = \hat{d}_{t-1}(j) \bar{\delta}_t(j), \quad t = 2, \ldots , M_j, \quad \hat{d}_{1}(j)=1, \; j = 1,\ldots, J.
$$ In order to obtain a confidence interval for the RUL, \cite{cea15} also computed the standard deviation of the duration variable in state $j$, $\sigma_{d_j}$, and \begin{equation}\label{50__eq} \tilde{d}_{low}(\hat{s}_t^*) = \sum_{j=1}^{J}\Big(\mu_{d_j}-\hat{d}_t(j)-\sigma_{d_j}\Big)\bar{\delta}_t(j), \end{equation} \begin{equation}\label{51__eq} \tilde{d}_{up}(\hat{s}_t^*) =\sum_{j=1}^{J}\Big(\mu_{d_j}-\hat{d}_t(j)+\sigma_{d_j}\Big)\bar{\delta}_t(j) \end{equation} However, to obtain a confidence interval of the specified level $1-\gamma \in (0,1)$, we have corrected equations \eqref{50__eq} and \eqref{51__eq} in the \pkg{hhsmm} package as follows \begin{equation}\label{502__eq} \tilde{d}_{low}(\hat{s}_t^*) = \sum_{j=1}^{J}\Big(\mu_{d_j}-\hat{d}_t(j)-z_{1-\gamma/2}\sigma_{d_j}\Big)\bar{\delta}_t(j), \end{equation} \begin{equation}\label{512__eq} \tilde{d}_{up}(\hat{s}_t^*) =\sum_{j=1}^{J}\Big(\mu_{d_j}-\hat{d}_t(j)+z_{1-\gamma/2}\sigma_{d_j}\Big)\bar{\delta}_t(j), \end{equation} where $z_{1-\gamma/2}$ is the ${1-\gamma/2}$ quantile of the standard normal distribution. 
The probability of the next state is obtained by multiplying the transposed transition matrix by the current state probability vector as follows \begin{equation}\label{52__eq} \bar{\delta}_{next} = \Big[\bar{\delta}_{t+\tilde{d}}(j)\Big]_{1\leq j\leq J}=\Big(P\Big)^T\bar{\delta}_t \end{equation} while the maximum a posteriori estimate of the next state is calculated as \begin{equation}\label{53__eq} \hat{s}_{next}^*=\hat{s}_{t+\tilde{d}}^*=\arg\max_{1\leq j \leq J}\bar{\delta}_{t+\tilde{d}}(j) \end{equation} If $\hat{s}_{t+\tilde{d}}^*$ coincides with the failure state $J$, the failure will happen after the remaining time at the current state is over, and the average estimate of the failure time is $\tilde{D}_{avg}=\tilde{d}_{avg}(\hat{s}_t^*)$, with the lower and upper bounds $\tilde{D}_{low}=\tilde{d}_{low}(\hat{s}_t^*)$ and $\tilde{D}_{up}=\tilde{d}_{up}(\hat{s}_t^*)$, respectively; otherwise, the sojourn time of the next state is calculated as \begin{equation}\label{54__eq} \tilde{d}_{avg}\Big(\hat{S}_{t+\tilde{d}}^*\Big)=\sum_{j=1}^{J}\mu_{d_j}\bar{\delta}_{t+\tilde{d}}(j) \end{equation} \begin{equation}\label{55__eq} \tilde{d}_{low}\Big(\hat{S}_{t+\tilde{d}}^*\Big)=\sum_{j=1}^{J}\Big(\mu_{d_j}-z_{1-\gamma/2}\sigma_{d_j}\Big)\bar{\delta}_{t+\tilde{d}}(j) \end{equation} \begin{equation}\label{56__eq} \tilde{d}_{up}\Big(\hat{S}_{t+\tilde{d}}^*\Big)=\sum_{j=1}^{J}\Big(\mu_{d_j}+z_{1-\gamma/2}\sigma_{d_j}\Big)\bar{\delta}_{t+\tilde{d}}(j) \end{equation} This procedure is iterated until the failure state is encountered in the prediction of the next state.
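The iteration described above can be summarized in the following Python sketch of the mean-RUL recursion, run here on a hypothetical left-to-right chain; the confidence bounds and the \code{confidence = "max"} variant are omitted for brevity.

```python
def rul_mean(P, delta, mu_d, d_hat, failure, max_iter=1000):
    """Mean-RUL recursion: accumulate the expected remaining time in the
    current state, then propagate the state probabilities with the
    transposed transition matrix until the failure state becomes the most
    likely next state (illustrative sketch)."""
    J = len(delta)
    # remaining time in the current state: mean sojourn minus elapsed duration
    total = sum((mu_d[j] - d_hat[j]) * delta[j] for j in range(J))
    for _ in range(max_iter):
        # delta_next = P^T delta
        delta = [sum(P[i][j] * delta[i] for i in range(J)) for j in range(J)]
        if max(range(J), key=lambda j: delta[j]) == failure:
            return total          # failure follows the accumulated time
        total += sum(mu_d[j] * delta[j] for j in range(J))
    return total

# hypothetical 3-state left-to-right chain; state 2 is the failure state
P = [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]]
rul = rul_mean(P, [1.0, 0.0, 0.0], mu_d=[2.0, 3.0, 4.0],
               d_hat=[1.0, 0.0, 0.0], failure=2)
```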
The estimate of the RUL is then calculated by summing all the aforementioned estimated remaining times, as follows \begin{eqnarray}\label{57__eq} \tilde{D}_{avg} = \sum\tilde{d}_{avg}, \quad \tilde{D}_{low} = \sum\tilde{d}_{low}, \quad \tilde{D}_{up} = \sum\tilde{d}_{up} \end{eqnarray} In the second method (the option \code{confidence = "max"} in the \code{predict} function of the \pkg{hhsmm} package), we relax the normality assumption and use the mode and quantiles of the sojourn time distribution, by replacing the mean $\mu_{d_j}$ with the mode $m_{d_j} = \arg\max_{1\leq u \leq M_j} d_j(u)$ and replacing $-z_{1-\gamma/2}\sigma_{d_j}$ with $\min\{\nu;\; \sum_{u=1}^\nu d_j(u) \leq \gamma/2\}$ and $+z_{1-\gamma/2}\sigma_{d_j}$ with $M_j - \min\{\nu;\; \sum_{u=\nu}^{M_j} d_j(u) \leq \gamma/2\}$ in equations \eqref{50__eq}, \eqref{502__eq}, \eqref{512__eq}, \eqref{54__eq}, \eqref{55__eq} and \eqref{56__eq}. \subsection{Continuous time sojourn distributions}\label{cts} Since the measurements of the observations are always performed on discrete time units (assumed to be positive integers), the sojourn time probabilities for a sojourn distribution with probability density function $g_j$ in state $j$ are obtained as follows \begin{eqnarray}\label{cont} d_j(u) &=& P(S_{t+u+1} \neq j,\;S_{t+u-\nu} = j, \;\nu = 0, \ldots,u-2|S_{t+1} = j ,\; S_t \neq j)\nonumber\\ &=& \int_{u-1}^{u} g_j(y) \; dy \big/ \int_{0}^{M_j} g_j(y) \; dy, \quad j=1,\ldots,J,\quad u=1,\ldots,M_j.
\end{eqnarray} { Almost all flexible continuous distributions with positive domain, which are used as lifetime distributions, including the gamma, Weibull, log-normal, Birnbaum–Saunders, inverse-gamma, Fr\'{e}chet, Gumbel and many other distributions, might be used as the continuous-time sojourn distribution.} Some of the continuous sojourn time distributions included in the \pkg{hhsmm} package are as follows: \begin{itemize} \item \textbf{Gamma sojourn}: The gamma sojourn time density functions are $$g_j(y) = \frac{y^{\alpha_j-1}e^{-\frac{y}{\beta_j}}}{\beta_j^{\alpha_j}\Gamma(\alpha_j)}, \quad j=1,\ldots,J,$$ which result in $$d_j(u) = \int_{u-1}^{u} y^{\alpha_j-1}e^{-\frac{y}{\beta_j}} \; dy \big/ \int_{0}^{M_j} y^{\alpha_j-1}e^{-\frac{y}{\beta_j}} \; dy $$ \item \textbf{Weibull sojourn}: The Weibull sojourn time density functions are $$g_j(y) = \frac{\alpha_j}{\beta_j} \left(\frac{y}{\beta_j}\right)^{\alpha_j-1} \exp\left\{- \left(\frac{y}{\beta_j}\right)^{\alpha_j}\right\}, \quad j=1,\ldots,J,$$ which result in $$d_j(u) = \int_{u-1}^{u} y^{\alpha_j-1} \exp\left\{- \left(\frac{y}{\beta_j}\right)^{\alpha_j}\right\} \; dy \left./ \int_{0}^{M_j} y^{\alpha_j-1} \exp\left\{- \left(\frac{y}{\beta_j}\right)^{\alpha_j}\right\} \; dy\right. $$ \item \textbf{log-normal sojourn}: The log-normal sojourn time density functions are $$g_j(y) = \frac{1}{\sqrt{2\pi}\sigma_j} \exp\left\{\frac{-1}{2\sigma_j^2}(\log y - \mu_j)^2\right\}, \quad j=1,\ldots,J,$$ which result in $$d_j(u) = \int_{u-1}^{u} \exp\left\{\frac{-1}{2\sigma_j^2}(\log y - \mu_j)^2\right\} \; dy \left./ \int_{0}^{M_j} \exp\left\{\frac{-1}{2\sigma_j^2}(\log y - \mu_j)^2\right\} \; dy\right.
$$ \end{itemize} \subsection{Other features of the package} There are some other features included in the \pkg{hhsmm} package, which are listed below: \begin{itemize} \item \code{dmixmvnorm}: Computes the probability density function of a mixture of multivariate normals for a specified observation vector, a specified state, and a specified model's parameters \item \code{mixmvnorm\_mstep}: The M-step function of the EM algorithm for the mixture of multivariate normals as the emission distribution, using the observation matrix and the estimated weight vectors \item \code{rmixmvnorm}: Generates a vector of observations from a mixture multivariate normal distribution in a specified state, using the parameters of a specified model \item \code{train\_test\_split}: Splits the data sets into train and test subsets, with an option to right-trim the sequences \item \code{lagdata}: Creates lagged time series of a data set \item \code{score}: Computes the score (log-likelihood) of new observations using a trained model \item \code{homogeneity}: Computes the maximum homogeneity of two state sequences \item \code{hhsmmdata}: Converts a matrix of data and its associated vector of sequence lengths to a data list of class \code{"hhsmmdata"} \end{itemize} { \section{Real data analysis}\label{rdas} To examine the performance of the \pkg{hhsmm} package, we consider the analysis of two real data sets. The first data set is the Spain energy market data set from the \pkg{MSwM} package and the second one is the Commercial Modular Aero-Propulsion System Simulation (CMAPSS) data set from the \pkg{CMAPSS} data package. \subsection{Spain energy market data set} The Spain energy market data set \citep{fea09} contains the price of energy in Spain along with other economic data. The data are daily, from January 1, 2002 to October 31, 2008, covering working days (Monday to Friday). 
This data set is available in the \pkg{MSwM} package (\url{https://cran.r-project.org/package=MSwM}), in a data frame named \code{energy}, and contains 1785 observations on 7 variables: \code{Price} (Average price of energy in Cent/kWh), \code{Oil} (Oil price in Euro/barrel), \code{Gas} (Gas price in Euro/MWh), \code{Coal} (Coal price in Euro/T), \code{EurDol} (Dollar-Euro exchange rate in USD/Euro), \code{Ibex35} (Ibex 35 index divided by one thousand) and \code{Demand} (Daily demand of energy in GWh). This data set is also analyzed in \cite{fea09}, using a Markov switching regression model. The objective of the analysis is to predict the response variable \code{Price}, based on the information in the other variables (covariates). In order to analyze the \code{energy} data set, we load it from the \pkg{MSwM} package and transform it into an object of class \code{"hhsmmdata"} using the \code{hhsmmdata} function as follows. \begin{lstlisting} |\bf \color{lightgray} R$>$| library(MSwM) |\bf \color{lightgray} R$>$| data(energy) |\bf \color{lightgray} R$>$| energy.hmm = hhsmmdata(energy) |\bf \color{lightgray} R$>$| p = ncol(energy.hmm$x) - 1 \end{lstlisting} We consider a two-state model. Here, we consider a fully Markovian model and thus let \code{semi <- rep(FALSE, J)}. Although an optimal value of $K$ might be obtained by minimizing the AIC, BIC or even a cross-validation error, we set the degree of the B-splines to $K=20$ for this analysis, for brevity. \begin{lstlisting} |\bf \color{lightgray} R$>$| K <- 20 |\bf \color{lightgray} R$>$| J <- 2 |\bf \color{lightgray} R$>$| initial <- rep(1/J, J) |\bf \color{lightgray} R$>$| semi <- rep(FALSE, J) \end{lstlisting} First, we obtain an initial clustering of the data set. Again, we point out that we should consider \code{nmix = NULL} and \code{regress = TRUE} in the \code{initial\_cluster} function. 
\begin{lstlisting} |\bf \color{lightgray} R$>$| clus = initial_cluster(train = energy.hmm, nstate = J, |\bf \color{lightgray} +| nmix = NULL, ltr = FALSE, final.absorb = FALSE, |\bf \color{lightgray} +| verbose = TRUE, regress = TRUE) \end{lstlisting} To initialize the model, we use the \code{initialize\_model} function, with arguments \code{mstep = additive\_reg\_mstep}, \code{dens.emission = dnorm\_additive\_reg} and \code{control = list(K = K)}. \begin{lstlisting} |\bf \color{lightgray} R$>$| initmodel = initialize_model(clus = clus, mstep = |\bf \color{lightgray} +| additive_reg_mstep, dens.emission = dnorm_additive_reg, |\bf \color{lightgray} +| sojourn = NULL, semi = semi, M = max(energy.hmm$N), |\bf \color{lightgray} +| verbose = FALSE, control = list(K = K)) \end{lstlisting} Next, we fit the model by calling the \code{hhsmmfit} function as follows. \begin{lstlisting} |\bf \color{lightgray} R$>$| fit1 = hhsmmfit(x = energy.hmm, model = initmodel, |\bf \color{lightgray} +| mstep = additive_reg_mstep, M = max(energy.hmm$N), |\bf \color{lightgray} +| control = list(K = K)) \end{lstlisting} Now, we can obtain the response predictions using the \code{addreg\_hhsmm\_predict} function as follows. \begin{lstlisting} |\bf \color{lightgray} R$>$| pred2 <- addreg_hhsmm_predict(fit1, energy.hmm$x[, 2:(p+1)], K) \end{lstlisting} To visualize the results, we first add the predicted states to an \code{"hhsmmdata"} object constructed from the response variable. Then, we plot it using the \code{plot.hhsmmspec} function, as follows. \begin{lstlisting} |\bf \color{lightgray} R$>$| s = fit1$yhat |\bf \color{lightgray} R$>$| newdata = hhsmmdata(x = as.matrix(energy.hmm$x[, 1]), |\bf \color{lightgray} +| N = energy.hmm$N) |\bf \color{lightgray} R$>$| newdata$s = s |\bf \color{lightgray} R$>$| plot(newdata) \end{lstlisting} The resulting plot is presented in Figure \ref{spain1}. The predicted states are shown by two different colors on the horizontal axis. 
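Conceptually, the regime-switching prediction at each time point is simply the fitted value of the curve belonging to the decoded state at that time. A minimal sketch of this selection step in plain Python, with hypothetical toy curves and a toy decoded state sequence (illustrative only, not \pkg{hhsmm} code):

```python
# Toy sketch (not hhsmm code): combine per-state regression fits
# into one prediction series by selecting, at each time, the fit
# of the decoded state. Curves and states below are hypothetical.

def switching_predict(x, states, curves):
    """Pick curves[s](x_t) for each time t with decoded state s."""
    return [curves[s](xt) for xt, s in zip(x, states)]

# Two hypothetical state-specific fits (e.g., price vs. time):
curves = {1: lambda t: 2.0 + 0.5 * t,    # state 1 regime
          2: lambda t: 6.0 - 0.25 * t}   # state 2 regime

x = [0.0, 1.0, 2.0, 3.0]
states = [1, 1, 2, 2]   # toy decoded state sequence (like fit1$yhat)
print(switching_predict(x, states, curves))  # -> [2.0, 2.5, 5.5, 5.25]
```

In the package workflow, the decoded states \code{fit1\$yhat} play the role of \code{states} here.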
\begin{figure} \centerline{\includegraphics[width = 15cm]{spain1.png}} \caption{Spain energy data and its estimated states.}\label{spain1} \end{figure} In the second plot, we want to show the separate predictions for the two states along with the true response values. To do this, we use the following lines of code. \begin{lstlisting} |\bf \color{lightgray} R$>$| col.states = fit1$yhat |\bf \color{lightgray} R$>$| col.states[col.states == 1] = 'goldenrod2' |\bf \color{lightgray} R$>$| col.states[col.states == 2] = 'green4' |\bf \color{lightgray} R$>$| plot(energy.hmm$x[, 1], col = col.states, |\bf \color{lightgray} +| pch = 16, xlab = "Time", ylab = "Energy Price") |\bf \color{lightgray} R$>$| time = 1:length(pred2[[1]]) |\bf \color{lightgray} R$>$| lines(pred2[[1]] ~ time, col = "red", lwd = 0.25) |\bf \color{lightgray} R$>$| lines(pred2[[2]] ~ time, col = "blue", lwd = 0.25) \end{lstlisting} The resulting plot is presented in Figure \ref{spain2}. The predictions associated with the two states are shown by blue and red lines. \begin{figure} \centerline{\includegraphics[width = 15cm]{spain2.png}} \caption{Two-state prediction of the energy price based on the other variables.}\label{spain2} \end{figure} To visualize the nonparametric regression curves, we consider only the covariate \code{Oil Price}, which is the second column of the \code{energy} data set. In the following, we initialize and fit the model to the data set containing only the first column as the response and the second column as the covariate. 
\begin{lstlisting} |\bf \color{lightgray} R$>$| energy.hmm2 = hhsmmdata(energy[ , 1:2]) |\bf \color{lightgray} R$>$| clus = initial_cluster(train = energy.hmm2, nstate = J, nmix = NULL, |\bf \color{lightgray} +| regress = TRUE) |\bf \color{lightgray} R$>$| initmodel = initialize_model(clus = clus, mstep = additive_reg_mstep, |\bf \color{lightgray} +| dens.emission = dnorm_additive_reg, sojourn = NULL, semi = semi, |\bf \color{lightgray} +| M = max(energy.hmm$N)) |\bf \color{lightgray} R$>$| fit1 = hhsmmfit(x = energy.hmm2, model = initmodel, |\bf \color{lightgray} +| mstep = additive_reg_mstep, M = max(energy.hmm2$N)) \end{lstlisting} Again, we obtain the response predictions as follows. \begin{lstlisting} |\bf \color{lightgray} R$>$| pred <- addreg_hhsmm_predict(fit1, energy.hmm2$x[, 2], K) \end{lstlisting} Now, we can plot the corresponding graph as follows. \begin{lstlisting} |\bf \color{lightgray} R$>$| col.states = fit1$yhat |\bf \color{lightgray} R$>$| col.states[col.states == 1] = 'goldenrod2' |\bf \color{lightgray} R$>$| col.states[col.states == 2] = 'green4' |\bf \color{lightgray} R$>$| plot(energy.hmm$x[, 1] ~ energy.hmm$x[, 2], col = col.states, |\bf \color{lightgray} +| pch = 16, xlab = "Oil Price", ylab = "Energy Price", lwd = 2, |\bf \color{lightgray} +| main = "Case p = 1") |\bf \color{lightgray} R$>$| lines(pred[[1]][order(energy.hmm$x[, 2])] ~ sort(energy.hmm$x[, 2]), |\bf \color{lightgray} +| col = 1, lwd = 2) |\bf \color{lightgray} R$>$| lines(pred[[2]][order(energy.hmm$x[, 2])] ~ sort(energy.hmm$x[, 2]), |\bf \color{lightgray} +| col = 1, lwd = 2) |\bf \color{lightgray} R$>$| text(30, 7, "State 1", cex = 1.5) |\bf \color{lightgray} R$>$| text(60, 3, "State 2", cex = 1.5) \end{lstlisting} The resulting plot is presented in Figure \ref{spain3}. As one can see from Figure \ref{spain3}, the two curves are well-fitted to the data points. 
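Behind these curves, each state's regression is estimated by a weighted M-step in which every observation is weighted by its estimated state-membership probability (the weight matrix passed to \code{additive\_reg\_mstep}). The following Python sketch shows the idea for a single weighted least-squares line, with toy data and a simple linear fit in place of the package's penalized B-spline basis (illustrative only, not package code):

```python
# Toy sketch (not package code): weighted least squares for a line
# y = a + b*x with per-observation weights w. In the HHSMM M-step,
# w would be the state-membership probabilities for one state.

def wls_line(x, y, w):
    """Return (intercept, slope) of the weighted least-squares line."""
    sw = sum(w)
    xb = sum(wi * xi for wi, xi in zip(w, x)) / sw   # weighted mean of x
    yb = sum(wi * yi for wi, yi in zip(w, y)) / sw   # weighted mean of y
    b = (sum(wi * (xi - xb) * (yi - yb) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - xb) ** 2 for wi, xi in zip(w, x)))
    return yb - b * xb, b

x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]     # exact line y = 1 + 2x
w = [1.0, 1.0, 1.0, 1.0]     # single-state case: all weights equal 1
a, b = wls_line(x, y, w)
print(a, b)  # -> 1.0 2.0
```

The single-state comparison below uses exactly this all-ones weighting, while the regime-switching fit uses state-dependent weights.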
\begin{figure} \centerline{\includegraphics[width = 15cm]{spain3.png}} \caption{Prediction curves for the regime switching nonparametric regression model of energy price on oil price.}\label{spain3} \end{figure} To compare the prediction performance of the above-mentioned model with the simple (single state) additive regression model, we can use the \code{additive\_reg\_mstep} function, with a weight matrix with a single column and all components equal to 1. \begin{lstlisting} |\bf \color{lightgray} R$>$| n = energy.hmm$N |\bf \color{lightgray} R$>$| wt = matrix(rep(1, n), n, 1) |\bf \color{lightgray} R$>$| emission = additive_reg_mstep(energy.hmm$x, wt, |\bf \color{lightgray} +| control = list(K = K)) |\bf \color{lightgray} R$>$| tmpfit = list(model = list(J = 1, parms.emission = emission)) |\bf \color{lightgray} R$>$| pred1 <- addreg_hhsmm_predict(tmpfit, energy.hmm$x[,2:(p+1)], K) \end{lstlisting} We compute the sum of squared errors (SSE) of the two competing models as follows. \begin{lstlisting} |\bf \color{lightgray} R$>$| sse1 = sum((pred1 - energy.hmm$x[, 1])^2) |\bf \color{lightgray} R$>$| sse2 = sum(sapply(1:J, function(j) |\bf \color{lightgray} +| sum((pred2[[j]][s == j] - energy.hmm$x[s == j, 1]) ^ 2))) \end{lstlisting} We plot the predictions of the two competing models, adding their SSE values to the plots, as follows. 
\begin{lstlisting} |\bf \color{lightgray} R$>$| par(mfrow = c(1, 2)) |\bf \color{lightgray} R$>$| plot(energy.hmm$x[, 1], type = "l", xlab = 'Time', |\bf \color{lightgray} +| ylab = 'Energy Price', main = "Regime switching |\bf \color{lightgray} +| additive regression", lwd = 2) |\bf \color{lightgray} R$>$| time = 1:length(pred2[[1]]) |\bf \color{lightgray} R$>$| predict = (s == 1) * pred2[[1]] + (s == 2) * pred2[[2]] |\bf \color{lightgray} R$>$| lines(predict ~ time, col = "red", lwd = 2) |\bf \color{lightgray} R$>$| lines(1:energy.hmm$N, rep(0.5, energy.hmm$N), |\bf \color{lightgray} +| col = s, lwd = 2, type = "h", pch = 16) |\bf \color{lightgray} R$>$| text(500, 9, paste("SSE = ", round(sse2, 2))) |\bf \color{lightgray} R$>$| plot(energy.hmm$x[, 1], type = "l", xlab = 'Time', |\bf \color{lightgray} +| ylab = 'Energy Price', main = "Simple additive |\bf \color{lightgray} +| regression", lwd = 2) |\bf \color{lightgray} R$>$| time = 1:length(pred1) |\bf \color{lightgray} R$>$| lines(pred1 ~ time, col = "blue", lwd = 2) |\bf \color{lightgray} R$>$| text(500, 9, paste("SSE = ", round(sse1, 2))) \end{lstlisting} \begin{figure} \centerline{\includegraphics[width = 15cm]{compare.png}} \caption{Comparison with simple additive regression.}\label{compare} \end{figure} The resulting plot is given in Figure \ref{compare}. As one can see from Figure \ref{compare}, the two-state regime switching additive regression model performs much better than the simple (single state) additive regression model. } \subsection{RUL estimation for the C-MAPSS data set}\label{s6} The turbofan engine data set is from the Prognostics Center of Excellence (PCoE) of NASA Ames Research Center and is simulated by the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS). Only 14 of the 21 variables are selected, using the method described by \cite{lea15}, and are included in the \pkg{hhsmm} package. 
A list of all 21 variables, together with their descriptions and an indication of the 14 selected variables, is given in Table \ref{tabl2}. The \code{train} and \code{test} lists are of class \code{"hhsmmdata"}. The original data set contains the subsets \code{FD001}-\code{FD004}, which are concatenated in the \code{CMAPSS} data set. These sets are described in Table \ref{tabl1}. This table is presented in \code{CMAPSS\$subsets} in the \code{CMAPSS} data set. \begin{table}[h] \centering \caption{C-MAPSS data set overview}\label{tabl1} \begin{tabular}{c c c c c} \hline \hline &FD001 & FD002 & FD003 & FD004 \\ \hline Training Units & 100 & 260 & 100 & 249\\ Testing Units & 100 & 259 & 100 & 248\\ \hline \hline \end{tabular} \end{table} \begin{table}[h] \centering \caption{Sensor description of the C-MAPSS data set \citep[see][]{lea15}} \label{tabl2} \begin{tabular}{c c c c c} \hline \hline No. & Symbol & Description & Units & Included in the package? \\ \hline 1 & T2 & Total temperature at fan inlet & $^o$R & $\boldsymbol\times$\\ 2 & T24 & Total temperature at LPC outlet & $^o$R &\checkmark \\ 3 & T30 & Total temperature at HPC outlet & $^o$R &\checkmark \\ 4 & T50 & Total temperature at LPT outlet & $^o$R &\checkmark \\ 5 & P2 & Pressure at fan inlet & psia & $\boldsymbol\times$\\ 6 & P15 & Total pressure in bypass-duct & psia & $\boldsymbol\times$\\ 7 & P30 & Total pressure at HPC outlet & psia &\checkmark\\ 8 & Nf & Physical fan speed & rpm & \checkmark \\ 9 & Nc & Physical core speed & rpm & \checkmark \\ 10 & Epr & Engine pressure ratio & $-$ & $\boldsymbol\times$\\ 11 & Ps30 & Static pressure at HPC outlet & psia & \checkmark\\ 12 & Phi & Ratio of fuel flow to Ps30 & pps/psi & \checkmark\\ 13 & NRf & Corrected fan speed & rpm & \checkmark\\ 14 & NRc & Corrected core speed & rpm & \checkmark\\ 15 & BPR & Bypass ratio & $-$ & \checkmark\\ 16 & farB & Burner fuel-air ratio & $-$ & $\boldsymbol\times$ \\ 17 & htBleed & Bleed enthalpy & $-$ & \checkmark \\ 18 & NF dmd & Demanded fan speed & 
rpm & $\boldsymbol\times$\\ 19 & PCNR dmd & Demanded corrected fan speed & rpm & $\boldsymbol\times$\\ 20 & W31 & HPT coolant bleed & lbm/s & \checkmark \\ 21 & W32 & LPT coolant bleed & lbm/s & \checkmark \\ \hline \hline \end{tabular} \end{table} We load the data set and extract the \code{train} and \code{test} sets as follows. \begin{lstlisting} |\bf \color{lightgray} R$>$| data(CMAPSS) |\bf \color{lightgray} R$>$| train = CMAPSS$train |\bf \color{lightgray} R$>$| test = CMAPSS$test \end{lstlisting} To visualize the data set, we plot only the first sequence of the \code{train} set. To do this, the sequence is converted to a data set of class \code{"hhsmmdata"}, using the \code{hhsmmdata} function as follows. The plots are presented in Figure \ref{tsp}. \begin{lstlisting} |\bf \color{lightgray} R$>$| train1 = hhsmmdata(x = train$x[1:train$N[1],], N = train$N[1]) |\bf \color{lightgray} R$>$| plot(train1) \end{lstlisting} Initial clustering of the states and mixture components is obtained by the \code{initial\_cluster} function. Since the suitable reliability model for the \code{CMAPSS} data set is a left to right model, the option \code{ltr = TRUE} is used. Also, since the engines fail at the final time of each sequence, the final time point of each sequence is assigned to the absorbing state (the final state of the left to right model). This assumption is passed to the model by the option \code{final.absorb = TRUE}. The number of states is assumed to be 5, which could correspond to one healthy state, three levels of damage, and one failure state in the reliability model. The number of mixture components is computed automatically using the option \code{nmix = "auto"}. \begin{lstlisting} |\bf \color{lightgray} R$>$| J = 5 |\bf \color{lightgray} R$>$| clus = initial_cluster(train = train, nstate = J, nmix = "auto", |\bf \color{lightgray} +| ltr = TRUE, final.absorb = TRUE, verbose = TRUE) Within sequence clustering ... 
clustering [=========================] 10 State 1 Between sequence clustering ... Automatic determination of the number of mixture components ... State 2 Between sequence clustering ... Automatic determination of the number of mixture components ... State 3 Between sequence clustering ... Automatic determination of the number of mixture components ... State 4 Between sequence clustering ... Automatic determination of the number of mixture components ... State 5 Between sequence clustering ... Automatic determination of the number of mixture components ... \end{lstlisting} Now, we initialize the model using the \code{initialize\_model} function. The sojourn time distribution is assumed to be \code{"gamma"} distribution. \begin{figure}[ht] \centerline{\includegraphics[width=4.5cm]{p1.png}\includegraphics[width=4.5cm]{p2.png}\includegraphics[width=4.5cm]{p3.png}} \centerline{\includegraphics[width=4.5cm]{p4.png}\includegraphics[width=4.5cm]{p5.png}\includegraphics[width=4.5cm]{p6.png}} \centerline{\includegraphics[width=4.5cm]{p7.png}\includegraphics[width=4.5cm]{p8.png}\includegraphics[width=4.5cm]{p9.png}} \centerline{\includegraphics[width=4.5cm]{p10.png}\includegraphics[width=4.5cm]{p11.png}\includegraphics[width=4.5cm]{p12.png}} \centerline{\includegraphics[width=4.5cm]{p13.png}\includegraphics[width=4.5cm]{p14.png}} \caption{The time series plot of 14 variables of the first sequence of the \code{train} set for the \code{CMAPSS} data set. }\label{tsp} \end{figure} \begin{lstlisting} |\bf \color{lightgray} R$>$| initmodel = initialize_model(clus = clus, sojourn = "gamma", |\bf \color{lightgray} +| M = max(train$N), verbose = TRUE) Intitial estimation .... State 1 estimation Mixture component 1 estimation Mixture component 2 estimation Mixture component 3 estimation Mixture component 4 estimation Mixture component 5 estimation ... 
State 5 estimation Mixture component 1 estimation Mixture component 2 estimation Mixture component 3 estimation Mixture component 4 estimation Mixture component 5 estimation Initializing model ... \end{lstlisting} As a result, the initial estimates of the sojourn time distribution parameters, the transition probability matrix, and the initial probability vector are obtained as follows. \begin{lstlisting} |\bf \color{lightgray} R$>$| initmodel$sojourn $shape [1] 10.753944 11.222101 5.617826 8.386559 0.000000 $scale [1] 0.9138905 0.8885851 1.9869251 23.1578163 0.0000000 $type [1] "gamma" |\bf \color{lightgray} R$>$| initmodel$transition [,1] [,2] [,3] [,4] [,5] [1,] 0 0.85 0.0500000 0.05000000 0.05000000 [2,] 0 0.00 0.8947368 0.05263158 0.05263158 [3,] 0 0.00 0.0000000 0.94444444 0.05555556 [4,] 0 0.00 0.0000000 0.00000000 1.00000000 [5,] 0 0.00 0.0000000 0.00000000 1.00000000 |\bf \color{lightgray} R$>$| initmodel$init [1] 1 0 0 0 0 \end{lstlisting} Now, we fit the HHSMM model using the \code{hhsmmfit} function. The option \code{lock.init=TRUE} is a good option for a left to right model, since in such situations the initial state is the first state (the healthy system state in the reliability model) with probability 1. Graphical visualization of such a model is given in Figure \ref{rtl}. \begin{lstlisting} |\bf \color{lightgray} R$>$| fit1 = hhsmmfit(x = train, model = initmodel, M = max(train$N), |\bf \color{lightgray} +| par = list(lock.init = TRUE)) iteration: 1 log-likelihood = -1813700 iteration: 2 log-likelihood = -1381056 iteration: 3 log-likelihood = -1470325 iteration: 4 log-likelihood = -1471163 iteration: 5 log-likelihood = -1450113 iteration: 6 log-likelihood = -1429393 .... 
iteration: 79 log-likelihood = -1380078 iteration: 80 log-likelihood = -1379937 iteration: 81 log-likelihood = -1379810 AIC = 2770213 BIC = 2823105 \end{lstlisting} \begin{figure} \centerline{\includegraphics[width = 15cm]{rtl.png}} \caption{Graphical representation of the reliability left to right model.}\label{rtl} \end{figure} The estimates of the transition probability matrix, the sojourn time probability matrix, the initial probability vector, and the AIC and BIC of the model, are extracted as follows. \begin{lstlisting} |\bf \color{lightgray} R$>$| fit1$model$transition [,1] [,2] [,3] [,4] [,5] [1,] 0 0.1481077 0.7954866 0.05640566 0 [2,] 0 0.0000000 0.0000000 1.00000000 0 [3,] 0 0.0000000 0.0000000 0.00000000 1 [4,] 0 0.0000000 0.0000000 0.00000000 1 [5,] 0 0.0000000 0.0000000 0.00000000 1 |\bf \color{lightgray} R$>$| head(fit1$model$d) [,1] [,2] [,3] [,4] [,5] [1,] 3.302753e-19 5.011609e-12 2.064632e-10 1.101749e-07 1e-100 [2,] 4.526557e-16 2.694189e-10 1.604557e-08 1.111672e-06 1e-100 [3,] 2.957092e-14 2.545406e-09 1.854185e-07 3.737021e-06 1e-100 [4,] 5.455948e-13 1.178541e-08 9.730204e-07 8.376700e-06 1e-100 [5,] 5.017141e-12 3.736808e-08 3.346571e-06 1.528679e-05 1e-100 [6,] 2.975970e-11 9.388985e-08 8.869356e-06 2.464430e-05 1e-100 |\bf \color{lightgray} R$>$| fit1$model$init [1] 1 0 0 0 0 |\bf \color{lightgray} R$>$| fit1$AIC [1] 2770213 |\bf \color{lightgray} R$>$| fit1$BIC [1] 2823105 \end{lstlisting} We can plot the estimated gamma sojourn probability density functions as follows. 
\begin{lstlisting} |\bf \color{lightgray} R$>$| J <- 5 |\bf \color{lightgray} R$>$| MM = max(train$N) * 1.2 |\bf \color{lightgray} R$>$| f1 <- function(x) dgamma(x, shape = fit1$model$sojourn$shape[1], |\bf \color{lightgray} +| scale = fit1$model$sojourn$scale[1]) |\bf \color{lightgray} R$>$| plot(f1, 0, MM, type = "l", xlab = "Time", ylab = |\bf \color{lightgray} +| "Sojourn time gamma probability density function") |\bf \color{lightgray} R$>$| leg = "state 1" |\bf \color{lightgray} R$>$| for(j in 2:(J-1)){ |\bf \color{lightgray} +| f <- function(x) dgamma(x, shape = fit1$model$sojourn$shape[j], |\bf \color{lightgray} +| scale = fit1$model$sojourn$scale[j]) |\bf \color{lightgray} +| xs <- seq(1, MM, 0.1) |\bf \color{lightgray} +| ys <- sapply(xs, f) |\bf \color{lightgray} +| lines(xs, ys, col = j) |\bf \color{lightgray} +| leg = c(leg, paste("state", j)) |\bf \color{lightgray} +| } |\bf \color{lightgray} R$>$| legend(2*MM/3, max(ys), leg, lty = rep(1, J - 1), col = 1:(J - 1)) \end{lstlisting} The resulting plot is shown in Figure \ref{gamfit}. \begin{figure} \centerline{\includegraphics[width=10cm]{plot1.png}} \caption{The estimated gamma sojourn time density functions.}\label{gamfit} \end{figure} Now, we obtain the estimates of the RULs, as well as the confidence intervals, by four different methods as follows. These four methods are obtained by combining the two prediction methods, \code{"viterbi"} and \code{"smoothing"}, with the two methods, \code{"mean"} and \code{"max"}, for RUL estimation and confidence interval computation. The option \code{"viterbi"} uses the Viterbi algorithm to find the most likely state sequence, while the option \code{"smoothing"} uses estimates of the state probabilities, based on the emission probabilities of the \code{test} data set. 
On the other hand, in the \code{"mean"} method, the mean sojourn time and its standard deviation are used for the estimate and the confidence interval, while in the \code{"max"} method, the maximum probability sojourn time and its quantiles are used (see Section \ref{s4}). \begin{lstlisting} |\bf \color{lightgray} R$>$| pp1 = predict(fit1, test, method = "viterbi", |\bf \color{lightgray} +| RUL.estimate = TRUE, confidence = "mean") |\bf \color{lightgray} R$>$| pp2 = predict(fit1, test, method = "viterbi", |\bf \color{lightgray} +| RUL.estimate = TRUE, confidence = "max") |\bf \color{lightgray} R$>$| pp3 = predict(fit1, test, method = "smoothing", |\bf \color{lightgray} +| RUL.estimate = TRUE, confidence = "mean") |\bf \color{lightgray} R$>$| pp4 = predict(fit1, test, method = "smoothing", |\bf \color{lightgray} +| RUL.estimate = TRUE, confidence = "max") \end{lstlisting} As a competitor, we fit the hidden Markov model (HMM) to the data set, which means that we consider all states to be Markovian. To do this, we fit the HMM to the \code{train} set, using the option \code{semi = rep(FALSE,J)} of the \code{hhsmmfit} function of the \pkg{hhsmm} package. We use the same initial values of the parameters, but we need to use \code{dens.emission = dmixmvnorm} in the \code{hhsmmspec} function, and set the mixture component probabilities equal to 1 (for one mixture component in each state). 
\begin{lstlisting} |\bf \color{lightgray} R$>$| J <- 5 |\bf \color{lightgray} R$>$| init0 <- c(1, 0, 0, 0, 0) |\bf \color{lightgray} R$>$| P0 <- initmodel$transition |\bf \color{lightgray} R$>$| b0 = list(mu = list(), sigma = list()) |\bf \color{lightgray} R$>$| for(j in 1:J){ |\bf \color{lightgray} +| b0$mu[[j]] <- Reduce('+', initmodel$parms.emission$mu[[j]]) / J |\bf \color{lightgray} +| b0$sigma[[j]] <- Reduce('+', |\bf \color{lightgray} +| initmodel$parms.emission$sigma[[j]]) / J |\bf \color{lightgray} +| } |\bf \color{lightgray} R$>$| for(j in 1:J) b0$mix.p[[j]] = 1 |\bf \color{lightgray} R$>$| initmodel <- hhsmmspec(init = init0, transition = P0, |\bf \color{lightgray} +| parms.emission = b0, dens.emission = dmixmvnorm, |\bf \color{lightgray} +| semi = rep(FALSE, J)) |\bf \color{lightgray} R$>$| fit3 = hhsmmfit(train, initmodel , mstep = mixmvnorm_mstep) iteration: 1 log-likelihood = -110772073 iteration: 2 log-likelihood = -1228418 iteration: 3 log-likelihood = -1246009 iteration: 4 log-likelihood = -1260494 iteration: 5 log-likelihood = -1284620 iteration: 6 log-likelihood = -1329087 iteration: 7 log-likelihood = -1506447 iteration: 8 log-likelihood = -2650675 iteration: 9 log-likelihood = -2652023 iteration: 10 log-likelihood = -2650571 iteration: 11 log-likelihood = -2650527 AIC = 5303215 BIC = 5313999 \end{lstlisting} For the fitted HMM model, we estimate the RULs using the aforementioned options. 
\begin{lstlisting} |\bf \color{lightgray} R$>$| pp5 = predict(fit3, test, method = "viterbi", |\bf \color{lightgray} +| RUL.estimate = TRUE, confidence = "mean") |\bf \color{lightgray} R$>$| pp6 = predict(fit3, test, method = "viterbi", |\bf \color{lightgray} +| RUL.estimate = TRUE, confidence = "max") |\bf \color{lightgray} R$>$| pp7 = predict(fit3, test, method = "smoothing", |\bf \color{lightgray} +| RUL.estimate = TRUE, confidence = "mean") |\bf \color{lightgray} R$>$| pp8 = predict(fit3, test, method = "smoothing", |\bf \color{lightgray} +| RUL.estimate = TRUE, confidence = "max") \end{lstlisting} Now, we use the real values of the RULs, stored in \code{test\$RUL} to compute the coverage probabilities of the confidence intervals of HHSMM and HMM models. \begin{lstlisting} |\bf \color{lightgray} R$>$| mean((test$RUL >= pp1$RUL.low) & (test$RUL <= pp1$RUL.up)) [1] 0.7963225 |\bf \color{lightgray} R$>$| mean((test$RUL >= pp2$RUL.low) & (test$RUL <= pp2$RUL.up)) [1] 0.54314 |\bf \color{lightgray} R$>$| mean((test$RUL >= pp3$RUL.low) & (test$RUL <= pp3$RUL.up)) [1] 0.864215 |\bf \color{lightgray} R$>$| mean((test$RUL >= pp4$RUL.low) & (test$RUL <= pp4$RUL.up)) [1] 0.7213579 |\bf \color{lightgray} R$>$| mean((test$RUL >= pp5$RUL.low) & (test$RUL <= pp5$RUL.up)) [1] 0 |\bf \color{lightgray} R$>$| mean((test$RUL >= pp6$RUL.low) & (test$RUL <= pp6$RUL.up)) [1] 0 |\bf \color{lightgray} R$>$| mean((test$RUL >= pp7$RUL.low) & (test$RUL <= pp7$RUL.up)) [1] 0 |\bf \color{lightgray} R$>$| mean((test$RUL >= pp8$RUL.low) & (test$RUL <= pp8$RUL.up)) [1] 0 \end{lstlisting} As one can see from the above results, the HHSMM model's coverage probabilities are much better than the HMM ones. To visualize the results of RUL estimation, we plot the RUL estimates and RUL bounds as follows. 
\begin{lstlisting} |\bf \color{lightgray} R$>$| par(mfrow = c(2,2)) |\bf \color{lightgray} R$>$| plot(test$RUL[order(pp1$RUL)], |\bf \color{lightgray} +| ylim = c(min(pp1$RUL.low), max(pp1$RUL.up)), |\bf \color{lightgray} +| pch=16, col = "green", xlab = "unit", |\bf \color{lightgray} +| ylab = "RUL", main = "Viterbi-mean method") |\bf \color{lightgray} R$>$| lines(pp1$RUL.low[order(pp1$RUL)], lty = 2, col = "red") |\bf \color{lightgray} R$>$| lines(pp1$RUL.up[order(pp1$RUL)], lty = 2, |\bf \color{lightgray} +| col = "red") |\bf \color{lightgray} R$>$| lines(pp1$RUL[order(pp1$RUL)], col = "blue") |\bf \color{lightgray} R$>$| plot(test$RUL[order(pp2$RUL.low)], |\bf \color{lightgray} +| ylim = c(min(pp2$RUL.low), max(pp2$RUL.up)), |\bf \color{lightgray} +| pch=16, col = "green", xlab = "unit", |\bf \color{lightgray} +| ylab = "RUL", main = "Viterbi-max method") |\bf \color{lightgray} R$>$| lines(sort(pp2$RUL.low), lty = 2, col = "red") |\bf \color{lightgray} R$>$| lines(pp2$RUL.up[order(pp2$RUL.low)], lty = 2, |\bf \color{lightgray} +| col = "red") |\bf \color{lightgray} R$>$| lines(pp2$RUL[order(pp2$RUL.low)], col = "blue") |\bf \color{lightgray} R$>$| plot(test$RUL[order(pp3$RUL.low)], |\bf \color{lightgray} +| ylim = c(min(pp3$RUL.low), max(pp3$RUL.up)), |\bf \color{lightgray} +| pch=16, col = "green", xlab = "unit", |\bf \color{lightgray} +| ylab = "RUL", main = "Smoothing-mean method") |\bf \color{lightgray} R$>$| lines(sort(pp3$RUL.low), lty = 2, col = "red") |\bf \color{lightgray} R$>$| lines(pp3$RUL.up[order(pp3$RUL.low)], lty = 2, |\bf \color{lightgray} +| col = "red") |\bf \color{lightgray} R$>$| lines(pp3$RUL[order(pp3$RUL.low)], col = "blue") |\bf \color{lightgray} R$>$| plot(test$RUL[order(pp4$RUL.low)], |\bf \color{lightgray} +| ylim = c(min(pp4$RUL.low), max(pp4$RUL.up)), |\bf \color{lightgray} +| pch=16, col = "green", xlab = "unit", |\bf \color{lightgray} +| ylab = "RUL", main = "Smoothing-max method") |\bf \color{lightgray} R$>$| 
lines(sort(pp4$RUL.low), lty = 2, col = "red") |\bf \color{lightgray} +| lines(pp4$RUL.up[order(pp4$RUL.low)], lty = 2, |\bf \color{lightgray} +| col = "red") |\bf \color{lightgray} R$>$| lines(pp4$RUL[order(pp4$RUL.low)], col = "blue") \end{lstlisting} The resulting plots are presented in Figure \ref{RUL}. From Figure \ref{RUL} and the above coverage probabilities, one can see that the ``smoothing'' and ``max'' methods perform better than the other methods in this example. \begin{figure} \centerline{\includegraphics[width = 15cm]{RUL.png}} \caption{RUL estimates (solid blue lines) and RUL bounds (dashed red lines) using four different methods for the CMAPSS test data set.}\label{RUL} \end{figure} {\section{Concluding remarks} This paper presents several examples of the \proglang{R} package \pkg{hhsmm}. The scope of application of this package covers simulation, initialization, fitting, and prediction for HMM, HSMM, and HHSMM models, with different types of discrete and continuous sojourn distributions, including the shifted Poisson, negative binomial, logarithmic, gamma, Weibull, and log-normal. The package contains density and M-step functions for several types of emission distribution, including the mixture of multivariate normals, the penalized B-spline estimator of the emission distribution, and the mixture of linear and additive regressions (conditional multivariate normal distributions of the response given the covariates; regime-switching regression models), as well as the ability for the user to define other emission distributions. As a special case of the regime-switching regression models, auto-regressive HHSMM models can also be fitted with the \pkg{hhsmm} package. Left to right models are supported by the \pkg{hhsmm} package, especially in the initialization functions. 
The \pkg{hhsmm} package uses the EM algorithm to handle missing values when the mixture of multivariate normals is considered as the emission distribution. Prediction of future states, residual useful lifetime estimation for left to right models, computation of the score of new observations, computation of the homogeneity of two state sequences, and splitting of the data into train and test sequences with the ability to right-trim the test sequences are other useful features of the \pkg{hhsmm} package. The current version 0.3.2 of this package is now available on CRAN (\url{https://cran.r-project.org/package=hhsmm}), and further improvements of the package are planned by the authors. Reports of possible bugs in the \pkg{hhsmm} package are welcome through \url{https://github.com/mortamini/hhsmm/issues}, and suggestions for features needed in future versions are also welcome. \section*{Acknowledgements} The authors would like to thank the two anonymous referees and the associate editor for their useful comments and suggestions, which improved an earlier version of the \pkg{hhsmm} package and this paper. }
\section{Introduction} The Dark Energy Survey (DES) is on track for first light in 2011 and will carry out a deep optical and near-infrared survey of 5000 square degrees of the South Galactic Cap to $\sim$24th magnitude using a new 3 square-degree CCD camera (called DECam) to be mounted on the Blanco 4-meter telescope at CTIO. DES uses thicker CCDs from Lawrence Berkeley National Laboratory with greater red sensitivity than those used in previous surveys. In exchange for the camera, CTIO will provide DES with 525 nights on the Blanco spread over 5 years. The survey data will allow the measurement of the dark energy and dark matter densities and the dark energy equation of state through four independent methods: galaxy clusters, weak gravitational lensing tomography, galaxy angular clustering, and supernova (SN) distances. While the logistics of the SN survey are still being finalized, its time allocation within the larger survey will be $\sim$1000 hrs, with maximal use of non-photometric time (up to 500 hrs). Likewise, the spectroscopic follow-up strategy is still being fleshed out; the working estimate is that $\sim$25\% of the SNe will receive spectroscopic follow-up, with the remaining redshifts to be obtained via host-galaxy follow-up. The DES SN working group has undertaken simulations of DES observations with the goal of constraining the optimal SN survey strategy. Toward this end, we apply the SN simulation package (SNANA) developed by Kessler for the SDSS-II SN Survey and later modified for non-SDSS surveys. SNANA generates realistic light curves accounting for atmospheric seeing conditions, host-galaxy extinction, cadence, and intrinsic SN luminosity variations using the MLCS2k2 (Jha {\em et al.} 2007 [1]) or SALT2 (Guy {\em et al.} 2007 [2]) models. The simulated errors include statistical noise from photon statistics and sky noise. The package includes a light-curve fitter that shares many software tools with the simulation, uses the MLCS2k2 model with improvements, and fits in flux rather than in magnitudes.
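Since the end product of an SNANA run is a set of fitted $\mu$--\textit{z} pairs for a cosmology fitter, a toy distance-modulus calculation helps fix ideas. The sketch below is a minimal flat-$\Lambda$CDM computation in Python; the $H_0$ and $\Omega_m$ values are illustrative assumptions only, and none of this is SNANA code.

```python
import math

def distance_modulus(z, h0=70.0, omega_m=0.3, steps=1000):
    """Toy flat-LambdaCDM distance modulus mu(z) = 5 log10(d_L / 10 pc).

    Illustrative only: a cosmology fitter compares fitted mu-z pairs
    against model curves of this kind.
    """
    c = 299792.458  # speed of light, km/s
    omega_l = 1.0 - omega_m
    # comoving distance: (c/H0) * integral_0^z dz' / E(z'), trapezoid rule
    dz = z / steps
    integral = 0.0
    for i in range(steps):
        z1, z2 = i * dz, (i + 1) * dz
        e1 = math.sqrt(omega_m * (1 + z1) ** 3 + omega_l)
        e2 = math.sqrt(omega_m * (1 + z2) ** 3 + omega_l)
        integral += 0.5 * dz * (1.0 / e1 + 1.0 / e2)
    d_c = c / h0 * integral          # comoving distance, Mpc
    d_l = (1 + z) * d_c              # luminosity distance, Mpc
    return 5 * math.log10(d_l) + 25  # mu, since 10 pc = 1e-5 Mpc

print(distance_modulus(0.1))  # roughly 38.3 for these parameters
```

For these assumed parameters the curve rises from $\mu \approx 38.3$ at $z=0.1$ toward $\mu \approx 44$ at $z=1$, which is the redshift range probed by the DES SN fields.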
In this paper, we present light curve simulations for DES and describe a high-redshift bias that arises when selection effects are not accounted for in the analysis. \section{The SNANA Package}\label{subsec:snana} SNANA uses a mixture of C and FORTRAN routines to simulate and fit SN light curves for a range of redshifts (\textit{z}). SNANA generates fitted distance moduli, $\mu$, and passes $\mu$--\textit{z} pairs to a cosmology fitter. It is publicly available \footnote{http://www.hep.anl.gov/des/snana\_package} and requires CFITSIO and CERNLIB. The simulation is designed to be fast, generating a few dozen light curves per second, while still providing accurate and realistic SN light curves. Using the package requires the generation of a survey library that includes the survey characteristics (e.g., the observing cadence, seeing conditions, and CCD properties). Generating this library is easy post-survey; predicting it before the survey is crucial to making realistic predictions for the light-curve quality. The light curve fitter takes longer to run, up to many hours depending on the number of SNe and number of fit parameters. \begin{figure}% \subfloat[An SNANA light curve for redshift \textit{z}$\sim$0.27.]{\includegraphics[angle=0,scale=0.4]{lc1.eps}}\hfill \subfloat[An SNANA light curve for redshift \textit{z}$\sim$0.75.]{\includegraphics[angle=0,scale=0.4]{lc2.eps}}\\[-10pt] \captionsetup[figure]{margin=10pt}% \caption{Plotted is flux vs. time in days. Points are the simulated data, the blue dashed line is with no extinction or fluctuations, and the solid and dashed green lines are the best realistic fit and error bounds. 
The ``red bump'' at $\sim$40 days is characteristic of SNe Ia, is clearly evident in the Y band for \textit{z}$\sim$0.27, and fades for \textit{z}$\sim$0.75.} \label{fig:lc} \end{figure} \section{Simulations}\label{sec:sims} For the simulations presented here, we employed the MLCS2k2 model as the basis for generating and fitting SN light curves. The free parameters are the epoch of maximum light in the B-band ($t_{\rm o}$), the distance modulus ($\mu$), the luminosity/light curve shape parameter ($\Delta$), and the extinction in magnitudes by dust in the host galaxy (parameterized by A$_{\rm V}$ and R$_{\rm V}$ from Cardelli {\em et al.} 1989 [3]). Note that for this work we fixed R$_{\rm V}$ = 3.1 (the average for the Milky Way) but will explore fitting R$_{\rm V}$ in the near future. Fig.~\ref{fig:lc} shows example light curves. The DES supernova working group has begun optimizing the DES SN survey strategy by exploring 1) the choice of z-like filter and 2) the survey depth. Under consideration are the griz, griZ$_1$Y, and griZ$_2$Y filter sets (see Fig.~\ref{fig:filt}). The griz filters are SDSS-like and the Y filter occupies the clean wavelength range between the atmospheric absorption bands at 0.95$\mu$m and 1.14$\mu$m. Z$_1$ avoids the overlap with Y and Z$_2$ avoids the Y overlap \textit{and} the lower atmospheric absorption feature. Also under consideration are 3, 9, and 27 square-degree fields corresponding to ``ultra-deep'' (but narrow = 1 DES field), ``deep'', and ``wide'' (but shallow) surveys. Results to date show that the survey depth has a much greater effect than does the choice of z-like filter. Therefore, we henceforth show examples using the z filter. Fig.~\ref{fig:sne} shows our DES light curve fits.
\begin{figure} \begin{center} \includegraphics[angle=0,scale=0.715]{filters.eps} \caption{The choice of DES z filters plotted with the DES quantum efficiency.} \label{fig:filt} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[angle=0,scale=0.8]{fits.eps} \caption{\textit{Left}: Number of SNe and the Hubble diagram (grizY filter set). \textit{Right}: The redshift run of the difference in fitted (``observed'') and simulated (``true'') distance modulus ($\mu$), host extinction parameter (A$_{\rm V}$), and MLCS luminosity/shape parameter ($\rm\Delta$). \textit{Both}: cuts were applied by the fitter such that each SN had at least 5 measurements and one filter measurement with a signal-to-noise above 10 and any 3 filters above 5. Note that the large error bars and deviations at the lowest (\textit{z}$<$0.1) and highest (\textit{z}$>$1.2) redshifts are due to low statistics. } \label{fig:sne} \end{center} \end{figure} \section{Discussion}\label{sec:disc} Fig.~\ref{fig:sne} shows a $\mu$ bias manifest in the difference between fitted and simulated $\mu$ beyond \textit{z}$\sim$0.6. The bias arises from not accounting for selection efficiencies and illustrates the magnitude of the $\mu$-correction that will be needed. The fact that A$_{\rm V}$ trends to zero beyond \textit{z}$\sim$0.6 is consistent with a selection bias: only less-extincted SNe pass the cuts as redshift increases. Fig.~\ref{fig:sne} also shows that the deep survey offers a substantial improvement in statistics relative to the ultra-deep survey while avoiding a significant portion of the bias suffered by the wide survey. Thus, we will move forward in constraining the DES SN strategy by considering both a deep survey and a hybrid approach with a mixture of deep and wide fields.
Calculations made for the DES project proposal estimated that the survey would offer an improvement in the Dark Energy Task Force figure of merit (fom) by a factor of 4.6 relative to current SN surveys. The DES SN working group has implemented a cosmology fitter in order to obtain a more robust calculation of the fom for DES by harnessing SNANA-simulated SN surveys. We currently have statistics-only fom estimates and are working on furthering our SNANA analysis to account for estimates of DES SN systematics. Once completed, we will use SNANA to constrain the optimal DES SN survey strategy and produce a detailed white paper.
\section{Introduction} Many real-world processes are studied by means of difference equations. Because of their wide range of applications in mechanics, economics, electronics, chemistry, ecology, biology, etc., the theory of discrete dynamical systems has been under intensive development and many researchers have devoted their attention to these systems \cite{agarwal, elaydi, HK, kelley, kocic, ladas, KM, laks2, hassan, Wiggins}. The equation \begin{align} x_{n+1}=\alpha + \frac{x_{n-1}}{x_n}, \qquad n=0,1,2,\ldots \label{eqn} \end{align} was investigated by many researchers. The equation \eqref{eqn}, for $\alpha \in [0,\infty)$ and the initial conditions $x_{-1}$ and $x_0$ being arbitrary positive real numbers, has been considered in \cite{amleh}. There, the authors analyzed the global stability, the boundedness character, and the periodic nature of the positive solutions of \eqref{eqn}. The global stability, the permanence, and the oscillation character of the recursive equation \eqref{eqn} for nonnegative values of the parameter $\alpha$ with negative initial conditions $x_{-1}$ and $x_0$ were investigated in \cite{hamza}. The same equation for $\alpha< 0$ was considered in \cite{stevic2}. A global bifurcation result for \eqref{eqn} was obtained in \cite{burgic} and the asymptotic approximations of the stable and unstable manifolds of the fixed point of \eqref{eqn} were discussed in \cite{kul}. In this work, we consider the difference equation \begin{align} x_{n+1}=\alpha+\beta x_{n-1} + \frac{x_{n-1}}{x_n}, \qquad n=0,1,2,\ldots \label{main} \end{align} where $\alpha \geqslant 0,$ $0 \leqslant \beta < 1,$ and the initial conditions $x_{-1}$ and $x_0$ are positive real numbers. Clearly, when $\beta=0,$ the equation \eqref{main} reduces to \eqref{eqn}. For this reason, the results obtained in the current paper cover those given in \cite{kul}.
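Before turning to the manifold computations, it is instructive to iterate \eqref{main} directly. The minimal Python sketch below (parameter values chosen only for illustration) shows convergence to the fixed point $\bar{x}=(1+\alpha)/(1-\beta)$ when $\alpha>1,$ in line with the global stability result quoted below.

```python
def iterate(alpha, beta, x_prev, x_cur, n):
    """Iterate x_{n+1} = alpha + beta*x_{n-1} + x_{n-1}/x_n for n steps."""
    for _ in range(n):
        x_prev, x_cur = x_cur, alpha + beta * x_prev + x_prev / x_cur
    return x_cur

# alpha > 1: the equilibrium is globally asymptotically stable
alpha, beta = 2.0, 0.5
x_bar = (1 + alpha) / (1 - beta)  # = 6.0
print(abs(iterate(alpha, beta, 1.0, 1.0, 300) - x_bar))  # essentially 0
```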
Equation \eqref{main} has the unique fixed point \begin{align} \label{fp} \bar{x}=\frac{1+\alpha}{1-\beta}. \end{align} Letting $y_n=x_{n-1}$ and $z_n=x_n,$ \eqref{main} can be written as \begin{align} \label{mainsys} \begin{array}{l} y_{n+1} = z_n \\ z_{n+1} = \alpha + \beta y_n + \displaystyle\frac{y_n}{z_n} \end{array} \end{align} together with the initial conditions $y_0=x_{-1},$ $z_0=x_0.$ Introducing the mapping \begin{align} \label{t} T\begin{pmatrix} y \\ z \end{pmatrix} = \begin{pmatrix} z \\ \alpha + \beta y + \displaystyle\frac{y}{z} \end{pmatrix}, \end{align} \eqref{mainsys} is written as \begin{align*} \begin{pmatrix} y_{n+1} \\ z_{n+1} \end{pmatrix} = T\begin{pmatrix} y_n \\ z_n \end{pmatrix}. \end{align*} $T$ has a unique fixed point $(\bar{x}, \bar{x})$ where $\bar{x}$ is given by \eqref{fp}. The following result for \eqref{main} was given in \cite{at}: \begin{theorem} Let $0\leqslant \beta < 1.$ For the equation \eqref{main}, one has: \begin{itemize} \item If $0\leqslant \alpha < 1,$ the equilibrium point $\bar{x}$ is unstable; \item If $\alpha = 1,$ then there exist periodic solutions of period 2. Moreover, any non-periodic solution of \eqref{main} converges either to the fixed point or to a two-periodic solution; \item If $\alpha >1,$ then the equilibrium point $\bar{x}$ is globally asymptotically stable. \end{itemize} \end{theorem} To complete the global dynamics of \eqref{main}, the present paper addresses the equations of the stable and unstable manifolds of the equilibrium solution and the stable manifold of the period-two solutions of \eqref{main}. The following definition of the stable and unstable manifolds and the next theorem about their existence can be found in \cite[Definition 15.18, Theorem 15.19, p.~457]{HK} and also in \cite{mars}. We present these only with a minor change of notation for the convenience of the present paper.
\begin{defn*} Let ${\mathcal N}$ be a neighborhood of a fixed point $\bar{x}$ of a diffeomorphism $T$ defined in ${\mathcal N}.$ Then, the local stable manifold $W^s(\bar{x}, {\mathcal N}),$ and the local unstable manifold $W^u(\bar{x}, {\mathcal N})$ of $\bar{x}$ are defined, respectively, to be the following subsets of ${\mathcal N}:$ \begin{align*} W^s(\bar{x}, {\mathcal N})=\{{\bf x}\in {\mathcal N}: T^n({\bf x})\in {\mathcal N}, \ \text{for all } \ n\geqslant 0, \ \text{and} \ T^n({\bf x})\to \bar{x}, \ \text{as} \ n\to\infty \} \\ W^u(\bar{x}, {\mathcal N})=\{{\bf x}\in {\mathcal N}: T^{-n}({\bf x})\in {\mathcal N}, \ \text{for all } \ n\geqslant 0, \ \text{and} \ T^{-n}({\bf x})\to \bar{x}, \ \text{as} \ n\to\infty \} \end{align*} \end{defn*} \begin{theorem}[Stable and Unstable Manifolds]\label{thminv} Let $T$ be a diffeomorphism with a hyperbolic saddle point $\bar{x},$ that is, the linearized map $DT(\bar{x})$ at the fixed point has nonzero eigenvalues $\lambda_1$ and $\lambda_2$ with $|\lambda_1|<1$ and $|\lambda_2|>1.$ Then $W^s(\bar{x}, {\mathcal N})$ is a curve tangent at $\bar{x}$ to, and a graph over, the eigenspace corresponding to $\lambda_1,$ while $W^u(\bar{x}, {\mathcal N})$ is a curve tangent at $\bar{x}$ to, and a graph over, the eigenspace corresponding to $\lambda_2$. These curves are as smooth as the map $T.$ \end{theorem} For the following theorem see \cite[Theorem 6, pp 34]{carr}. \begin{theorem}[Center Manifold]\label{thmc} Let $T:{\mathbb R}^{n+m}\to{\mathbb R}^{n+m}$ have the following form: $$T(x,y)=(Ax+f(x,y), \ By+g(x,y))$$ where $x\in{\mathbb R}^{n},$ $y\in{\mathbb R}^{m},$ $A$ and $B$ are square matrices such that each eigenvalue of $A$ has modulus 1 and each eigenvalue of $B$ has modulus less than 1, $f$ and $g$ are $C^2$ and $f, g$ and their first order derivatives are zero at the origin.
Then, there exists a center manifold $h:{\mathbb R}^n\to{\mathbb R}^m$ for $T.$ More precisely, for some $\varepsilon >0$ there exists a $C^2$ function $h:{\mathbb R}^n\to{\mathbb R}^m$ with $h(0)=h'(0)=0$ such that $|x|<\varepsilon$ and $(x_1, y_1)=T(x,h(x))$ implies $y_1=h(x_1).$ \end{theorem} The paper is organized as follows: In the next section, the normal form of the map $T$ and the equations of the unstable and stable manifolds of the equilibrium solution are given. Section 3 deals with the normal form and invariant manifolds of the map $T^2.$ Finally, Section 4 is devoted to some numerical examples to illustrate the theoretical results. \section{Normal form and invariant manifolds of the map $T$} \subsection{Normal Form} To obtain the normal form of the map $T,$ we first transform its fixed point to the origin. For this, let $u_n=y_n-\bar{x}$ and $v_n=z_n-\bar{x}.$ Then, \eqref{mainsys} becomes \begin{align} \label{mainsys2} \begin{array}{l} u_{n+1}=v_n\\ v_{n+1}=\beta u_n+\displaystyle\frac{u_n-v_n}{v_n+\bar{x}} \end{array}. \end{align} For the mapping \begin{align*} F\begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} v\\ \beta u+ \frac{u-v}{v+\bar{x}} \end{pmatrix}, \end{align*} \eqref{mainsys2} is written as \begin{align*} \begin{pmatrix} u_{n+1} \\ v_{n+1} \end{pmatrix} = F\begin{pmatrix} u_n \\ v_n \end{pmatrix}.
\end{align*} The Jacobian matrix of $F$ at its unique fixed point $(0,0)$ is \begin{align*} J= \begin{pmatrix} 0 & 1\\ \beta+\frac{1}{\bar{x}} & -\frac{1}{\bar{x}} \end{pmatrix} \end{align*} which has the eigenvalues \begin{align} \lambda_{1} = \frac{-1-\theta}{2\bar{x}} \quad \text{and} \quad \lambda_{2} = \frac{-1+\theta}{2\bar{x}}\label{evalues} \end{align} with the corresponding eigenvectors \begin{align} {\bf v}_1 = \left(\frac{-2\bar{x}}{1+\theta} \:, \: \: 1\right)^T \quad \text{and} \quad {\bf v}_2 = \left(\frac{-2\bar{x}}{1-\theta} \:, \: \: 1\right)^T \label{evectors} \end{align} respectively, where $\theta=\sqrt{1+4\bar{x}+4\beta\bar{x}^2}.$ Thus, \begin{align*} F \begin{pmatrix} u \\ v \end{pmatrix} = J\cdot \begin{pmatrix} u \\ v \end{pmatrix} + H \begin{pmatrix} u \\ v \end{pmatrix} \end{align*} where \begin{align*} H \begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} 0 \\ \frac{v(v-u)}{\bar{x}(v+\bar{x})} \end{pmatrix}. \end{align*} Thus, \eqref{mainsys} is equivalent to \begin{align} \label{maineq} \begin{pmatrix} u_{n+1}\\ v_{n+1} \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ \beta+\frac{1}{\bar{x}} & -\frac{1}{\bar{x}} \end{pmatrix} \begin{pmatrix} u_n\\ v_n \end{pmatrix} +H \begin{pmatrix} u_n\\ v_n \end{pmatrix}. \end{align} Set $P=({\bf v}_1 \: {\bf v}_2),$ where ${\bf v}_1$ and ${\bf v}_2$ are given by \eqref{evectors}, and let \begin{align} \label{uvp} \begin{pmatrix} u_n \\ v_n \end{pmatrix} = P\cdot \begin{pmatrix} \xi_n \\ \eta_n \end{pmatrix}. 
\end{align} Then, \eqref{maineq} becomes \begin{align} \begin{pmatrix} \xi_{n+1} \\ \eta_{n+1} \end{pmatrix} = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix} \begin{pmatrix} \xi_n \\ \eta_n \end{pmatrix} + \begin{pmatrix} f(\xi_n, \eta_n) \\ g(\xi_n, \eta_n) \end{pmatrix} \label{normal} \end{align} where \begin{align} \label{fg} \begin{array}{l} f(\xi, \eta)= \displaystyle\frac{(1+2\beta\bar{x})(\xi+\eta)^2+\theta(\xi^2-\eta^2)}{\theta(\theta-1)(\xi+\eta+\bar{x})},\\ g(\xi, \eta)= \displaystyle\frac{(1+2\beta\bar{x})(\xi+\eta)^2+\theta(\xi^2-\eta^2)}{\theta(\theta+1)(\xi+\eta+\bar{x})}. \end{array} \end{align} System \eqref{normal} is called the normal form of \eqref{mainsys}. \subsection{Unstable manifold of the equilibrium solution} Let $0<\alpha<1.$ Then, as it is stated in \cite{at}, the fixed point $\bar{x}$ of \eqref{main} is unstable. In fact, it can be shown that $|\lambda_1|>1$ and $|\lambda_2|<1.$ Then, by Theorem \ref{thminv}, there is an unstable manifold $W^u$ which is the graph of an analytic map $\varphi:E_1 \to E_2$ such that $\varphi(0)=\varphi'(0)=0.$ Let $$\varphi(\xi)=a_2\xi^2+a_3\xi^3+O(\xi^4), \quad a_2, a_3 \in{\mathbb R}.$$ Now, we shall compute $a_2$ and $a_3.$ On the manifold $W^u,$ we have $\eta_n=\varphi(\xi_n)$ for $n\in {\mathbb N}_0.$ Thus, the function $\varphi$ must satisfy \begin{align} \varphi(\lambda_1\xi+f(\xi,\varphi(\xi)))=\lambda_2\varphi(\xi)+g(\xi, \varphi(\xi)) \label{unsman} \end{align} where $f$ and $g$ are given in \eqref{fg}. Rewriting \eqref{unsman} as a polynomial equation in $\xi$ and equation the coefficients of $\xi^2$ and $\xi^3$ to 0, we obtain \begin{align} a_2=\frac{1+\theta+2\beta\bar{x}}{\theta(\theta+1)(\lambda_1^2-\lambda_2)\bar{x}} \label{a2} \end{align} and \begin{align} a_3=\frac{a_2}{(\lambda_1^3-\lambda_2)\bar{x}}\left[\lambda_2-\lambda_1^2-\frac{1+2\beta\bar{x}}{\theta\bar{x}}\left(\frac{1}{\lambda_1}+\frac{\lambda_1}{\lambda_2}\right)-\frac{\lambda_1}{\lambda_2\bar{x}}\right]. 
\label{a3} \end{align} Truncated at third order, the local unstable manifold is the graph of the map $\varphi(\xi)=a_2\xi^2+a_3\xi^3.$ Since $\eta_n=a_2\xi_n^2+a_3\xi_n^3,$ using \eqref{uvp} and $u_n=x_{n-1}-\bar{x},$ $v_n=x_n-\bar{x},$ we can locally approximate the unstable manifold $W_{loc}^u$ of \eqref{main} as the graph of $\tilde{\varphi}(x)$ such that $U(x, \tilde{\varphi}(x))=0$ where \begin{align} U(x,y):=\gamma_1(x-\bar{x})-\gamma_2(y-\bar{x}) +a_2\left[\gamma_1(x-\bar{x})+\gamma_3(y-\bar{x})\right]^2 +a_3\left[\gamma_1(x-\bar{x})+\gamma_3(y-\bar{x})\right]^3 \label{manU} \end{align} in which \begin{align} \label{gam} \gamma_1=\frac{1+\beta\bar{x}}{-\theta},\quad \gamma_2=\frac{\theta-1}{2\theta} \quad \text{and} \quad \gamma_3=\frac{\theta+1}{2\theta}. \end{align} It is easy to see that the function $\tilde{\varphi}(x)$ satisfies \begin{align*} \tilde{\varphi}(\bar{x})=\bar{x} \quad \textnormal {and} \quad \tilde{\varphi}'(\bar{x})=-\frac{1+\theta}{2\bar{x}}. \end{align*} Thus, we have proved the following theorem: \begin{theorem} \label{thmusman} The local unstable manifold of \eqref{main} corresponding to the saddle point $\bar{x}$ has the asymptotic equation $U(x,\tilde{\varphi}(x))=0$ where $U(x,y)$ is given by \eqref{manU}. \end{theorem} \subsection{Stable manifold of the equilibrium solution} Since $|\lambda_1|>1$ and $|\lambda_2|<1,$ by Theorem \ref{thminv}, there is a stable manifold $W^s$ which is the graph of an analytic map $\psi:E_2 \to E_1$ such that $\psi(0)=\psi'(0)=0.$ Let $$\psi(\eta)=b_2\eta^2+b_3\eta^3+O(\eta^4), \quad b_2, b_3 \in{\mathbb R}.$$ Now, we shall compute the coefficients $b_2$ and $b_3.$ On the manifold $W^s,$ we have $\xi_n=\psi(\eta_n)$ for $n\in {\mathbb N}_0.$ Thus, the function $\psi$ must satisfy \begin{align} \psi(\lambda_2\eta+g(\psi(\eta),\eta))=\lambda_1\psi(\eta)+f(\psi(\eta),\eta) \label{sman} \end{align} where $f$ and $g$ are given in \eqref{fg}.
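As a brief numerical aside, the eigenvalues \eqref{evalues} drive both manifold computations, and they can be cross-checked against the characteristic polynomial of the Jacobian $J$; note also that the tangent slope $\tilde{\varphi}'(\bar{x})=-(1+\theta)/(2\bar{x})$ equals $\lambda_1$ exactly. A small Python sketch with sample parameter values (an illustrative check, not part of the derivation):

```python
import math

alpha, beta = 0.2, 0.0            # sample values with 0 < alpha < 1 (saddle case)
x_bar = (1 + alpha) / (1 - beta)  # fixed point, here 1.2
theta = math.sqrt(1 + 4 * x_bar + 4 * beta * x_bar**2)

lam1 = (-1 - theta) / (2 * x_bar)
lam2 = (-1 + theta) / (2 * x_bar)

# J = [[0, 1], [beta + 1/x_bar, -1/x_bar]] has characteristic polynomial
# p(lam) = lam^2 + lam/x_bar - (beta + 1/x_bar)
def p(lam):
    return lam**2 + lam / x_bar - (beta + 1 / x_bar)

print(p(lam1), p(lam2))  # both essentially 0
print(lam1, lam2)        # |lam1| > 1 (unstable), |lam2| < 1 (stable)
```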
Rewriting \eqref{sman} as a polynomial equation in $\eta$ and equating the coefficients of $\eta^2$ and $\eta^3$ to 0, we obtain \begin{align} b_2=\frac{1-\theta+2\beta\bar{x}}{\theta(\theta-1)(\lambda_2^2-\lambda_1)\bar{x}} \label{b2} \end{align} and \begin{align} b_3=\frac{b_2}{(\lambda_2^3-\lambda_1)\bar{x}}\left[\lambda_1-\lambda_2^2+\frac{1+2\beta\bar{x}}{\theta\bar{x}}\left(\frac{1}{\lambda_2}+\frac{\lambda_2}{\lambda_1}\right)-\frac{\lambda_2}{\lambda_1\bar{x}}\right]. \label{b3} \end{align} Truncated at third order, the local stable manifold is the graph of the map $\psi(\eta)=b_2\eta^2+b_3\eta^3.$ Since $\xi_n=b_2\eta_n^2+b_3\eta_n^3,$ using \eqref{uvp} and $u_n=x_{n-1}-\bar{x},$ $v_n=x_n-\bar{x},$ we can locally approximate the stable manifold $W_{loc}^s$ of \eqref{main} as the graph of $\tilde{\psi}(y)$ such that $S(\tilde{\psi}(y), y)=0$ where \begin{align} S(x,y):=\gamma_1(x-\bar{x})+\gamma_3(y-\bar{x}) -b_2\left[\gamma_1(x-\bar{x})-\gamma_2(y-\bar{x})\right]^2 +b_3\left[\gamma_1(x-\bar{x})-\gamma_2(y-\bar{x})\right]^3. \label{manS} \end{align} It is easy to see that the function $\tilde{\psi}$ satisfies \begin{align*} \tilde{\psi}(\bar{x})=\bar{x} \quad \textnormal {and} \quad \tilde{\psi}'(\bar{x})=\frac{2\bar{x}}{\theta-1}. \end{align*} Thus, we have proved the following theorem: \begin{theorem} \label{thmsman} The local stable manifold of \eqref{main} corresponding to the saddle point $\bar{x}$ has the asymptotic equation $S(\tilde{\psi}(y),y)=0$ where $S(x,y)$ is given by \eqref{manS}. \end{theorem} \section{Normal form and invariant manifold of the map $T^2$} \subsection{Normal Form} For the map $T$ given by \eqref{t}, with $\alpha=1$ (the case of the period-two solutions considered below), one has \begin{align}\label{t2} T^2\begin{pmatrix} y \\ z \end{pmatrix} = \begin{pmatrix} 1+\beta y + y/z \\ 1+\beta z + \frac{z}{1+\beta y + y/z} \end{pmatrix}. \end{align} That is, \begin{align} \begin{pmatrix} y_{n+2} \\ z_{n+2} \end{pmatrix} = T^2\begin{pmatrix} y_n \\ z_n \end{pmatrix}.
\label{yzt2} \end{align} Firstly, we note that the fixed point $(\bar{x},\bar{x})$ of $T$ is also a fixed point of $T^2.$ That is why, in this section, the fixed point $(\bar{x},\bar{x})$ of $T^2$ is ignored, and the main focus will be on the other fixed points. As it was shown in \cite{at}, when $\alpha=1,$ equation \eqref{main} has infinitely many 2-periodic solutions, each of which corresponds to a fixed point of $T^2.$ Indeed, if $\Phi>1/(1-\beta),$ then all the fixed points of $T^2$ are given by $(\Phi, \Psi)$ where $\Psi=\Phi/[(1-\beta)\Phi-1].$ It is worth mentioning that $\Psi>1/(1-\beta)$ and for the initial conditions $x_{-1}=\Phi$ and $x_0=\Psi,$ the solution of \eqref{main} is $\{\Phi, \Psi, \Phi, \Psi, \ldots\}.$ Swapping the initial values produces the periodic solution $\{\Psi, \Phi, \Psi, \Phi, \ldots\}.$ As in the previous section, the fixed point $(\Phi, \Psi)$ will be transformed to the origin. For this, let $u=y-\Phi$ and $v=z-\Psi.$ Then, we get the map \begin{align} \label{F0} F_0\begin{pmatrix} u \\ v \end{pmatrix} :=T^2\begin{pmatrix} u+\Phi \\ v+\Psi \end{pmatrix} -\begin{pmatrix} \Phi \\ \Psi \end{pmatrix} =\begin{pmatrix} \beta u + \frac{u + \Phi}{v+\Psi} - \frac{\Phi}{\Psi}\\ \beta v + \frac{(v + \Psi)^2}{v+\Psi+(u+\Phi)(1+\beta v+\beta\Psi)} - \frac{\Psi}{\Phi} \end{pmatrix}, \end{align} for which \eqref{yzt2} can be written as \begin{align} \begin{pmatrix} u_{n+2} \\ v_{n+2} \end{pmatrix} = F_0\begin{pmatrix} u_n \\ v_n \end{pmatrix}. \label{unvn} \end{align} It is easy to see that $(0,0)$ is a fixed point of $F_0$ and the Jacobian of $F_0$ at this fixed point is \begin{align}\label{J0} J_0=\begin{pmatrix} \beta+\frac{1}{\Psi} & -\frac{\Phi}{\Psi^2} \\ -\frac{\beta \Psi+1}{\Phi^2} & \beta+\frac{1}{\Phi}+\frac{1}{\Psi\Phi} \end{pmatrix}.
\end{align} Using the relation $\frac{1}{\Phi}+\frac{1}{\Psi}=1-\beta,$ one can rewrite \eqref{J0} as \begin{align*} J_0=\begin{pmatrix} 1-\frac{1}{\Phi} & -\frac{\Phi}{\Psi^2} \\ \frac{\Psi(1-\Phi)}{\Phi^3} & 1-\frac{1}{\Psi}+\frac{1}{\Psi\Phi} \end{pmatrix}. \end{align*} from which it can be derived that the eigenvalues of $J_0$ are \begin{align*} \lambda_{01}=\left(1-\frac{1}{\Phi}\right)\left(1-\frac{1}{\Psi}\right) \quad \text{and} \quad \lambda_{02}=1. \end{align*} Since $0<\lambda_{01}<1$ and $\lambda_{02}=1,$ the fixed point $(\Phi, \Psi)$ is stable. The eigenvectors corresponding to the eigenvalues $\lambda_{01}$ and $\lambda_{02}$ are \begin{align} {\bf v}_{01}= \left(\frac{\Phi^2}{(\Phi-1)\Psi} \:, \: \: 1\right)^T \quad \text{and} \quad {\bf v}_{02}= \left(-\frac{\Phi^2}{\Psi^2} \:, \:\: 1\right)^T, \label{ev0} \end{align} respectively. Thus, the map obtained in \eqref{F0} can be written as \begin{align} \label{F02} F_0\begin{pmatrix} u \\ v \end{pmatrix} =J_0 \cdot \begin{pmatrix} u \\ v \end{pmatrix} +H_0\begin{pmatrix} u \\ v \end{pmatrix} \end{align} where \begin{align*} H_0\begin{pmatrix} u \\ v \end{pmatrix} =\begin{pmatrix} \frac{v(\Phi v-\Psi u)}{\Psi^2(v+\Psi)} \\ \frac{(v + \Psi)^2}{v+\Psi+(u+\Phi)(1+\beta v+\beta\Psi)} + \frac{(\beta\Psi+1)u}{\Phi^2} - \frac{(\Psi+1)v}{\Psi\Phi} - \frac{\Psi}{\Phi} \end{pmatrix}. \end{align*} Therefore, \eqref{yzt2} is equivalent to \begin{align} \begin{pmatrix} u_{n+2} \\ v_{n+2} \end{pmatrix} = \begin{pmatrix} 1-\frac{1}{\Phi} & -\frac{\Phi}{\Psi^2} \\ \frac{\Psi(1-\Phi)}{\Phi^3} & 1-\frac{1}{\Psi}+\frac{1}{\Psi\Phi} \end{pmatrix} \begin{pmatrix} u_n \\ v_n \end{pmatrix} + H_0 \begin{pmatrix} u_n \\ v_n \end{pmatrix}. \label{unvn2} \end{align} Set $P_0=({\bf v}_{01} \: {\bf v}_{02}),$ where ${\bf v}_{01}$ and ${\bf v}_{02}$ are given by \eqref{ev0}, and let \begin{align} \label{uvp0} \begin{pmatrix} u \\ v \end{pmatrix} = P_0\cdot \begin{pmatrix} \xi \\ \eta \end{pmatrix}. 
\end{align} Then, \eqref{unvn2} leads to \begin{align} \begin{pmatrix} \xi_{n+2} \\ \eta_{n+2} \end{pmatrix} = \begin{pmatrix} \lambda_{01} & 0 \\ 0 & \lambda_{02} \end{pmatrix} \begin{pmatrix} \xi_n \\ \eta_n \end{pmatrix} + \begin{pmatrix} f_0(\xi_n, \eta_n) \\ g_0(\xi_n, \eta_n) \end{pmatrix} \label{norm0} \end{align} where \begin{align} \label{f0g0} \begin{array}{l} f_0(\xi, \eta)= \displaystyle \frac{\Phi-1}{\Psi+\Phi-1}\left( \zeta-\frac{\xi}{\Phi\Psi}-\frac{\eta(\Phi+\Psi)}{\Phi(\xi+\eta+\Psi)}+\frac{(\xi+\eta)\xi}{\Phi(1-\Phi)(\xi+\eta+\Psi)}\right), \\ g_0(\xi, \eta)= \displaystyle \frac{\Psi}{\Psi+\Phi-1}\left(\zeta-\frac{\eta(\Phi+\Psi)}{\Phi\Psi}-\frac{\xi}{\Phi(\xi+\eta+\Psi)}+\frac{(1-\Phi)(\Phi+\Psi)(\xi+\eta)\eta}{\Phi\Psi^2(\xi+\eta+\Psi)}\right), \end{array} \end{align} and $$\zeta=\frac{(\xi+\eta+\Psi)^2}{\xi+\eta+\Psi+\left(\frac{\Phi^2\xi}{(\Phi-1)\Psi}-\frac{\Phi^2\eta}{\Psi^2}+\Phi\right)(1+\beta \xi+\beta\eta+\beta\Psi)}-\frac{\Psi}{\Phi}.$$ System \eqref{norm0} is the normal form of \eqref{yzt2}. \subsection{Stable set of 2-periodic solution $\{(\Phi, \Psi), (\Psi, \Phi)\}$} Since $0<\lambda_{01}<1$ and $\lambda_{02}=1,$ by Theorem \ref{thmc}, there is an invariant curve ${\mathcal C}$ (called the center manifold) which is the graph of an analytic map $h$ such that $h(0)=h'(0)=0.$ Let \begin{align*} h(\xi)=c_2\xi^2+c_3\xi^3+O(\xi^4), \quad c_2, c_3 \in{\mathbb R}. \end{align*} Now, we shall compute $c_2$ and $c_3.$ The function $h$ must satisfy \begin{align} h(\lambda_{01}\xi+f_0(\xi, h(\xi)))=h(\xi)+g_0(\xi,h(\xi)) \label{cman} \end{align} where $\lambda_{01}=\left(1-\frac{1}{\Phi}\right)\left(1-\frac{1}{\Psi}\right)$ and $f_0$ and $g_0$ are given in \eqref{f0g0}.
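Two quick numerical sanity checks on this setup can be sketched in Python, with sample values of $\Phi$ and $\beta$ (illustrative only): first, that the pair $(\Phi,\Psi)$ with $\Psi=\Phi/[(1-\beta)\Phi-1]$ really is an exact period-two orbit of \eqref{main} when $\alpha=1$; second, that $\lambda_{01}=(1-1/\Phi)(1-1/\Psi)$ and $\lambda_{02}=1$ are the eigenvalues of $J_0,$ via its trace and determinant.

```python
phi, beta = 2.94, 0.5
psi = phi / ((1 - beta) * phi - 1)  # the second point of the 2-cycle

# (1) exact 2-cycle of x_{n+1} = 1 + beta*x_{n-1} + x_{n-1}/x_n (alpha = 1)
step = lambda x_prev, x_cur: 1.0 + beta * x_prev + x_prev / x_cur
x1 = step(phi, psi)  # should reproduce phi
x2 = step(psi, x1)   # should reproduce psi
print(abs(x1 - phi), abs(x2 - psi))  # both ~0 (machine precision)

# (2) eigenvalues of J_0 (simplified form using 1/phi + 1/psi = 1 - beta)
a11 = 1 - 1 / phi
a12 = -phi / psi**2
a21 = psi * (1 - phi) / phi**3
a22 = 1 - 1 / psi + 1 / (psi * phi)
lam01 = (1 - 1 / phi) * (1 - 1 / psi)
# eigenvalues {lam01, 1}  <=>  trace = 1 + lam01 and det = lam01
print(abs(a11 + a22 - (1 + lam01)), abs(a11 * a22 - a12 * a21 - lam01))
```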
Rewriting \eqref{cman} as a polynomial equation in $\xi$ and equating the coefficients of $\xi^2$ and $\xi^3$ to 0, we obtain \begin{align}\label{c2} c_2=\frac{\Phi}{(1-\Phi)(\Phi+\Psi-1)(2\Phi\Psi-\Phi-\Psi+1)} \end{align} and \begin{align}\label{c3} c_3=\frac{\Phi^4\kappa_1+\Phi^3\kappa_2-\Phi^2\kappa_3+\kappa_4\Phi} {\Psi(\Psi+\Phi-1)(1-\Phi)\kappa_5}\, c_2 \end{align} where \begin{align*} \kappa_1&=3\Psi^2-4\Psi+1\\ \kappa_2&=3\Psi^3-12\Psi^2+9\Psi-1\\ \kappa_3&=5\Psi^3-14\Psi^2+6\Psi+1\\ \kappa_4&=2\Psi^3-4\Psi^2+\Psi+1\\ \kappa_5&=\Phi^2(3\Psi^2-3\Psi+1)-\Phi(3\Psi^2-5\Psi+2)+(\Psi-1)^2. \end{align*} Using $\eta_{2n}=c_2\xi_{2n}^2+c_3\xi_{2n}^3,$ the relation \eqref{uvp0} together with $u_{2n}=x_{2n-2}-\Phi$ and $v_{2n}=x_{2n}-\Psi,$ we can locally approximate the invariant curve ${\mathcal C}$ as the graph of $\tilde{h}(x)$ such that $C(x, \tilde{h}(x))=0$ where \begin{align*} C(x,y;\Phi):=\delta_1(x-\Phi)-\delta_2(y-\Psi) +c_2\left[\delta_1(x-\Phi)+\delta_3(y-\Psi)\right]^2 +c_3\left[\delta_1(x-\Phi)+\delta_3(y-\Psi)\right]^3, \end{align*} in which \begin{align} \label{del} \delta_1=\frac{\Psi^2(\Phi-1)}{\Phi^2(\Phi+\Psi-1)},\quad \delta_2=\frac{\Psi}{\Phi+\Psi-1} \quad \text{and} \quad \delta_3=\frac{\Phi-1}{\Phi+\Psi-1}. \end{align} It is easy to see that the function $\tilde{h}(x)$ satisfies \begin{align*} \tilde{h}(\Phi)=\Psi \quad \textnormal {and} \quad \tilde{h}'(\Phi)=\frac{\Psi(\Phi-1)}{\Phi^2}.
\end{align*} Thus, we have proved the following theorem: \begin{theorem} \label{thmcman} Let $\Phi>1/(1-\beta)$ and $\Psi=\Phi/[(1-\beta)\Phi-1].$ Then corresponding to the non-hyperbolic period-two solution $\{(\Phi, \Psi), (\Psi, \Phi)\},$ there is an invariant curve which is the union of two curves that are locally given with the asymptotic expansions $C(x,\tilde{h}(x);\Phi)=0$ and $C(x,\tilde{h}(x);\Psi)=0.$ \end{theorem} \section{Numerical Examples} In this section, we construct some illustrative examples supporting the theoretical results of this article. To compare the current results with those given in \cite{kul}, we first take the same parameter values as in \cite{kul}. \begin{example} For $\alpha=0.2,$ $\beta=0,$ which is the case $p=0.2$ in \cite[Section 3.4]{kul}, we have \begin{align*} U_1(x,y)&= -0.4152273992 x + 0.8491364395 - 0.2923863004 y \\ & \qquad +0.2419777563(-0.4152273992 x - 0.3508635604 + 0.7076136995 y)^2\\ & \qquad -0.0974600586(-0.4152273992 x - 0.3508635604 + 0.7076136995 y)^3,\\ S_1(x,y)&=-0.4152273992 x - 0.3508635604 + 0.7076136995 y \\ & \qquad +0.1961061968(-0.4152273992 x + 0.8491364395 - 0.2923863004 y)^2\\ & \qquad +0.09806508071(-0.4152273992 x + 0.8491364395 - 0.2923863004 y)^3. \end{align*} and, for $\alpha=0.8,$ $\beta=0,$ we have \begin{align*} U_2(x,y)&= -0.3492151478 x + 1.214293633 -0.3253924261 y \\ & \qquad +0.3059452562(-0.3492151478 x - 0.5857063670 + 0.6746075740y)^2 \\ & \qquad -0.1066716833(-0.3492151478 x - 0.5857063670 + 0.6746075740y)^3, \\ S_2(x,y)&= -0.3492151478 x - 0.5857063670 + 0.6746075740 y \\ & \qquad + 0.1446549340(-0.3492151478 x + 1.214293633 - 0.3253924261y)^2 \\ & \qquad + 0.0525187072(-0.3492151478 x + 1.214293633 - 0.3253924261y)^3. \end{align*} Figure \ref{fig1} shows the graphs of the functions $U_1(x,y)=0,$ $S_1(x,y)=0,$ $U_2(x,y)=0$ and $S_2(x,y)=0$ together with a typical trajectory. As can be seen, the trajectory follows the unstable manifold in both cases.
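As a quick consistency check on the printed expansions, each curve must pass through the fixed point $\bar{x}=(1+\alpha)/(1-\beta),$ i.e.\ $\bar{x}=1.2$ for $\alpha=0.2,$ $\beta=0.$ A short Python sketch evaluating $U_1$ and $S_1$ there (the residual is nonzero only because the coefficients are printed to ten digits):

```python
def u1(x, y):
    # U_1(x, y) from the example above (alpha = 0.2, beta = 0)
    q = -0.4152273992 * x - 0.3508635604 + 0.7076136995 * y
    return (-0.4152273992 * x + 0.8491364395 - 0.2923863004 * y
            + 0.2419777563 * q**2 - 0.0974600586 * q**3)

def s1(x, y):
    # S_1(x, y) from the same example
    q = -0.4152273992 * x + 0.8491364395 - 0.2923863004 * y
    return (-0.4152273992 * x - 0.3508635604 + 0.7076136995 * y
            + 0.1961061968 * q**2 + 0.09806508071 * q**3)

x_bar = (1 + 0.2) / (1 - 0.0)  # = 1.2
print(u1(x_bar, x_bar), s1(x_bar, x_bar))  # both ~0 up to the printed rounding
```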
\begin{figure}[!ht] \begin{center} \subfigure[Graphs of $U_1(x,y)=0$ (blue) and $S_1(x,y)=0$ (red) for $\alpha=0.2, \beta=0$]{ \resizebox*{60mm}{!}{\includegraphics{fig1a.eps}}}\hspace{8mm} \subfigure[Graphs of $U_2(x,y)=0$ (blue) and $S_2(x,y)=0$ (red) for $\alpha=0.8, \beta=0$]{ \resizebox*{60mm}{!}{\includegraphics{fig1b.eps}}} \caption{\label{fig1} Graphs of stable and unstable manifolds together with a typical trajectory for different values of $\alpha$ and $\beta.$} \end{center} \end{figure} \end{example} We would like to note here that the functions $U_1(x,y),$ $S_1(x,y),$ $U_2(x,y)$ and $S_2(x,y)$ obtained in this example are not the same as those given in \cite{kul}. However, they are constant multiples of each other, and hence, the manifolds provided here and given in \cite{kul} are the same. So, we recover the results given in \cite{kul} by taking $\beta=0.$ \begin{example} As another example, let us keep $\alpha$ the same as in the previous example but change $\beta.$ For $\alpha=0.2,$ $\beta=0.5,$ we have \begin{align*} U_3(x,y)&= -0.4152273992 x + 0.8491364395 - 0.2923863004 y \\ & \qquad +0.2419777563(-0.4152273992 x - 0.3508635604 + 0.7076136995 y)^2\\ & \qquad -0.0974600586(-0.4152273992 x - 0.3508635604 + 0.7076136995 y)^3,\\ S_3(x,y)&=-0.4152273992 x - 0.3508635604 + 0.7076136995 y \\ & \qquad +0.1961061968(-0.4152273992 x + 0.8491364395 - 0.2923863004 y)^2\\ & \qquad +0.09806508071(-0.4152273992 x + 0.8491364395 - 0.2923863004 y)^3. \end{align*} and, for $\alpha=0.8,$ $\beta=0.5,$ we have \begin{align*} U_4(x,y)&= -0.3492151478 x + 1.214293633 -0.3253924261 y \\ & \qquad +0.3059452562(-0.3492151478 x - 0.5857063670 + 0.6746075740y)^2 \\ & \qquad -0.1066716833(-0.3492151478 x - 0.5857063670 + 0.6746075740y)^3, \\ S_4(x,y)&= -0.3492151478 x - 0.5857063670 + 0.6746075740 y \\ & \qquad + 0.1446549340(-0.3492151478 x + 1.214293633 - 0.3253924261y)^2 \\ & \qquad + 0.0525187072(-0.3492151478 x + 1.214293633 - 0.3253924261y)^3.
\end{align*} Figure \ref{fig2} shows the graphs of the functions $U_3(x,y)=0,$ $S_3(x,y)=0,$ $U_4(x,y)=0$ and $S_4(x,y)=0$ together with a typical trajectory. As can be seen, the trajectory follows the unstable manifold in both cases. \begin{figure}[!ht] \begin{center} \subfigure[Graphs of $U_3(x,y)=0$ (blue) and $S_3(x,y)=0$ (red) for $\alpha=0.2, \beta=0.5$]{ \resizebox*{60mm}{!}{\includegraphics{fig2a.eps}}}\hspace{8mm} \subfigure[Graphs of $U_4(x,y)=0$ (blue) and $S_4(x,y)=0$ (red) for $\alpha=0.8, \beta=0.5$]{ \resizebox*{60mm}{!}{\includegraphics{fig2b.eps}}} \caption{\label{fig2} Graphs of stable and unstable manifolds together with a typical trajectory for different values of $\alpha$ and $\beta.$} \end{center} \end{figure} \end{example} \begin{example} In this example, let us take the parameters as in \cite{kul}: $\alpha=1,$ $\beta=0$ and $\Phi=2.94.$ In this case, we have $\Psi=1.515463918$ and \begin{align*} C_1(x,y;\Phi)&=0.1491735785x+0.2260671754-0.4385703205y\\ &\qquad -0.08039102209(0.1491735785x-1.289396743+0.5614296795y)^2\\ &\qquad +0.01997063483(0.1491735785x-1.289396743+0.5614296795y)^3 \end{align*} and \begin{align*} C_1(x,y;\Psi)&=0.5614296798x+1.650603257-0.8508264215y \\ &\qquad -0.1559585827(0.5614296798x-1.289396743+0.1491735785y)^2 \\ &\qquad +0.05514400545(0.5614296798x-1.289396743+0.1491735785y)^3. \end{align*} For $\Phi=2.3,$ we have $\Psi=1.769230769$ and \begin{align*} C_2(x,y;\Phi)&=0.2506265664x+0.4434162323-0.5764411027y\\ &\qquad -0.1137137228(0.2506265664x-1.325814536+0.4235588973y)^2\\ &\qquad +0.03453170706(0.2506265664x-1.325814536+0.4235588973y)^3 \end{align*} and \begin{align*} C_2(x,y;\Psi)&=0.4235588973x+0.9741854634-0.7493734336y\\ &\qquad -0.1478278397(0.4235588973x-1.325814536+0.2506265664y)^2\\ &\qquad +0.0520650698(0.4235588973x-1.325814536+0.2506265664y)^3.
\end{align*} Figure \ref{fig3} shows the graphs of the functions $C_1(x,y;\Phi)=0,$ $C_1(x,y;\Psi)=0$ and $C_2(x,y;\Phi)=0,$ $C_2(x,y;\Psi)=0,$ together with typical trajectories. As can be seen, the trajectory follows the invariant manifold in both cases. \begin{figure}[!ht] \begin{center} \subfigure[Graphs of $C_1(x,y;\Phi)=0$ (blue) and $C_1(x,y;\Psi)=0$ (red) for $\Phi=2.94.$]{ \resizebox*{60mm}{!}{\includegraphics{fig3a.eps}}}\hspace{8mm} \subfigure[Graphs of $C_2(x,y;\Phi)=0$ (blue) and $C_2(x,y;\Psi)=0$ (red) for $\Phi=2.3.$]{ \resizebox*{60mm}{!}{\includegraphics{fig3b.eps}}} \caption{\label{fig3} Graphs of invariant curves together with typical trajectories and periodic solutions for different values of $\Phi$ and $\beta=0.$} \end{center} \end{figure} \end{example} \begin{example} As a final example, let $\alpha=1,$ $\beta=0.5.$ For $\Phi=2.94,$ we have $\Psi=6.255319149$ and \begin{align*} C_3(x,y;\Phi)&=1.071618354x+1.623998947-0.7632795057y\\ &\qquad -0.006468848599(1.071618354x-4.631320202+0.2367204943y)^2\\ &\qquad +0.001026052614(1.071618354x-4.631320202+0.2367204943y)^3 \end{align*} and \begin{align*} C_3(x,y;\Psi)&=0.1416540319x+0.1686084427-0.3587413677y\\ &\qquad -0.005080796064(0.1416540319x-2.771391557+0.6412586323y)^2\\ &\qquad +0.001395071806(0.1416540319x-2.771391557+0.6412586323y)^3. \end{align*} For $\Phi=2.3,$ we have $\Psi=15.33333333$ and \begin{align*} C_4(x,y;\Phi)&=3.473613893x+6.145624586-0.9218436874y\\ &\qquad -0.001973405924(3.473613893x-9.187708748+0.07815631264y)^2\\ &\qquad +0.000140325572(3.473613893x-9.187708748+0.07815631264y)^3 \end{align*} and \begin{align*} C_4(x,y;\Psi)&=0.01938877756x+0.0207414829-0.1382765531y\\ &\qquad -0.001193222187(0.01938877756x-2.279258517+0.8617234469y)^2\\ &\qquad +0.0003847285557(0.01938877756x-2.279258517+0.8617234469y)^3.
\end{align*} Figure \ref{fig4} shows the graphs of the functions $C_3(x,y;\Phi)=0,$ $C_3(x,y;\Psi)=0$ and $C_4(x,y;\Phi)=0,$ $C_4(x,y;\Psi)=0,$ together with typical trajectories. As can be seen, the trajectory follows the invariant manifold in both cases. \begin{figure}[!ht] \begin{center} \subfigure[Graphs of $C_3(x,y;\Phi)=0$ (blue) and $C_3(x,y;\Psi)=0$ (red) for $\Phi=2.94.$]{ \resizebox*{60mm}{!}{\includegraphics{fig4a.eps}}}\hspace{8mm} \subfigure[Graphs of $C_4(x,y;\Phi)=0$ (blue) and $C_4(x,y;\Psi)=0$ (red) for $\Phi=2.3.$]{ \resizebox*{60mm}{!}{\includegraphics{fig4b.eps}}} \caption{\label{fig4} Graphs of invariant curves together with typical trajectories and periodic solutions for different values of $\Phi$ and $\beta=0.5.$} \end{center} \end{figure} \end{example} \bigskip
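The values of $\Psi$ appearing in the examples above are fixed by the relation $\Psi=\Phi/[(1-\beta)\Phi-1]$ of Theorem \ref{thmcman}. As a quick sanity check of the quoted decimals, independent of the manifold computations, one may run the following minimal Python sketch:

```python
# Sanity check of Psi = Phi / ((1 - beta) * Phi - 1) against the
# decimal values quoted in the examples (Phi and beta as in the text).

def psi(phi, beta):
    """Second component of the period-two solution of the theorem."""
    assert phi > 1.0 / (1.0 - beta)   # hypothesis Phi > 1/(1 - beta)
    return phi / ((1.0 - beta) * phi - 1.0)

cases = [
    # (Phi, beta, Psi as quoted in the text)
    (2.94, 0.0, 1.515463918),
    (2.30, 0.0, 1.769230769),
    (2.94, 0.5, 6.255319149),
    (2.30, 0.5, 15.33333333),
]

for phi, beta, quoted in cases:
    value = psi(phi, beta)
    assert abs(value - quoted) < 1e-8, (phi, beta, value)
    print(f"Phi = {phi}, beta = {beta}:  Psi = {value:.9f}")
```

All four quoted values are reproduced to the displayed precision.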
\section{Introduction} In the experimental searches for new physics of the fundamental interactions at the LHC, the resolution of several open issues - which remain unanswered within the Standard Model (SM) - typically calls for larger gauge structures and a wider particle content. Such are the gauge-hierarchy problem in the Higgs sector and the origin of light-neutrino masses, to mention just two of them, both requiring such extensions. Grand Unified Theories (GUTs) certainly play an important role in this: unfortunately, Grand Unification requires an energy scale ${\cal O}(10^{12}$-$10^{15})$~GeV, which is far higher than the electroweak one, currently probed at the LHC. Obviously, the process of identifying the main signatures of a certain symmetry-breaking pattern from the GUT scale down to the TeV scale is far from trivial, due to the enlarged symmetries and to a parameter space of considerable size. There are, however, interesting scenarios where larger gauge symmetries can be discovered or ruled out already at the TeV scale, which suggests some alternative paths of exploration. Such is the case of a version of the $SU(3)_c\times SU(3)_L\times U(1)_X$ model, also known as the 331 model \cite{PHF,PP,Valle}, where the requirement that the gauge couplings be real provides a significant upper bound on the physical region in which its signal could be searched for. This property sets the vacuum expectation values (vevs) of the Higgs bosons, which trigger the breaking from the 331-symmetry scale down to the electroweak one, at around the TeV scale. The model that we consider allows for bileptons, i.e. gauge bosons $(Y^{--}, Y^{++})$ of charge $Q=\pm 2$ and lepton number $L=\pm 2$, and therefore we shall refer to it as the {\em bilepton model}. In the family of 331 models, bileptons in the spectrum are obtained only for special embeddings of the $U(1)_X$ symmetry and of the charge $(Q)$ and hypercharge $(Y)$ generators in the local gauge structure.
One additional feature of the model is that, unlike the Standard Model or most chiral models formulated so far to extend the SM spectrum and symmetries, the number of (chiral) fermion generations is dictated by the cancellation of the gauge anomalies. Gauge anomalies cancel among different fermion families and select the number of generations to be 3: from this perspective, the model appears to be quite unique. Moreover, in the formulation of \cite{PHF}, which we shall adopt hereafter, the third fermion family is treated asymmetrically with respect to the first two families. In a previous analysis \cite{cccf} we presented results for the production of pairs of vector bileptons ($Y^{++}Y^{--}$) decaying into two same-sign lepton pairs, in conjunction with two jets at the LHC, at $\sqrt{s}=13$~TeV and relying on a full Monte Carlo implementation of the 331 model. Our study was based on the selection of a specific benchmark point in the parameter space, where the $Y$ bileptons have mass $m_Y\simeq 875$~GeV. We showed that, by setting appropriate cuts on the rapidities and transverse momenta of the final-state leptons and jets, it is possible to suppress the Standard Model background. By including jets in the final state, one obtains a larger signal/background ratio, but one nonetheless has to face the issue of jet reconstruction. In this paper, we wish to extend the investigation in \cite{cccf}. In fact, in \cite{cccf} the explicit mass matrices of the scalars in a minimal version of the model, containing only 3 $SU(3)_L$ triplets, as well as the minimization conditions of the potential, were presented. However, the model also allows for doubly-charged Higgs-like scalars $(H^{++}, H^{--})$, which may give rise to multi-lepton final states, in the same manner as the vectors $Y^{\pm\pm}$ discussed in \cite{cccf}.
We shall therefore explore the production of same-sign lepton pairs at the LHC, mediated by both scalar and vector bileptons; unlike Ref.~\cite{cccf}, we shall veto final-state jets. The production of doubly-charged vector bilepton pairs at the LHC in jetless Drell--Yan processes was already investigated in \cite{dion,barreto,alves,nepo}. In particular, the authors of \cite{nepo} implemented the bilepton model in a full Monte Carlo simulation framework and obtained the exclusion limit $m_{Y^{\pm\pm}}>850$~GeV, by using the ATLAS rates at $\sqrt{s}=7$~TeV \cite{atlasold} on expected and observed high-mass same-sign lepton pairs and extending the results to 13 TeV and $\mathcal{L}=50~{\rm fb}^{-1}$. However, the analysis in \cite{nepo} assumes that the same-sign lepton-pair yields are independent of the bilepton spin: in fact, the ATLAS analysis in \cite{atlasold} was performed for scalar doubly-charged Higgs bosons, while the predictions in \cite{nepo} concerned vector bileptons. It is precisely the goal of the present exploration to understand whether, referring to the 331 realization in \cite{PHF}, one can separate vector from scalar bileptons at the LHC, in different luminosity regimes. A careful investigation of the production of doubly-charged particles at the LHC and of the dependence of final-state distributions on the spin was undertaken in \cite{fuks1}. The authors of \cite{fuks1} considered an effective simplified model where the SM is extended by means of an $SU(2)_L$ group and the scalar, fermion and vector doubly-charged particles lie in the trivial, fundamental and adjoint representations of $SU(2)_L$, respectively. By looking at transverse-momentum and angular distributions and varying the bilepton mass between 150 and 350 GeV, it was found that it could be possible to distinguish the particle spin at the LHC.
Compared to our previous study, in the present paper we shall explore a complete version of the model, which includes both the $SU(3)_L$ triplet Higgses and the newly added scalar sextet sector, necessary in order to account for the masses of the leptons. The inclusion of the scalar sextet opens up the decay channels $H^{\pm\pm}\to l^\pm l^\pm$, which compete with the companion $Y^{\pm\pm}\to l^\pm l^\pm$ process. Many of the analytic expressions describing the sextet contributions, such as the rotation matrices appearing in the extraction of the mass eigenstates of this sector, cannot be given in closed form, since they would be too lengthy. As in \cite{cccf}, we shall choose a benchmark point of the model and present our results numerically: in particular, as we are interested in comparing vector- and scalar-bilepton signals, we shall set the doubly-charged $Y^{++}$ and $H^{++}$ masses to the same value. Vector- and scalar-bilepton production at hadron colliders was also explored in \cite{ramirez}, where the authors presented the total cross sections, the expected numbers of events at the LHC, and the invariant-mass and transverse-momentum spectra for a few values of the bilepton mass. The decay properties of doubly-charged Higgs bosons in a minimal 331 model were also studied in \cite{tonasse}. It was found that, since the coupling to leptons is proportional to the lepton mass, such scalar bileptons mostly decay into $\tau$-lepton pairs. Also, according to \cite{tonasse}, the rate into $WW$ pairs is suppressed, being proportional to the vacuum expectation value of the Higgs giving mass to the neutrinos; decays into leptons of different flavours are suppressed as well, since the Yukawa couplings are diagonal. While the investigation in \cite{ramirez} was performed at leading order, by using the \texttt{FORM} package \cite{form} to calculate the amplitudes, we shall undertake a full hadron-level investigation.
We will implement the 331 model, including the sextet sector, into \texttt{SARAH 4.9.3} \cite{sarah}, while the amplitudes for bilepton production at the LHC will be computed by the \texttt{MadGraph} code \cite{madgraph}; the simulation of parton showers and hadronization will be performed by using \texttt{HERWIG} 6 \cite{herwig}. Also, as will be thoroughly discussed later on, in our model doubly-charged scalar bileptons decay into all lepton-flavour pairs with branching ratios of 1/3, unlike Ref.~\cite{tonasse}, wherein the $\tau$-pair mode had the largest rate. From the experimental viewpoint, to our knowledge, there has been no actual search for vector bileptons at the LHC, whereas the latest investigations on possible doubly-charged scalar Higgs boson production at the LHC were undertaken in \cite{atlashh} and \cite{cmshh} by ATLAS and CMS, respectively. In detail, the ATLAS analysis, performed at 13 TeV and with $36~{\rm fb}^{-1}$ of data, considered the so-called left-right symmetric model (LRSM, see, e.g., Refs.~\cite{pati1,pati2}) and its numerical implementation in \cite{spira}, where doubly-charged Higgs bosons can couple to either left-handed or right-handed leptons. In this framework, assuming that the $H^{\pm\pm}$ bosons only decay into lepton pairs, exclusion limits were set in the range 770-870 GeV for $H^{\pm\pm}_L$ and 660-760 GeV for $H^{\pm\pm}_R$. As for CMS, a luminosity of $12.9~{\rm fb}^{-1}$ was taken into account and limits between 800 and 820 GeV were determined, always under the assumption of a 100\% branching ratio into same-sign lepton pairs. Our paper is organized as follows. In Section 2, we shall discuss the family embedding in the minimal 331 model, while Section 3 will be more specific on its scalar content, giving details on the triplet and sextet sectors, as well as on the lepton masses and physical Higgs bosons. Our phenomenological analysis will be presented in Section 5 and final comments and remarks will be given in Section 6.
\section{Family embedding in the minimal 331} One of the main reasons for the appearance of exotic particles in the spectrum of the 331 model is the specific embedding of the hypercharge $Y$ in the $SU(3)_L\times U(1)_X$ gauge symmetry. The embeddings of $Y$ and of the charge operator $Q_{em}$ are obtained by defining them as linear combinations of the diagonal generators of $SU(3)_L$. We recall that in the 331 case this is defined by \begin{equation} {Y}_{\bf 3} =\beta T_8 + X \mathbf{1} \qquad {Y}_{\bf \bar{3}} =-\beta T_8 + X \mathbf{1} \end{equation} for the ${\bf 3}$ and the $\bar{\bf 3}$ representations of $SU(3)_L$, respectively, with generators $T_i=\lambda_i/2$ ($i=1,\ldots,8$), corresponding to the Gell-Mann matrices, and $T_8=\textrm{diag}\left[ \frac{1}{2 \sqrt{3}}( 1,1,-2)\right]$. The charge operator is given by \begin{equation} Q_{em, {\bf 3}}= Y_{\bf{3}} + T_3 \qquad Q_{em, \bar{\bf 3}}= Y_{\bar{\bf 3}} - T_3 \end{equation} in the fundamental and anti-fundamental representations of $SU(3)_L$, respectively, where $T_3={\rm diag}\left[ \frac{1}{2} (1,-1,0)\right]$. We choose the $SU(2)_L\times U(1)_Y$ hypercharge assignments of the Standard Model as $Y(Q_L)=1/6$, $Y(L)=-1/2$, $Y(u_R)=2/3$, $Y(d_R)=-1/3$ and $Y(e_R)=-1$. Denoting by $q_X$ the particle charges under $U(1)_X$, the breaking of the symmetry $SU(3)_L\times U(1)_X \to SU(2)_L\times U(1)_Y$, for the fundamental representation {\bf 3}, reads \begin{equation} ({\bf 3}, q_X) \to \left(2,\frac{\beta}{ 2 \sqrt{3}} + q_X\right) +\left(1, -\frac{\beta}{\sqrt{3}} + q_X\right), \end{equation} while for the representation ${\bf \bar{3}}$ \begin{equation} ({\bf\bar{3}}, q'_X) \to \left(2,-\frac{\beta}{2\sqrt{3}} + q'_X\right) +\left(1, +\frac{\beta}{\sqrt{3}} + q'_X\right).
\end{equation} The $X$-charge is fixed by the condition that the first two components of the $Q_1$ and $Q_2$ triplets carry the same hypercharge as the quark doublets $Q_L=(u,d)_L$ of the Standard Model, yielding \begin{equation} \label{one} q_X=\frac{1}{6} - \frac{\beta}{2\sqrt{3}}. \end{equation} The $U(1)_{em}$ charge of the triplet will then be $Q_{em}(Q_1)=\textrm{diag}(2/3,-1/3,1/6 - \sqrt{3}\beta/2)$. Fermions with exotic charges will be automatically present in the case of $\beta=\sqrt{3}$, which is the parameter choice that we will consider hereafter in our analysis. The first two families will then be assigned as \begin{equation} Q_1=\left( \begin{array}{c} u_L\\ d_L\\ D_L \end{array} \right),\quad Q_2=\left( \begin{array}{c} c_L\\ s_L\\ S_L \end{array} \right),\quad Q_{1,2}\in({\bf 3},{\bf 3}, -1/3) \end{equation} under $SU(3)_c \times SU(3)_L \times U(1)_X$. The charge operator $Q_{em}$ on $Q_{1,2}$ will then give \begin{equation} Q_{em}(Q_{1,2})=\textrm{diag} (2/3,-1/3,-4/3), \end{equation} with two exotic quarks $D$ and $S$ of charge $-4/3$. The third family is instead assigned as \begin{equation} Q_3=\left( \begin{array}{c} b_L\\ t_L\\ T_L \end{array} \right),\quad Q_3\in({\bf 3},{\bf \bar3}, 2/3), \end{equation} where the hypercharge content of the third exotic quark ($T_L$) is derived from the operator $(Y_{\bf \bar{3}})$ \begin{equation} Y_{\bf \bar{3}}(Q_3)= \textrm{diag}\left(-\frac{\beta}{2\sqrt{3}}+ q'_X,- \frac{\beta}{2\sqrt{3}}+ q'_X, \frac{\beta}{\sqrt{3}} + q'_X\right), \end{equation} with the condition $q'_X=1/6 +\beta/(2 \sqrt{3})$, giving \begin{equation} Y_{\bf \bar{3}}(Q_3)= \textrm{diag}\left(\frac{1}{6},\frac{1}{6},\frac{5}{3}\right) \end{equation} and \begin{equation} Q_{em \,\bf \bar{3}}(Q_3)= \textrm{diag}\left(-\frac{1}{3},\frac{2}{3},\frac{5}{3} \right). \end{equation} With these assignments, the charge of $T_L$ is $Q_{em}(T_L)= 5/3$, allowing one to distinguish between the third and the first two generations of quarks.
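The diagonal charge assignments listed above follow mechanically from $Q_{em}=Y\pm T_3$ with $Y=\pm\beta T_8+X\mathbf{1}$, and can be reproduced with a few lines of Python (a minimal sketch for $\beta=\sqrt{3}$, using only the diagonal entries of $T_3$ and $T_8$ quoted in the text):

```python
# Charge assignments of the minimal 331 model (beta = sqrt(3)):
# Q_em = Y + T_3 on the triplet, Q_em = Y - T_3 on the antitriplet,
# with Y = +/- beta*T_8 + q_X * 1.
from math import sqrt, isclose

beta = sqrt(3.0)
T3 = [0.5, -0.5, 0.0]                                   # diagonal of T_3
T8 = [x / (2.0 * sqrt(3.0)) for x in (1.0, 1.0, -2.0)]  # diagonal of T_8

q_X  = 1.0 / 6.0 - beta / (2.0 * sqrt(3.0))  # X-charge of Q_1, Q_2 (triplets)
q_Xp = 1.0 / 6.0 + beta / (2.0 * sqrt(3.0))  # X-charge of Q_3 (antitriplet)

Q_triplet     = [beta * t8 + q_X + t3 for t8, t3 in zip(T8, T3)]
Q_antitriplet = [-beta * t8 + q_Xp - t3 for t8, t3 in zip(T8, T3)]

# (u, d, D) and (b, t, T): the exotic charges -4/3 and 5/3 appear automatically
assert all(isclose(q, r) for q, r in zip(Q_triplet, [2/3, -1/3, -4/3]))
assert all(isclose(q, r) for q, r in zip(Q_antitriplet, [-1/3, 2/3, 5/3]))
print(Q_triplet, Q_antitriplet)
```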
Right-handed singlet quarks in the 331 model carry the usual SM charges ($2/3$ and $-1/3$ for the $u$-type and $d$-type quarks), \begin{align} ({d_R}, { s_R},{ b_R})&\in ({\bf 3}, 1,- 1/3),\\ ( u_R, c_R, t_R) &\in({\bf 3}, 1, 2/3), \end{align} with the exception of the three right-handed exotics \begin{align} ( D_R, S_R) &\in ({\bf 3}, 1, -4/3),\\ T_R &\in ({\bf 3}, 1, 5/3). \end{align} The lepton sector is assigned to the representation $\bar{3}$ of the same gauge group. In contrast to the quark sector, there is a democratic arrangement of the three lepton generations into triplets of $SU(3)_L$, \begin{equation}\label{lee} l=\left( \begin{array}{c} e_L\\ \nu_L\\ e_R^{\mathcal{c}} \end{array} \right),\quad l\in({\bf 1},{\bf \bar 3}, 0),\end{equation} with ${e}_R^{\mathcal{c}}=i \sigma_2 e_R^*$. In the following, we shall adopt for the leptons the notation $l_a^i$, where the subscripts ($a$, $b$ or $c$) refer to the lepton generation (electrons, muons and taus), and the superscripts ($i,j,k=1,2,3$) are $SU(3)_L$ indices. For example, the generation $a$ corresponds to electrons and the three elements of the triplet (\ref{lee}) are labelled as: \begin{equation} l^1_a=e_{a\, L},\ \ l^2_a=\nu_{a\, L},\ \ l^3_a=e_{a R}^{\mathcal{c}}. \end{equation} For the hypercharge operator we have the decomposition under $SU(2)_L\times U(1)_Y$ \begin{equation} Y_{\bf \bar{3}}(l)=\left(-\frac{\beta}{2 \sqrt{3}}+ q''_X,- \frac{\beta}{2 \sqrt{3}} + q''_X,\frac{\beta}{\sqrt{3}} + q''_X\right) \end{equation} with $q''_X=0$, since $l$ is a $U(1)_X$ singlet, and $Q_{em}(L)={\rm diag}(-1,0,1)$. Both left- and right-handed components of the SM leptons are fitted into the same $SU(3)_L$ multiplet.
The scalars of the 331 model, responsible for the electroweak symmetry breaking (EWSB), come in three triplets of $SU(3)_L$: \begin{equation} \rho=\left( \begin{array}{c} \rho^{++}\\ \rho^+\\ \rho^0 \end{array} \right)\in(1,3,1),\quad\eta=\left( \begin{array}{c} \eta^+\\ \eta^0\\ \eta^- \end{array} \right)\in(1,3,0),\quad\chi=\left( \begin{array}{c} \chi^0\\ \chi^-\\ \chi^{--} \end{array} \right)\in(1,3,-1). \end{equation} The breaking $SU(3)_L\times U(1)_X\to U(1)_{em}$ is obtained in two steps. The vacuum expectation value of the neutral component of $\rho$ causes the breaking from $SU(3)_L\times U(1)_X$ to $SU(2)_L\times U(1)_Y$; the usual spontaneous symmetry breaking mechanism from $SU(2)_L\times U(1)_Y$ to $U(1)_{em}$ is then obtained through the vevs of the neutral components of $\eta$ and $\chi$. Before closing this section, we recall that in the models \cite{PHF,PP} the coupling constants of $SU(3)_L$ and $U(1)_X$, namely $g_{3L}$ and $g_X$, are related to the electroweak mixing angle $\theta_W$ in such a way that $g_X(\mu)$ exhibits a Landau pole at a scale $\mu$ whenever $\sin^2\theta_W(\mu)= 1/4$ \cite{landau1}.\footnote{The relation between the $SU(3)_L$ and $U(1)_X$ couplings reads: $g_X^2/g_{3L}^2 =\sin^2 \theta_W / (1-4 \sin^2 \theta_W)$ \cite{landau1,landau2}.} Therefore, the theory may lose its perturbativity even at a scale of about 3.5 TeV \cite{landau2}. Nevertheless, the typical energy scale of bilepton-pair production in \cite{cccf} and in this paper is somewhat smaller, i.e. $2m_{Y^{\pm\pm}}\simeq 1.75$~TeV. Hence, we shall assume that the Landau pole does not pose any threat to the perturbative analysis carried out in the present work. \section{The scalar sectors} The model presented in the previous section exhibits the interesting feature of having both scalar and vector doubly-charged bosons, which is a peculiarity of the minimal version of the 331 model.
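As an aside, the coupling relation quoted in the footnote above makes the approach to the Landau pole explicit; a minimal numerical illustration (the sample values of $\sin^2\theta_W$ are purely illustrative):

```python
# g_X^2 / g_{3L}^2 = sin^2(theta_W) / (1 - 4 sin^2(theta_W)) blows up
# as sin^2(theta_W) -> 1/4, signalling the Landau pole of g_X.

def coupling_ratio(s2w):
    """Ratio of the U(1)_X and SU(3)_L squared couplings in the minimal 331."""
    return s2w / (1.0 - 4.0 * s2w)

for s2w in (0.231, 0.240, 0.249, 0.2499):
    print(f"sin^2(theta_W) = {s2w}:  g_X^2/g_3L^2 = {coupling_ratio(s2w):.1f}")
```

The ratio grows without bound as $\sin^2\theta_W(\mu)$ is pushed towards $1/4$ by the running.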
In fact, it is possible to consider various versions of the $SU(3)_c\times SU(3)_L\times U(1)_X$ gauge symmetry, usually parametrized by $\beta$ \cite{other331}. We discuss the case of $\beta=\sqrt3$, corresponding to the minimal version presented here, leading to vector bosons with electric charge equal to $\pm 2$. Doubly-charged states are interesting in themselves because they can have distinctive features in terms of allowed decay channels, for example the production of same-sign lepton pairs. In the context of the minimal 331 model there is an even more interesting possibility. In fact, one can test whether a same-sign lepton pair has been produced by either a scalar or a vector boson. As we are going to explain, this feature will also shed light on the presence of a higher representation of the $SU(3)_c\times SU(3)_L\times U(1)_X$ gauge group, namely the sextet. \subsection{The triplet sector} In the previous section we have seen that the EWSB mechanism is realised in the 331 model by giving a vev to the neutral components of the triplets $\rho$, $\eta$ and $\chi$. The Yukawa interactions for SM and exotic quarks are obtained by means of these scalar fields and are given by: \begin{align} \mathcal{L}_{q, triplet}^{{Yuk.}}&=\big(y_d^1\; Q_1 \eta^* d_R + y_d^2\; Q_2 \eta^* s_R + y_d^3\; Q_3 \chi\, b_R^*\nonumber\\ &\quad + y_u^1\; Q_1 \chi^* u_R^* + y_u^2\; Q_2 \chi^* c_R^* + y_u^3\; Q_3 \eta\, t_R^*\\ &\quad + y_E^1\; Q_1\, \rho^* D_R^* + y_E^2\; Q_2\, \rho^* S_R^* + y_E^3\; Q_3\, \rho\, T_R^*\big) + \rm{h.c.},\nonumber \end{align} where $y^i_{d}$, $y^i_u$ and $y^i_E$ are the Yukawa couplings for down-type, up-type and exotic quarks, respectively.
The masses of the exotic quarks are related to the vev of the neutral component of $\rho=(0,0,v_\rho)$ via the invariants \begin{eqnarray} Q_1\, \rho^* D_R^*, Q_2\, \rho^* S_R^*&\sim & (3,3,-1/3)\times (1,\bar{3},-1)\times (\bar{3},1,4/3) \nonumber \\ Q_3\, \rho T_R^* &\sim& (3,\bar{3},2/3)\times (1,{3},1)\times (\bar{3},1,-5/3), \end{eqnarray} responsible for the breaking $SU(3)_c\times SU(3)_L\times U(1)_X \to SU(3)_c\times SU(2)_L\times U(1)_Y$. It is clear that, since $v_\rho\gg v_{\eta,\chi}$, the masses of the exotic quarks are ${\cal O}(\rm{TeV})$ whenever $y_E^i\sim1$. \subsection{The sextet sector} The need for introducing a sextet sector can be summarised as follows. A typical Dirac mass term for the leptons in the SM is associated with the operator $\bar{l}_LH e_R$, with $l_L=(\nu_{e L},e_L)$ being the $SU(2)_L$ doublet, with the representation content $(\bar{2},1/2)\times (2,1/2)\times(1,-1)$ (for $l$, $H$ and $e_R$, respectively) in $SU(2)_L\times U(1)_Y$. In the 331 model the $L$ and $R$ components of the lepton $(e)$ are in the same multiplet and therefore the identification of an $SO(1,3)\times SU(3)_L$ singlet needs two leptons in the same representation. It can be obtained (at least in part) with the operator \begin{eqnarray} \mathcal{L}_{l,\, triplet}^{Yuk}&=& G^\eta_{a b}( l^i_{a \alpha}\epsilon^{\alpha \beta} l^j_{b \beta})\eta^{* k}\epsilon^{i j k} + \rm{h. c.}\nonumber\\ &=& G^\eta_{a b}\, l^i_{a}\cdot l^j_{b}\,\eta^{* k}\epsilon^{i j k} + \rm{h. c.} \end{eqnarray} where the indices $a$ and $b$ run over the three generations of flavour, $\alpha$ and $\beta$ are Weyl indices contracted in order to generate an $SO(1,3)$ invariant ($l^i_{a}\cdot l^j_{b}\equiv l^i_{a \alpha}\epsilon^{\alpha \beta} l^j_{b \beta}$) from two Weyl fermions, and $i,j,k=1,2,3$ are $SU(3)_L$ indices. The use of $\eta$ as a Higgs field is mandatory, since the components of the multiplet $l^j$ are $U(1)_X$ singlets.
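The $U(1)_X$ neutrality of the invariant operators written above is a simple bookkeeping exercise; a trivial but exact check with rational arithmetic (the operator labels are just mnemonic strings):

```python
# U(1)_X neutrality of the invariant operators: the X-charges of the
# fields in each term must sum to zero.
from fractions import Fraction as F

invariants = {
    "Q rho* (D,S)_R*": [F(-1, 3), F(-1, 1), F(4, 3)],  # (3,3,-1/3) x (1,3bar,-1) x (3bar,1,4/3)
    "Q_3 rho T_R*":    [F(2, 3), F(1, 1), F(-5, 3)],   # (3,3bar,2/3) x (1,3,1) x (3bar,1,-5/3)
    "l . l eta*":      [F(0), F(0), F(0)],             # leptons and eta are X-neutral
}

for name, charges in invariants.items():
    assert sum(charges) == 0, name
    print(f"{name}: total X-charge = {sum(charges)}")
```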
The representation content of the operator $l^i_a l^j_b$ according to $SU(3)_L$ is given by $3\times 3= 6 + \bar{3}$, with the $\bar{3}$ extracted by an anti-symmetrization over $i$ and $j$ via $\epsilon^{i j k}$. This allows one to identify $l^i_a l^j_b \eta^{*k}\epsilon^{i j k}$ as an $SU(3)_L$ singlet. Considering that the two leptons are anticommuting Weyl spinors, and that the $\epsilon^{\alpha\beta}$ (Lorentz) and $\epsilon^{i j k}$ ($SU(3)_L$) contractions introduce two sign flips under the $a\leftrightarrow b$ exchange, the combination \begin{equation} M_{a b}=( l^i_{a}\cdot l^j_{b })\eta^{* k}\epsilon^{i j k} \end{equation} is therefore antisymmetric under the exchange of the two flavours, implying that $G^\eta_{a b}$ also has to be antisymmetric. However, an antisymmetric $G^\eta_{a b}$ is not sufficient to provide mass to all leptons. In fact, the diagonalization of $G^\eta$ by means of a unitary matrix $U$, namely $G^\eta=U \Lambda U^\dagger$, with $G^\eta$ antisymmetric in flavour space, implies that its 3 eigenvalues are given by $\Lambda=(0,\lambda_{22}, \lambda_{33})$, with $\lambda_{22}=-\lambda_{33}$, i.e. one eigenvalue is null and the other two are equal in magnitude. At the minimum of $\eta$, i.e. $\eta=(0,v_\eta,0)$, one has: \begin{equation} G^\eta_{a b}M^{a b}=-Tr (\Lambda\, U M U^\dagger)= 2 v_{\eta}\lambda_{22}\, U_{2 a}\, l^{1}_{a}\cdot l^{3}_{b}\,U_{2 b}^* + 2 v_{\eta}\lambda_{33}\, U_{3 a}\, l^{1}_{a}\cdot l^{3}_{b}\,U_{3 b}^*, \end{equation} with $ l^1_a=e_{a L}$ and $l^3_b=e_{b R}^\mathcal{c}$.
Introducing the linear combinations \begin{equation} E_{2 L}\equiv U_{2 a}\, l^1_a=U_{2 a}\, e_{a L} \qquad U_{2 b}^*\, l^3_b=U_{2 b}^*\, e_{b R}^\mathcal{c} = i\sigma_2(U_{2 b} \,e_{b R})^*\equiv E_{2 R}^\mathcal{c}, \end{equation} the antisymmetric contribution in flavour space becomes \begin{equation} \mathcal{L}_{l, \, triplet}^{Yuk}= 2 v_{\eta}\lambda_{22} \left( E_{2 L}E_{2 R}^\mathcal{c} - E_{3 L}E_{3 R}^\mathcal{c}\right), \end{equation} which is clearly insufficient to generate the masses of three non-degenerate lepton families. We shall solve this problem by introducing a second invariant operator, with the inclusion of a sextet $\sigma$: \begin{equation} \sigma=\left( \renewcommand*{\arraystretch}{1.5} \begin{array}{ccc} \sigma_1^{++}&\sigma_1^+/\sqrt2&\sigma^0/\sqrt2\\ \sigma_1^+/\sqrt2&\sigma_1^0&\sigma_2^-/\sqrt2\\ \sigma^0/\sqrt2&\sigma_2^-/\sqrt2&\sigma_2^{--} \end{array} \right)\in(1,6,0), \end{equation} leading to the Yukawa term \begin{equation}\label{lag} \mathcal{L}_{l, sextet}^{{Yuk.}}= G^\sigma_{a b} l^i_a\cdot l^j_b \sigma^*_{i,j}, \end{equation} which allows one to build a singlet out of the representation $6$ of $SU(3)_L$, contained in $l^i_a\cdot l^j_b$, by combining it with the flavour-symmetric $\sigma^*$, i.e. $\bar{6}$. Notice that $G^\sigma_{a b}$ is symmetric in flavour space. It is interesting to note that, had one not considered the sextet, it would not be possible for a doubly-charged scalar to decay into same-sign leptons. In fact, if we leave aside the sextet contribution, the Yukawa coupling for the leptons involves the scalar triplet $\eta$, which does not possess any doubly-charged state. This means that revealing a possible decay $H^{\pm\pm}\to l^\pm l^\pm$ would be a distinctive signature of the presence of the sextet representation in the context of the 331 model.
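The rank argument above is easy to confirm numerically: a real antisymmetric $3\times3$ matrix has vanishing determinant, hence a zero eigenvalue, while the other two eigenvalues are $\pm i\sqrt{a^2+b^2+c^2}$. A small sketch with arbitrary illustrative entries:

```python
# A real antisymmetric 3x3 matrix A(a,b,c) has det(A) = 0, hence one zero
# eigenvalue; the other two are +/- i*sqrt(a^2 + b^2 + c^2).  With G^eta
# alone, one charged lepton would therefore remain massless.
from math import sqrt, isclose

a, b, c = 0.7, -1.3, 2.1   # arbitrary illustrative entries
A = [[0.0,  a,   b],
     [-a,  0.0,  c],
     [-b,  -c,  0.0]]

# Cofactor expansion of det(A) along the first row:
det = (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
       - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
       + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))
assert isclose(det, 0.0, abs_tol=1e-9)

lam = sqrt(a * a + b * b + c * c)   # magnitude of the two non-zero eigenvalues
print(f"det(A) = {det}, non-zero eigenvalues: +/- {lam:.6f} i")
```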
\subsection{Lepton Mass Matrices} The lepton mass matrices are of course related to the Yukawa interactions by the Lagrangian \begin{equation} \mathcal{L}_{l}^{{Yuk.}}=\mathcal{L}_{l, sextet}^{{Yuk.}} + \mathcal{L}_{l, triplet}^{{Yuk.}} + \rm{h. c.} \end{equation} and are combinations of triplet and sextet contributions. The structure of the mass matrix that emerges from the vevs of the neutral components of $\eta$ and $\sigma$ is thus given by: \begin{equation} \mathcal{L}_{l}^{{Yuk.}}=\left(\sqrt{2}\, \sigma^0 G^\sigma_{a b} +2 v_\eta G^\eta_{a b}\right) (e_{a L}\cdot e_{b R}^\mathcal{c}) +\sigma_1^0 G^\sigma_{a b} \left(\nu_L^T i \sigma_2 \nu_L\right) +\rm{h. c.}, \end{equation} which generates a Dirac mass matrix for the charged leptons $M_{ab}^l$ and a Majorana mass matrix for the neutrinos $M_{a b}^{\nu_l}$: \begin{equation}\label{mlgen} M^l_{a b} =\sqrt{2} \langle\sigma^0\rangle\, G^\sigma_{a b} +2 v_\eta\, G^\eta_{a b} \qquad , \qquad M_{a b}^{\nu_l}=\langle \sigma^0_1\rangle \, G^\sigma_{a b}. \end{equation} In the expression above, $\langle\sigma^0\rangle$ and $\langle\sigma_1^0\rangle$ are the vacuum expectation values of the neutral components of $\sigma$. For a vanishing $G^\sigma$, as we have already discussed, we will not be able to generate the lepton masses consistently, nor any mass for the neutrinos, i.e. \begin{equation}\label{mltip} M^l_{a b}=2 v_\eta\, G^\eta_{a b} \qquad , \qquad M^{\nu_l}=0. \end{equation} On the contrary, in the limit $G^\eta\to0$, Eq.~(\ref{mlgen}) becomes \begin{equation}\label{mlsext} M^l_{a b}=\sqrt{2} \langle\sigma^0\rangle\, G^\sigma_{a b} \qquad, \qquad M^{\nu_l}_{a b}=\langle\sigma_1^0\rangle\, G^\sigma_{a b}, \end{equation} which has some interesting consequences. Since the Yukawa couplings are the same for both leptons and neutrinos, we have to require $\langle\sigma_1^0\rangle\ll\langle\sigma^0\rangle$, in order to obtain small neutrino masses.
For the purposes of our analysis, we will assume that the vev of $\sigma_1^0$ vanishes, i.e. $\langle\sigma_1^0\rangle\equiv0$. Clearly, if the matrix $G^\sigma$ is diagonal in flavour space, Eq.~(\ref{mlsext}) immediately implies that the Yukawa coupling $G^\sigma$ has to be chosen proportional to the masses of the SM leptons. An interesting consequence of this is that the decay $H^{\pm\pm}\to l^\pm l^\pm$, whose rate is also proportional to $G^\sigma$, and therefore to the lepton masses, will be enhanced for the heavier leptons, in particular for the $\tau$, as thoroughly discussed in \cite{tonasse}. This is an almost unique situation, which is not encountered in other models with doubly-charged scalars decaying into same-sign leptons \cite{spira}. However, for the sake of generality, in the following analysis we will consider the most generic scenario, where both contributions $G^\sigma$ and $G^\eta$ are present. In this case, the branching ratios of the doubly-charged Higgs decaying into same-sign leptons no longer have to be proportional to the lepton masses. In particular, after accounting for both $G^\sigma$ and $G^\eta$, configurations wherein even scalar bileptons have the same rates into the three lepton species are allowed, as occurs for vector bileptons. In the following, we shall hence concentrate our investigation on scenarios yielding \begin{equation}\label{br13} {\rm BR}(Y^{\pm\pm}\to l^\pm l^\pm)\simeq {\rm BR}(H^{\pm\pm}\to l^\pm l^\pm) \simeq 1/3 \end{equation} for $l=e,\mu,\tau$. The condition in Eq.~(\ref{br13}) is in fact particularly suitable for comparing vector- and scalar-bilepton rates at the LHC and, for the time being, should be seen as part of our model. The assumption of equal branching ratios of 1/3 into all three lepton families allows one to extend the flavour universality of spin-1 bileptons to the scalar sector, so that the two states (scalar and vector) can be treated on a similar footing.
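The interplay between $G^\sigma$ and $G^\eta$ exploited here is just the decomposition of a generic $3\times 3$ mass matrix into symmetric and antisymmetric parts; a minimal sketch of the counting (the numerical entries are purely illustrative):

```python
# Decomposition of a generic 3x3 mass matrix M into a symmetric part S
# (6 parameters, the role of G^sigma) and an antisymmetric part A
# (3 parameters, the role of G^eta): 6 + 3 = 9 covers all entries of M.
M = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0],
     [7.0, 8.0, 9.0]]   # arbitrary illustrative entries

S = [[(M[i][j] + M[j][i]) / 2.0 for j in range(3)] for i in range(3)]
A = [[(M[i][j] - M[j][i]) / 2.0 for j in range(3)] for i in range(3)]

for i in range(3):
    for j in range(3):
        assert S[i][j] == S[j][i]             # symmetric
        assert A[i][j] == -A[j][i]            # antisymmetric
        assert S[i][j] + A[i][j] == M[i][j]   # unique reconstruction

n_sym  = sum(1 for i in range(3) for j in range(3) if i <= j)   # 6
n_anti = sum(1 for i in range(3) for j in range(3) if i < j)    # 3
print(n_sym, n_anti, n_sym + n_anti)  # 6 3 9
```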
This option is clearly possible since we have 9 parameters in total in the mass matrix, constrained by 6 conditions: three of them are needed to reproduce the lepton masses, and the remaining three come from the requirement of equal branching ratios of the scalars into the three lepton families. The explicit expressions of the solutions of such conditions are very involved, and we have hence opted for a numerical scan of the mass matrices satisfying these requirements. This in general requires the ratio of the matrix elements of $G_{\eta}$ over those of $G_\sigma$ to be proportional to $v_\sigma/v_\eta \sim 10^{-2}$. Of course, if future experimental data were to deviate significantly from ${\rm BR}( l^\pm l^\pm)=1/3$, they would clearly favour a scalar bilepton, because lepton-flavour universality is mandatory for vector bileptons. \subsection{Physical Higgs bosons} The inclusion of the sextet representation in the potential enriches the phenomenology of the model and enlarges the number of physical states in the spectrum. In fact, after electroweak symmetry breaking (EWSB) $SU(3)_L\times U(1)_X \to SU(2)_L\times U(1)_Y\to U(1)_{\rm{em}}$, we now have five scalar Higgses, three pseudoscalar Higgses, four charged Higgses and three doubly-charged Higgses. 
The (lepton-number conserving) potential of the model is given by \cite{TullyJoshi} \begin{align}\label{pot} V&= m_1\, \rho^\dagger\rho+m_2\,\eta^\dagger\eta+m_3\,\chi^\dagger\chi +\lambda_1 (\rho^\dagger\rho)^2+\lambda_2(\eta^\dagger\eta)^2+\lambda_3(\chi^\dagger\chi)^2+\lambda_{12}\rho^\dagger\rho\,\eta^\dagger\eta\\ &\quad+\lambda_{13}\rho^\dagger\rho\,\chi^\dagger\chi+\lambda_{23}\eta^\dagger\eta\,\chi^\dagger\chi+\zeta_{12}\rho^\dagger\eta\,\eta^\dagger\rho+\zeta_{13}\rho^\dagger\chi\,\chi^\dagger\rho+\zeta_{23}\eta^\dagger\chi\,\chi^\dagger\eta\nonumber\\ &\quad + m_4\,Tr(\sigma^\dagger \sigma) + \lambda_{4} (Tr(\sigma^\dagger\sigma))^2 + \lambda_{14}\rho^\dagger\rho\,Tr(\sigma^\dagger\sigma) + \lambda_{24}\eta^\dagger\eta\,Tr(\sigma^\dagger\sigma) + \lambda_{34}\chi^\dagger\chi\,Tr(\sigma^\dagger\sigma) \nonumber\\ &\quad + \lambda_{44}Tr(\sigma^\dagger\sigma\,\sigma^\dagger\sigma)+ \zeta_{14} \rho^\dagger \sigma\,\sigma^\dagger \rho + \zeta_{24} \eta^\dagger\sigma\,\sigma^\dagger\eta + \zeta_{34} \chi^\dagger\sigma\,\sigma^\dagger\chi\nonumber\\ &\quad + (\sqrt2 f_{\rho\eta\chi} \epsilon^{ijk}\rho_i\, \eta_j\, \chi_k + \sqrt2 f_{\rho\sigma\chi} \rho^T\, \sigma^\dagger\, \chi \nonumber\\ &\quad+ \xi_{14}\epsilon^{ijk}\, \rho^{*l} \sigma_{li} \rho_j \eta_k + \xi_{24}\epsilon^{ijk}\epsilon^{lmn}\,\eta_i\eta_l\sigma_{jm}\sigma_{kn} + \xi_{34}\epsilon^{ijk}\,\chi^{*l}\sigma_{li}\chi_j\eta_k) + \rm{h.c.}\nonumber \end{align} The EWSB mechanism will cause a mixing among the Higgs fields; from Eq. (\ref{pot}) it is possible to obtain the explicit expressions of the mass matrices of the scalar, pseudoscalar, charged and doubly-charged Higgses, by using standard procedures. In the broken Higgs phase, the minimization conditions \begin{equation}\label{mincond} \frac{\partial V}{\partial v_\phi}=0, \quad \langle \phi^0\rangle=v_\phi, \quad \phi=\rho, \eta, \chi, \sigma \end{equation} will define the tree-level vacuum. 
We remind the reader that we are considering massless neutrinos, having chosen the neutral field $\sigma_1^0$ to be inert. The explicit expressions of the minimization conditions are then given by \begin{align}\label{minpot1} m_1 v_\rho + \lambda_1 v_\rho^3 + \frac{1}{2}\lambda_{12}v_\rho v_\eta^2-f_{\rho\eta\chi} v_\eta v_\chi+\frac{1}{2}\lambda_{13}v_\rho v_\chi^2 - \frac{1}{\sqrt2}\xi_{14}v_\rho v_\eta v_\sigma + f_{\rho\sigma\chi}v_\chi v_\sigma&\\ + \frac{1}{2}\lambda_{14}v_\rho v_\sigma^2 + \frac{1}{4}\zeta_{14}v_\rho v_\sigma^2&=0\nonumber\\ m_2 v_\eta + \frac{1}{2}\lambda_{12}v_\rho^2 v_\eta +\lambda_2 v_\eta^3 - f_{\rho\eta\chi} v_\rho v_\chi +\frac{1}{2}\lambda_{23} v_\eta v_\chi^2 - \frac{1}{2\sqrt2}\xi_{14}v_\rho^2 v_\sigma+ \frac{1}{2\sqrt2}\xi_{34}v_\chi^2 v_\sigma&\\ +\frac{1}{2}\lambda_{24} v_\eta v_\sigma^2-\xi_{24} v_\eta v_\sigma^2&=0\nonumber\\ m_3 v_\chi + \lambda_3 v_\chi^3 + \frac{1}{2} \lambda_{13} v_\rho^2 v_\chi - f_{\rho\eta\chi} v_\rho v_\eta +\frac{1}{2}\lambda_{23}v_\eta^2 v_\chi +\frac{1}{\sqrt2}\xi_{34}v_\eta v_\chi v_\sigma + f_{\rho\sigma\chi} v_\rho v_\sigma&\\ +\frac{1}{2}\lambda_{34} v_\chi v_\sigma^2 + \frac{1}{4}\zeta_{34} v_\chi v_\sigma^2&=0\nonumber\\ \label{minpot2} m_4 v_\sigma + \frac{1}{2}\lambda_{14}v_\rho^2 v_\sigma + \lambda_{44} v_\sigma^3 + \frac{1}{2}\lambda_4 v_\sigma^3 + f_{\rho\sigma\chi} v_\rho v_\chi - \frac{1}{2\sqrt2} \xi_{14} v_\rho^2 v_\eta + \frac{1}{2\sqrt2} \xi_{34} v_\eta v_\chi^2&\\ +\frac{1}{2}\lambda_{14}v_\rho^2 v_\sigma + \frac{1}{4}\zeta_{14} v_\rho^2 v_\sigma + \frac{1}{2}\lambda_{24} v_\eta^2 v_\sigma - \xi_{24} v_\eta^2 v_\sigma + \frac{1}{2} \lambda_{34} v_\chi^2 v_\sigma + \frac{1}{4} \zeta_{34} v_\chi^2 v_\sigma&=0 \nonumber \end{align} These conditions are inserted into the tree-level mass matrices of the CP-even and CP-odd Higgs sectors, derived from $M_{ij}=\left.{\partial^2 V}/{\partial \phi_i\partial \phi_j}\right|_{\rm vev}$, where $V$ is the potential in Eq.~(\ref{pot}): the explicit expressions of the 
mass matrices are too cumbersome to be presented here, although their calculation is rather straightforward. After a numerical diagonalization, we derive both the mass eigenstates and the Goldstone bosons. In this case we have 5 scalar Higgs bosons, one of which is the SM-like Higgs with mass of about 125 GeV, along with 5 neutral pseudoscalar states, out of which 2 are the Goldstones of the massive $Z$ and $Z^\prime$ vector bosons. In addition, there are 6 charged Higgses, 2 of which are the charged Goldstones, and 3 doubly-charged Higgses, one of which is a Goldstone boson. The Goldstone bosons are exactly 8, as many as the gauge bosons that become massive in the breaking down to $U(1)_{\rm em}$. Hereafter we shall give the schematic expressions of the physical Higgs states, after EWSB, in terms of the gauge eigenstates; the Goldstone combinations, in particular, contain only the vevs of the various fields. In the following equations, ${\rm R}_{ij}^K\equiv{\rm R}_{ij}^K(m_1,m_2,m_3,\lambda_1,\lambda_2,\ldots)$ refers to the rotation matrix of each Higgs sector, which depends on all the parameters of the potential in Eq.~(\ref{pot}). Starting from the scalar (CP-even) Higgs bosons we have \begin{eqnarray} H_i = {\rm R}_{i1}^S {\rm Re}\, \rho^0 + {\rm R}_{i2}^S {\rm Re}\, \eta^0 + {\rm R}_{i3}^S {\rm Re}\, \chi^0 +{\rm R}_{i4}^S {\rm Re}\, \sigma^0 + {\rm R}_{i5}^S {\rm Re}\, \sigma_1^0, \end{eqnarray} expressed in terms of the rotation matrix of the scalar components ${\rm R}^S$. There are similar expressions for the pseudoscalars, \begin{eqnarray} A_i = {\rm R}_{i1}^P {\rm Im}\, \rho^0 + {\rm R}_{i2}^P {\rm Im}\, \eta^0 +{\rm R}_{i3}^P{\rm Im}\, \chi^0 +{\rm R}_{i4}^P {\rm Im}\, \sigma^0 +{\rm R}_{i5}^P {\rm Im}\, \sigma_1^0, \end{eqnarray} in terms of the rotation matrix of the pseudoscalar components ${\rm R}^P$. 
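The logic of this numerical procedure can be illustrated on a toy potential: build the Hessian of $V$ at the vacuum and diagonalize it, with the vanishing eigenvalues identifying the Goldstone directions. A minimal sketch, with a single $U(1)$-breaking potential standing in for the full 331 scalar sector (all names and values are illustrative):

```python
import math

# Toy potential V = lam * (p1^2 + p2^2 - w^2)^2, which breaks a U(1).
# At the vacuum (w, 0) the Hessian has eigenvalues 8*lam*w^2 (radial,
# massive mode) and 0 (Goldstone mode along the flat direction).
lam, w = 0.5, 246.0

def V(p1, p2):
    return lam * (p1 ** 2 + p2 ** 2 - w ** 2) ** 2

def hessian(f, x, y, h=1e-3):
    """Second derivatives of f at (x, y) by central finite differences."""
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h ** 2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h ** 2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h * h)
    return fxx, fxy, fyy

a, b, c = hessian(V, w, 0.0)                 # Hessian at the vacuum (w, 0)
tr, det = a + c, a * c - b * b
disc = math.sqrt(tr * tr - 4 * det)
eigs = sorted([(tr - disc) / 2, (tr + disc) / 2])
# eigs[0] ~ 0 (Goldstone), eigs[1] ~ 8*lam*w^2 (physical mass^2)
```

In the full model the same counting, applied to the larger scalar, pseudoscalar, charged and doubly-charged mass matrices, reproduces the 8 Goldstone modes quoted above.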
Here, however, we have two Goldstone bosons responsible for the generation of the masses of the neutral gauge bosons $Z$ and $Z^\prime$ given by \begin{align} A_0^1 &= \frac{1}{N_1}\left(v_\rho {\rm Im}\,\rho^0 -v_\eta {\rm Im}\,\eta^0 + v_\sigma {\rm Im}\,\sigma^0\right),\,\qquad N_1=\sqrt{v_\rho^2+v_\eta^2+v_\sigma^2}\ ;\\ A_0^2 &= \frac{1}{N_2}\left(-v_\rho {\rm Im}\,\rho^0+ v_\chi {\rm Im}\,\chi^0\right),\,\qquad N_2=\sqrt{v_\rho^2+v_\chi^2}. \end{align} For the charged Higgs bosons the interaction eigenstates are \begin{eqnarray} H_i^+ = {\rm R}^{C}_{i1}\rho^{+} + {\rm R}^{C}_{i2}(\eta^{-})^* + {\rm R}^{C}_{i3}\eta^{+} + {\rm R}^{C}_{i4}(\chi^{-})^* + {\rm R}_{i5}^C \sigma_1^+ + {\rm R}_{i6}^C (\sigma_2^-)^* \end{eqnarray} with ${\rm R}^C$ being a rotation matrix of the charged sector. Even in this case we have two Goldstones because in the 331 model there are the $W^\pm$ and the $Y^\pm$ gauge bosons. The explicit expressions of the Goldstones are \begin{align} H_{W}^+ &= \frac{1}{N_W} \left(-v_\eta\eta^+ + v_\chi (\chi^-)^* + v_\sigma (\sigma_2^-)^*\right),\,\qquad N_W=\sqrt{v_\eta^2 + v_\chi^2 + v_\sigma^2}; \\ H_{Y}^+ &= \frac{1}{N_Y} \left(v_\rho \rho^+ - v_\eta (\eta^-)^* + v_\sigma \sigma_1^+\right),\,\qquad N_Y=\sqrt{v_\rho^2 + v_\eta^2 + v_\sigma^2}. \end{align} In particular, we are interested in the doubly-charged Higgses, where the number of physical states, after EWSB, is three, whereas we would have had only one physical doubly-charged Higgs if we had not included the sextet. The physical doubly-charged Higgs states are expressed in terms of the gauge eigenstates and the elements of the rotation matrix ${\rm R}^C$ as \begin{eqnarray} H_i^{++} ={\rm R}^{2C}_{i1}\rho^{++} + {\rm R}^{2C}_{i2}(\chi^{--})^* + {\rm R}^{2C}_{i3}\sigma_1^{++} + {\rm R}^{2C}_{i4}(\sigma_2^{--})^*. 
\end{eqnarray} In particular, the structure of the corresponding Goldstone boson is \begin{eqnarray} H_0^{++} =\frac{1}{N}\left(-v_\rho\rho^{++} + v_\chi(\chi^{--})^* - \sqrt2v_\sigma\sigma_1^{++} + \sqrt2v_\sigma(\sigma_2^{--})^*\right) \end{eqnarray} where $N=\sqrt{v_\rho^2+v_\chi^2+4v_\sigma^2}$ is a normalization factor. \subsection{Vertices for $H^{\pm\pm}$ and $Y^{\pm\pm}$} In Fig.~\ref{jetless} we present the typical contributions to the partonic cross section of the process $p p\to B^{++}B^{--}$, where $B^{\pm\pm}$ denotes either a spin-0 or a spin-1 bilepton; each $B^{\pm\pm}$ decays into a same-sign lepton pair. From Fig.~\ref{jetless}, we learn that bilepton pairs can be produced in Drell--Yan processes mediated by either a vector boson ($V^0=\gamma,Z,Z'$) or a scalar neutral Higgs ($h_1\cdots h_5$); moreover, their production can be mediated by the exchange of an exotic quark $Q$ in the $t$-channel as well. In principle, we may even have $BB$ production via an effective vertex in gluon-gluon fusion, but this contribution turned out to be negligible with respect to the subprocesses with initial-state quarks. In the following, we wish to discuss the differences between the couplings of scalar Higgses and vector bosons to scalar and vector bileptons, as the production rates at the LHC crucially depend on such couplings. Considering first the case of vector $Y^{\pm\pm}$, the Lorentz structure of the $V^0(p^1)Y^{++}(p^2)Y^{--}(p^3)$ vertex is given in terms of the momenta by \begin{equation} V(p_\mu^1,p_\nu^2,p_\rho^3)=g_{\mu\nu}(p^2_\rho-p^1_\rho) + g_{\nu\rho}(p^3_\mu-p^2_\mu) + g_{\mu\rho}(p^1_\nu-p^3_\nu). 
\end{equation} Characterizing the vector boson $V$ as photon, $Z$ or $Z'$, we obtain: \begin{align}\label{vzeroyy} \gamma_\alpha\,Y_\mu^{++}\,Y_\nu^{--} &= -2 i g_2 \sin\theta_W\; V(p_\alpha^\gamma, p_\mu^{Y^{++}}, p_\nu^{Y^{--}})\nonumber\\ Z_\alpha\,Y_\mu^{++}\,Y_\nu^{--} &= \frac{i}{2}g_2 (1-2\cos 2\theta_W)\sec\theta_W\; V(p_\alpha^Z, p_\mu^{Y^{++}}, p_\nu^{Y^{--}})\\ Z_\alpha^\prime\,Y_\mu^{++}\,Y_\nu^{--} &= -\frac{i}{2}g_2 \sqrt{12-9\sec^2\theta_W}\; V(p_\alpha^{Z^\prime}, p_\mu^{Y^{++}}, p_\nu^{Y^{--}}),\nonumber \end{align} where $\theta_W$ is the Weinberg angle. In the case of the doubly-charged Higgs boson, the situation is slightly different: in fact, the interaction $V^0\, H^{++}\, H^{--}$ is generated after the Higgs fields acquire their vevs. The Lorentz structure of the coupling will of course be proportional to the difference of the momenta of the Higgs fields. Defining $S\left(p_\mu^1,p_\mu^2\right)= p^1_\mu - p^2_\mu,$ we have \begin{align}\label{vzerohh} \gamma_\alpha\,H_i^{++}\,H_j^{--}&= -i \sin\theta_W \Big[\left(g_2+g_1 \sqrt{\cot^2\theta_W -3}\right)\left({\rm R}^{2C}_{i1}{\rm R}^{2C}_{j1}+{\rm R}^{2C}_{i2}{\rm R}^{2C}_{j2}\right)\nonumber\\ &\qquad\qquad\qquad+2g_2\left({\rm R}^{2C}_{i3}{\rm R}^{2C}_{j3}+{\rm R}^{2C}_{i4}{\rm R}^{2C}_{j4}\right)\Big] S\left(p^{H_i^{++}}_\alpha, p^{H_j^{--}}_\alpha\right)\nonumber\\ & =-2 i e \delta_{i j}S\left(p^{H_i^{++}}_\alpha, p^{H_j^{--}}_\alpha\right)\\ Z_\alpha \,H_i^{++}\,H_j^{--}&=\frac{i}{2}\sec\theta_W\Big\{\big[\cos 2\theta_W\left(g_2+g_1\sqrt{\cot^2\theta_W-3}\right)-g_1\sqrt{\cot^2\theta_W-3}\big]{\rm R}^{2C}_{i1}{\rm R}^{2C}_{j1}\nonumber\\ &\qquad\qquad-2\big[\left(g_2+g_1\sqrt{\cot^2\theta_W -3}\right)\sin^2\theta_W {\rm R}^{2C}_{i2}{\rm R}^{2C}_{j2} -g_2\cos2\theta_W{\rm R}^{2C}_{i3}{\rm R}^{2C}_{j3} \nonumber \\ &\qquad\qquad+2g_2\sin^2\theta_W{\rm R}^{2C}_{i4}{\rm R}^{2C}_{j4}\big]\Big\}S\left(p^{H_i^{++}}_\alpha, p^{H_j^{--}}_\alpha\right)\\ Z^\prime_\alpha 
\,H_i^{++}\,H_j^{--}&=\frac{i}{2}\frac{\sec^2\theta_W}{\sqrt{12-9\sec^2\theta_W}}\Big\{\left[3g_1\sqrt{\cot^2\theta_W -3}(\cos2\theta_W-1)+g_2(2\cos2\theta_W-1)\right]{\rm R}^{2C}_{i1}{\rm R}^{2C}_{j1}\nonumber\\ &\qquad\qquad+\left[3g_1\sqrt{\cot^2\theta_W -3}(\cos2\theta_W-1)+2g_2(2\cos2\theta_W-1)\right]{\rm R}^{2C}_{i2}{\rm R}^{2C}_{j2}\nonumber\\ &\qquad\qquad +2g_2(2\cos2\theta_W-1)\left({\rm R}^{2C}_{i3}{\rm R}^{2C}_{j3}+2{\rm R}^{2C}_{i4}{\rm R}^{2C}_{j4}\right)\Big\}S\left(p^{H_i^{++}}_\alpha, p^{H_j^{--}}_\alpha\right).\nonumber \end{align} The interactions shown in Eqs.~(\ref{vzeroyy}) and (\ref{vzerohh}) are clearly very different, both in their Lorentz structures and in their dependence on the parameters of the model; therefore, different decay rates $V^0,h_i\to B^{++}B^{--}$ are to be expected, according to whether $B$ is a scalar or a vector. It can be noticed that the expression of the $\gamma\, Y^{++} Y^{--}$ coupling, i.e. $2g_2\sin\theta_W\equiv 2e$, is apparently very different from the $\gamma\, H^{++}H^{--}$ one, but one can show that, after simplifications, they turn out to be the same, as expected. The relevant vertices for vector bileptons are \begin{eqnarray} \ell \;\ell \; Y^{++}=\left\{ \begin{array}{cl} -\frac{i}{\sqrt2}g_2 \gamma^\mu& P_L\\ \frac{i}{\sqrt2}g_2 \gamma^\mu& P_R \end{array} \right.\label{lly} \end{eqnarray} \begin{eqnarray} \bar d \;T \; Y^{--}=\left\{ \begin{array}{cl} -\frac{i}{\sqrt2}g_2 \gamma^\mu &P_L\\ 0& P_R \end{array} \right.\label{dty} \end{eqnarray} \begin{eqnarray} \bar D \;u \; Y^{--}=\left\{ \begin{array}{cl} \frac{i}{\sqrt2}g_2 \gamma^\mu &P_L\\ 0& P_R \end{array} \right. 
\end{eqnarray} \begin{eqnarray} h_i\;Y^{++}Y^{--}=\frac{i}{2}g_2^2\left(v_\rho {\rm R}^S_{i1}+v_\chi{\rm R}^S_{i3}\right) \end{eqnarray} \begin{eqnarray} \gamma\;Y^{++}Y^{--}=-2i\,g_2\,\sin\theta_W \end{eqnarray} \begin{eqnarray} Z\;Y^{++}Y^{--}=\frac{i}{2}\,g_2\,(1-2\cos2\theta_W)\sec\theta_W \end{eqnarray} \begin{eqnarray} Z'\;Y^{++}Y^{--}=-\frac{i}{2}\,g_2\,\sqrt{12-9\sec^2\theta_W}. \end{eqnarray} In the equations above, $P_{L,R}$ are the usual left- and right-handed projectors $P_{L,R}=(1\mp\gamma_5)/2$. \section{Phenomenological analysis at the LHC} In this section we wish to present a phenomenological analysis, aiming at exploring possible scalar- or vector-bilepton signals at the LHC. \begin{figure}[t] \centering \mbox{\subfigure[]{ \includegraphics[width=0.225\textwidth]{figures/jetless1.pdf}}\hspace{.8cm} \subfigure[]{ \includegraphics[width=0.225\textwidth]{figures/jetless3.pdf}}\hspace{.8cm} \subfigure[]{\includegraphics[width=0.225\textwidth]{figures/jetless4.pdf}}} \caption{Typical contributions to events with two doubly-charged bosons in the final state and no extra jets. (a) and (b): contributions due to the mediation of a scalar (a) and a vector (b) boson. (c): $t$-channel exchange of exotic quarks $Q$.} \label{jetless} \end{figure} As in our previous work, we choose a specific benchmark point, obtained after scanning the parameter space by employing the \texttt{SARAH 4.9.3} program and its UFO \cite{ufo} interface. In doing so, we make use of the analytical expressions of the mass matrices and of the minimization conditions of the potential in Eqs.~(\ref{minpot1})--(\ref{minpot2}). The mass eigenvalues are computed numerically, after varying all the quartic couplings in Eq.~(\ref{pot}) between $-1$ and $1$ and the vacuum expectation value $v_\rho$, responsible for the first breaking of the 331 model, between 2 and 4 TeV. 
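The scan logic amounts to simple rejection sampling over the couplings. In the sketch below, \texttt{toy\_spectrum} is a hypothetical stand-in for the actual numerical diagonalization of the 331 mass matrices (which is far more involved); only the accept/reject flow is meant to be representative:

```python
import random

# Illustrative rejection scan: sample couplings, keep points passing the cuts.
def toy_spectrum(lams, v_rho):
    # Hypothetical stand-in for the diagonalization of the Higgs mass matrices:
    # one light, SM-like state and one heavy state scaling with v_rho.
    m_h1 = 125.0 * (1.0 + 0.2 * lams[0])
    m_heavy = lams[1] * v_rho          # negative values model unphysical points
    return [m_h1, m_heavy]

def scan(n_trials=10000, seed=1):
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_trials):
        lams = [rng.uniform(-1.0, 1.0) for _ in range(4)]   # quartics in [-1, 1]
        v_rho = rng.uniform(2000.0, 4000.0)                 # GeV, 2-4 TeV range
        masses = toy_spectrum(lams, v_rho)
        # acceptance: positive masses and an SM-like Higgs near 125 GeV
        if all(m > 0 for m in masses) and abs(masses[0] - 125.0) < 3.0:
            accepted.append((lams, v_rho, masses))
    return accepted

points = scan()
```

Each accepted point would then be checked against the additional requirements described below (SM-like Higgs couplings, exclusion limits) before being promoted to a benchmark.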
Possible benchmark points are then chosen in such a way that all particle masses are positive, the SM-like Higgs boson has a mass of about 125 GeV and the point is consistent with the current LHC exclusion limits on new-physics scenarios. Moreover, as discussed in \cite{cccf}, we require the couplings of the lightest neutral Higgs boson to the Standard Model fermions and bosons to be the same as in the Standard Model, within 10\% accuracy. For the sake of comparison, we also choose the doubly-charged vectors and scalars to have roughly the same mass, just above the present ATLAS and CMS exclusion limits \cite{atlashh,cmshh} on doubly-charged Higgs bosons. Furthermore, we are obviously interested in enhancing possible 331-model signals. Limiting ourselves to the Higgs and exotic sectors, the particle masses in our reference point are quoted in Table~\ref{bp}. \begin{table} \begin{center} \renewcommand{\arraystretch}{1.4} \begin{tabular}{|c|c|c|} \hline\hline \multicolumn{3}{|c|}{Benchmark Point}\\ \hline \hline $m_{h_1}=126.3$ GeV & $m_{h_2}=1804.4$ GeV& $m_{h_3}=2474.0$ GeV\\ \hline $m_{h_4}=6499.8$ GeV & $m_{h_5}=6528.1$ GeV& \\ \hline $m_{a_1}=1804.5$ GeV& $m_{a_2}=6496.0$ GeV& $m_{a_3}=6528.1$ GeV\\ \hline $m_{h^\pm_1}=1804.5$ GeV& $m_{h^\pm_2}=1873.4$ GeV & $m_{h^\pm_3}=6498.1$ GeV \\ \hline $m_{h^{\pm\pm}_1}=878.3$ GeV& $m_{h^{\pm\pm}_2}=6464.3$ GeV & $m_{h^{\pm\pm}_3}=6527.7$ GeV \\ \hline $m_{Y^{\pm\pm}}=878.3$ GeV& $m_{Y^\pm}=881.8$ GeV & $m_{Z'}=3247.6$ GeV\\ \hline $m_D=1650.0$ GeV & $m_S=1660.0$ GeV& $m_T=1700.0$ GeV\\ \hline \hline \end{tabular} \caption{Benchmark point for our collider study, consistent with the $\sim 125$ GeV Higgs mass and the present exclusion limits on BSM physics.}\label{bp} \end{center} \end{table}\par From Table~\ref{bp}, we learn that the 331 model, as expected, after including the sextet sector, yields 5 neutral scalar ($h_1,\dots, h_5$), 3 pseudoscalar ($a_1$, $a_2$ and $a_3$) and 3 singly-charged ($h^\pm_1$, $h^\pm_2$, 
$h^\pm_3$) Higgs bosons: the lightest, $h_1$, is SM-like, whereas the others have masses between 1.8 and 6.5 TeV. In particular, $h_2$ is roughly degenerate with $a_1$ and $h^\pm_1$, while $h_4$, $h_5$, $a_2$, $a_3$, $h^\pm_2$ and $h^\pm_3$ all have masses of about 6.5 TeV. As for the doubly-charged particles, both $Y^{\pm\pm}$ and $h_1^{\pm\pm}$ have masses around 878 GeV, just above the current exclusion limit for doubly-charged scalars, while the other scalars $h_2^{\pm\pm}$ and $h_3^{\pm\pm}$ are in the 6.5 TeV range and the singly-charged vector $Y^\pm$ is roughly as heavy as the doubly-charged one. In our scenario, doubly-charged vectors and scalars decay only into lepton pairs, with branching ratio 1/3 for each lepton family ($ee$, $\mu\mu$ or $\tau\tau$). The exotic quarks $D$, $S$ and $T$ in our reference point have instead masses between 1.65 and 1.70 TeV. In principle, such exotic quarks can be produced in pairs at the LHC, with cross sections in the range of 0.5--0.7~fb at 13 TeV and 0.8--1.1~fb at 14 TeV, and may deserve a complete phenomenological analysis, especially in the high-luminosity LHC phase. Nevertheless, in the present paper we prefer to concentrate on the bilepton phenomenology and defer a thorough investigation of the production and decays of exotic quarks of charge 4/3 and 5/3 to future work \cite{ccgpp}. The $Z'$ boson deserves further comments. In \cite{dion} the relation \begin{equation}\label{zpy} \frac{m_{Y^{++}}}{m_{Z'}}\simeq \frac{\sqrt{3-12\sin^2\theta_W}}{2\cos\theta_W} \simeq 0.27 \end{equation} was derived between the $Z'$ and vector-bilepton masses; indeed, Eq.~(\ref{zpy}) is satisfied to good accuracy by the values in Table~\ref{bp}. Moreover, in our benchmark scenario the $Z'$ width is almost 700 GeV and, as found in \cite{Dumm} when exploring $Z'$ bosons in 331 models, our $Z'$ is leptophobic. 
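The relation in Eq.~(\ref{zpy}) is easy to check numerically against the benchmark point; a quick sketch, assuming $\sin^2\theta_W\simeq0.231$:

```python
import math

# Predicted mass ratio m_Y++ / m_Z' in the minimal 331 model, Eq. (zpy)
s2w = 0.231                                   # sin^2(theta_W), assumed value
predicted = math.sqrt(3.0 - 12.0 * s2w) / (2.0 * math.sqrt(1.0 - s2w))

# Ratio realized at the benchmark point of Table bp
benchmark = 878.3 / 3247.6
```

Both numbers come out close to 0.27, confirming that the benchmark masses respect the relation.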
Therefore, the searches for $Z'$ bosons carried out so far by ATLAS \cite{atlaszp} and CMS \cite{cmszp}, which have set exclusion limits around 4 TeV on their mass, cannot be directly applied to our scenario, since such searches were mostly performed for narrow resonances decaying into dilepton final states \footnote{See, e.g., Ref.~\cite{araz} on how the $Z'$ exclusion limits are modified in leptophobic models.}. In our reference point, the $Z'$ decays dominantly into $q\bar q$ pairs, amounting to almost 70\% of the total width, and has a significant branching ratio into $Y^{++}Y^{--}$ pairs, about 14\%; its decay rate into doubly-charged scalars $h^{++}_1h^{--}_1$ is instead rather small, roughly 1\%. Such a difference can be easily explained in terms of the particle spins: the $Z'$ has spin 1 and therefore, in the decay into $h^{++}_1h^{--}_1$, only the amplitude where the $Z'$ has zero helicity with respect to the $h^{++}_1h^{--}_1$ axis contributes. On the contrary, in a decay into vector states, $Z'\to Y^{++}Y^{--}$, amplitudes with helicity 0 and $\pm 1$ with respect to the $Y^{++}Y^{--}$ direction play a role. In the following, we shall present results for the production of two same-sign lepton pairs at the LHC, mediated by either vector or scalar bileptons in the 331 model: \begin{equation} pp\to Y^{++}Y^{--}(H^{++}H^{--})\to (l^+l^+)(l^-l^-), \label{signal} \end{equation} where $l=e,\mu$ and, for simplicity, we have denoted by $H^{\pm\pm}$ the lightest doubly-charged Higgs boson $h_1^{\pm \pm}$. The amplitude of process (\ref{signal}) is generated by the \texttt{MadGraph} code \cite{madgraph}, interfaced with \texttt{HERWIG}~6 for shower and hadronization \cite{herwig}. We have set $\sqrt{s}=13$~TeV and chosen the NNPDFLO1 parton distributions \cite{nnpdf}, which are the default sets in \texttt{MadGraph}. 
As in Ref.~\cite{cccf}, and along the lines of \cite{atlashh,cmshh}, we set the following acceptance cuts on the lepton transverse momentum ($p_T$), rapidity ($\eta$) and invariant opening angle ($\Delta R$): \begin{equation}\label{cuts} p_{T,l}>20~{\rm GeV}, \ |\eta_l|<2.5,\ \Delta R_{ll}>0.1. \end{equation} We point out that, since our signal originates from the decay of particles with mass of almost 1 TeV, the final-state electrons and muons will be highly boosted; therefore, the actual values of the cuts in (\ref{cuts}) are not really essential, especially the transverse-momentum cut.\footnote{Our cuts are in fact a conservative choice of the so-called overlap-removal algorithm implemented by ATLAS to discriminate lepton and jet tracks \cite{overlap}.} At the 13 TeV LHC, after such cuts are applied, the LO cross sections, computed by \texttt{MadGraph}, read \begin{equation}\sigma(pp\to YY\to 4l)\simeq 4.3~{\rm fb}\ ;\ \sigma(pp\to HH\to 4l)\simeq 0.3~{\rm fb} .\end{equation} Once again, the difference in the cross sections can be explained in terms of the spin of the intermediate bileptons. In the centre-of-mass frame, in fact, for scalar production, only the matrix element where the vector ($\gamma$, $Z$ and $Z'$) has helicity zero with respect to the $H^{++}H^{--}$ direction contributes; for decays into $Y^{++}Y^{--}$ final states, the $\pm 1$ helicity amplitudes are to be taken into account as well. For processes mediated by scalars ($h_i\to H^{++}H^{--}/Y^{++}Y^{--}$), the vector final state has even more helicity options, since $Y^{++}$ and $Y^{--}$ can arrange their helicities in a few different ways to satisfy angular-momentum conservation, i.e. a vanishing total helicity in the centre-of-mass frame. We therefore confirm the findings of Ref.~\cite{ramirez}, where a higher cross section for vector-bilepton production with respect to the scalars was obtained at 7 and 14 TeV. 
As for the background, final states with four charged leptons may occur through intermediate $Z$-boson pairs: \begin{equation} pp\to ZZ\to (l^+l^-)(l^+l^-). \label{zz} \end{equation} After setting the same cuts as in (\ref{cuts}), the LO cross section of the process (\ref{zz}) is given by \begin{equation} \sigma(pp\to ZZ\to 4l)\simeq 6.1~{\rm fb}. \end{equation} In principle, among the backgrounds, one should also consider SM Higgs-pair production ($hh$), with $h\to l^+l^-$. However, because of the tiny coupling of the Higgs boson to electrons and muons, such a background turns out to be negligible. Assuming a luminosity of 300 fb$^{-1}$, the numbers of same-sign electron/muon pairs in processes mediated by $YY$, $HH$ and $ZZ$ are $N(YY)\simeq 1302$, $N(HH)\simeq 120$, $N(ZZ)\simeq 1836$. Defining the significance $s$ to discriminate a signal $S$ from a background $B$ as \begin{equation} s=\frac{S}{\sqrt{B+\sigma_B^2}}, \end{equation} $\sigma_B$ being the systematic error on $B$, which we estimate as $\sigma_B\simeq 0.1 B$, we find that the $YY$ signal can be separated from the $ZZ$ background with a significance $s\simeq 6.9$, while $HH$ production is overwhelmed by both the Standard Model background ($s=0.6$) and possible vector-bilepton pairs ($s=0.9$). At 14 TeV, the cross sections read $\sigma(YY)\simeq 6.0$~fb, $\sigma(HH)\simeq 0.4$~fb and $\sigma(ZZ)\simeq 6.6$~fb, leading to $N(YY)\simeq 17880$, $N(HH)\simeq 1260$ and $N(ZZ)\simeq 19740$ events with 3000 fb$^{-1}$ of data. Therefore, in the high-luminosity phase of the LHC, one will be able to discriminate vector-like bileptons from the background with a significance of about 9 standard deviations, while one is still unable to distinguish doubly-charged Higgses from $YY$ ($s=0.70$) or $ZZ$ ($s=0.64$) pairs. 
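The significance numbers quoted above follow directly from the event counts; a minimal sketch reproducing them:

```python
import math

def significance(S, B, sys=0.10):
    """s = S / sqrt(B + sigma_B^2), with a systematic error sigma_B = sys * B."""
    return S / math.sqrt(B + (sys * B) ** 2)

# 13 TeV, 300/fb: quoted numbers of same-sign electron/muon pairs
N_YY, N_HH, N_ZZ = 1302, 120, 1836
s_yy_vs_zz = significance(N_YY, N_ZZ)   # ~ 6.9: YY separable from ZZ
s_hh_vs_zz = significance(N_HH, N_ZZ)   # ~ 0.6: HH buried in the SM background
s_hh_vs_yy = significance(N_HH, N_YY)   # ~ 0.9: HH buried under YY pairs

# 14 TeV, 3000/fb
s14_yy_vs_zz = significance(17880, 19740)   # ~ 9
```

Note that scaling up the luminosity improves the $YY$ significance only mildly, because the systematic term $\sigma_B=0.1B$ grows linearly with the background.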
Besides total cross sections and significances, computed employing the foreseen number of events, it is instructive to study some final-state observables, in order to understand how one can possibly detect (mostly vector-like) bileptons at the LHC. In Fig.~\ref{results} we present the transverse momentum of the hardest and next-to-hardest lepton ($p_{T,1}$ and $p_{T,2}$), the invariant opening angle between them ($\Delta R$), the rapidity of the hardest lepton ($\eta_1$), the invariant mass ($m_{ll}$) and the polar angle ($\theta_{ll}$) between same-sign leptons. In each panel, the results corresponding to $YY$ (black solid histogram), $HH$ (red dotted histogram) and $ZZ$ production (blue dashed histogram) are displayed. Unlike Ref.~\cite{cccf}, where all our spectra were normalized to 1, in Fig.~\ref{results} all distributions are normalized in such a way that the height of each bin, such as $N(p_T)$, yields the expected number of events for such values of $p_T$, $\eta$, $\Delta R$, $\theta$ and $m_{ll}$ for a luminosity of 300~fb$^{-1}$ and $\sqrt{s}=13$~TeV. As one could foresee from the cross-section and significance evaluations themselves, the general feature of such spectra is that the 331 signal can be discriminated from the $ZZ$ background, while it is not possible to detect doubly-charged Higgs pairs, as the corresponding leptonic spectra are always significantly below those yielded by the $ZZ$ background and $YY$-pair production. As for the transverse momenta ($p_{T,1}$ and $p_{T,2}$), the $ZZ$ distributions are rather sharp and peak at low $p_T$, while those yielded by the $HH$ and $YY$ bileptons are much broader and peak at about 1 TeV ($p_{T,1}$) and at roughly 700 GeV ($p_{T,2}$, $YY$) and 800 GeV ($p_{T,2}$, $HH$). Such a result is to be expected, since the $Z$ decays into opposite-sign leptons, while $Y^{\pm\pm}$ and $H^{\pm\pm}$ decay into same-sign electrons and muons. As anticipated, for every value of $p_T$, the $HH$ spectrum is well below the $YY$ one. 
Regarding the rapidity ($\eta_{1,l}$) distribution of the leading lepton, the 331-model spectra are narrower than the background and yield a larger event fraction around $\eta_{1,l}\simeq 0$. Once again, since the $ZZ$ background and the $YY$ signal predict numbers of events of a similar order of magnitude, while those due to scalar pairs are much lower, the rapidity spectrum can be useful to detect possible vector bileptons, but not to separate them from doubly-charged Higgses. As for the polar angle between same-sign leptons $\theta_{ll}$, the $YY$-inherited spectrum is peaked around $\theta_{ll}\simeq 1.2\simeq 70^\circ$, while the background is much broader and maximal at about $\theta\simeq 0.7\simeq 40^\circ$; the Higgs-like signal is instead negligible. Concerning the same-sign lepton invariant mass $m_{ll}$, it is of course easy to discriminate the 331 signals, peaking at $m_{ll}\simeq 900$~GeV, from the $Z$-pair background, which is instead a broad distribution, significant up to about 350 GeV and maximal around 70 GeV. As for the bileptons, both invariant-mass spectra are pretty narrow, which reflects the fact that $Y^{++}$ and $H^{++}$ have widths roughly equal to 7 GeV and 400 MeV, respectively. The distribution of the invariant opening angle $\Delta R$ between the hardest and next-to-hardest leptons is rather broad for the background, significant for $0<\Delta R<6$, while $YY$ pairs yield a distribution in the range $1<\Delta R<5$, which, for $\Delta R\simeq 3$, even leads to more events than the background. The $HH$ signal is possibly visible only for $2<\Delta R<4$, but even in this range it is negligible with respect to the SM background and the $YY$ signal. Our conclusion is therefore that, as already argued in \cite{cccf}, the LHC will be sensitive to the spin-1 bileptons of the 331 model already at 13 TeV and 300~fb$^{-1}$, and even more in the high-luminosity regime. 
If the LHC does not see any bilepton, it may mean either that bileptons do not exist or that they are scalars, since we have shown that the production of doubly-charged Higgs bosons in the 331 model is overwhelmed by the SM background, as well as by vector bileptons. \begin{figure}[ht] \centering \mbox{\subfigure[]{ \includegraphics[width=0.450\textwidth]{figures/hhyy_pt1.pdf}}\hspace{.1cm} \subfigure[]{\includegraphics[width=0.450\textwidth]{figures/hhyy_pt2.pdf}}} \mbox{\subfigure[]{\includegraphics[width=0.450\textwidth]{figures/hhyy_mll.pdf}}\hspace{.1cm} \subfigure[]{ \includegraphics[width=0.450\textwidth]{figures/hhyy_deltar.pdf}}} \mbox{\subfigure[]{\includegraphics[width=0.450\textwidth]{figures/hhyy_eta.pdf}}\hspace{.1cm} \subfigure[]{\includegraphics[width=0.450\textwidth]{figures/hhyy_theta.pdf}}} \caption{Distributions of the transverse momentum of the hardest (a) and next-to-hardest lepton (b), same-sign lepton invariant mass (c), invariant opening angle between the two hardest leptons (d), rapidity of the leading lepton (e), polar angle between same-sign leptons (f). The solid black histograms are the spectra yielded by vector bileptons, the red dots correspond to scalar doubly-charged Higgs bosons, the blue dashes to the $ZZ$ Standard Model background.} \label{results} \end{figure} \section{Discussion} We explored scalar ($H^{\pm\pm}$) and vector ($Y^{\pm\pm}$) bileptons in the framework of the 331 model, which has the appealing features of predicting anomaly cancellation and of treating the third quark family differently from the first two. We focused on the family embedding in the minimal 331 model and paid special attention to its scalar content, and especially to the sextet sector, whose presence enriches the particle spectrum and, in particular, leads to the prediction of doubly-charged scalar Higgs bosons. Such scalar bileptons can in principle compete with vector bileptons as a source of same-sign lepton pairs at the LHC. 
In fact, previous investigations of vector bileptons, such as \cite{nepo}, had put exclusion limits on the mass of $Y^{\pm\pm}$ by exploiting the experimental searches for scalar $H^{\pm\pm}$, as if the bilepton spin had a negligible effect on the expected and observed limits at 95\% confidence level. We implemented the 331 model, including the new sextet content, in a full Monte Carlo simulation framework and chose a benchmark point of the parameter space, consistent with the present exclusion limits on BSM physics. We studied jetless events, with doubly-charged vectors and scalars produced at the LHC in Drell--Yan interactions mediated by photons, $Z$, $Z'$ and neutral Higgs bosons, as well as in processes where exotic quarks are exchanged in the $t$-channel. It was found that vector bileptons can be produced with a significant cross section already in the present LHC run at 13 TeV and that they can be easily discriminated from the SM background, by exploring distributions like the lepton transverse momentum, invariant mass, rapidity or invariant opening angle. The production of doubly-charged scalars is in principle interesting, but, because of the helicity suppression, its cross section is too low for them to be clearly visible at the LHC and separated from the background and the vector-bilepton signal. Our study therefore confirms that, as already anticipated in a previous analysis, the production of vector-bilepton pairs is the striking feature of the 331 model and we believe that, given the large cross section and the easy separation from the background, with a significance between $6\sigma$ and $9\sigma$, it deserves a full experimental search. 
As for doubly-charged scalars, although the 331 framework discussed in this work leads to a production cross section at the LHC that is too low, owing to the helicity suppression required by angular-momentum conservation, we plan to explore how much this conclusion depends on the actual setup for the family embedding and on the choice of the reference point, and whether there could be other realizations of the model yielding a visible LHC rate even for scalar bileptons. Besides, it will be very interesting to study the phenomenology of the exotic quarks predicted by our 331 model and the LHC significance reach, especially in the high-luminosity and high-energy phases. This work is in progress as well. \section*{Acknowledgement} We thank Antonio Sidoti for discussions on the cuts implemented in Eq.~(\ref{cuts}) and on the overlap-removal algorithm implemented by the ATLAS Collaboration. This work is partially supported by INFN `Iniziative Specifiche' QFT-HEP and ENP.
\section{Introduction} The standard approach to measuring the radio luminosity function (RLF) requires a sample with distance information to convert fluxes to luminosities. These distances typically come from a cross-match to existing optical redshift surveys. Millijansky radio sources have sky densities of a few tens per square degree \citep{con98,mau03}, hence wide-field ($\gtrsim100$\,deg$^2$) spectroscopic surveys are required to build up significant statistics in the RLF. These samples exist in the local Universe (e.g. 6dF, SDSS) and, in combination with wide-field mJy radio catalogues, the local RLF has been shown to be a combination of two main populations: active galactic nuclei (AGN) with a double power law LF (similar to that of quasars, e.g. \citealt{boy00}) at high luminosities and star-forming galaxies with a Schechter function LF at lower luminosities \citep{bes05,mau07}. Wide-field spectroscopic surveys are not deep enough to probe the overall galaxy population at higher redshift. Luminous red galaxies (LRGs) are bright enough to produce large samples up to $z\sim0.7$ \citep{eis01,can06}. While this is hardly a representative slice of the galaxy population, local surveys show that most radio sources in the $10^{24}<L<10^{26}$\,W/Hz regime (which translates to fluxes of $S\sim1$--100\,mJy at $z\sim0.7$) are associated with massive red galaxies \citep{c+b88,mau07}. Using LRGs, \citet{sad07} found evolution described well by shifting the AGN portion of the local RLF in the luminosity direction by $(1+z)^{2.0}$. These results were in broad agreement with \citet{c+j04}, who used galaxies from the Sloan Digital Sky Survey to show that fainter $L<10^{25}$\,W\,Hz$^{-1}$\,sr$^{-1}$ radio sources evolved more slowly than brighter ones up to $z\sim0.5$. Deep pencil-beam optical surveys offer higher-redshift galaxy samples that, when combined with deep radio imaging, constrain the sub-mJy RLF.
At these lower flux densities ($\lesssim0.1$\,mJy) radio surveys become dominated by star-forming galaxies \citep{sey08,pad09} that show strong luminosity evolution $\propto (1+z)^{\sim2.5}$ \citep{pad11,mca13}, with some contribution from radio-quiet AGN \citep{j+r04,sim06}. The lower-luminosity AGN found in these surveys show somewhat less evolution than the \citet{sad07} result. \citet{pad11} find no evolution in their AGN to $z\sim5$, and when they remove possible star-formation derived emission they find negative evolution. They suggest this may result from extremely high-redshift objects in their sample, with the RLF cutting off and declining for $z\gtrsim 1-2$. Below these extreme redshifts \citet{smo09} and \citet{mca13} find slow but significant evolution in their AGN: $\propto (1+z)^{1.2}$ and $(1+z)^{0.8}$ respectively. Small radio samples with complete spectroscopic coverage constrain the bright end of the RLF at high $z$ \citep{d+p90,wil01}. These studies show that at bright fluxes radio sources are found up to high redshift ($z\sim3$), indicating the difficulty in obtaining complete spectroscopy on large radio samples. They also found strong evolution in the RLF. \citet{wil01} used a combination of tiered radio samples with a faintest limit of $S_{\rm 151\,MHz}>500$\,mJy to model the RLF. They divided their LF model into two populations, roughly separated at $L\sim 10^{26}$\,W/Hz. The lower luminosity population consists primarily of FRI objects, or FRIIs that show little evidence for an AGN in the optical, while the higher luminosity population contains bright FRII sources often associated with optical quasars. The brighter population's LF increases towards higher $z$, peaking at $z\sim 2$ and then falling. The fainter end is described by a Schechter function that increases until $z\sim1$, after which it remains constant.
In reality the lower luminosity population is poorly constrained for $z\gtrsim 1$, although further strong evolution is ruled out by source counts. Above $z\sim 0.7$ the mJy RLF is difficult to constrain. It lies between the parameter spaces constrained by pencil-beam surveys, which run out of radio sources at higher flux densities, and targeted surveys, which require large amounts of telescope time to push fainter. In this regime the RLF has been estimated from samples that have semi-complete spectroscopic coverage supplemented by photometric redshifts. \citet{wad01} used a 1\,mJy limited sample of 72 galaxies with $65$\%\ spectroscopic completeness to show that the evolution of fainter $\sim10^{24}$\,W/Hz radio sources peaks later than that of brighter $\sim10^{26}$\,W/Hz sources. \citet{rig11} used a tiered sample that included the \citet{wad01} sample and pushed further down to 0.1\,mJy at 1.4\,GHz (for $z<1.3$) using photometric redshifts from the COSMOS field. They confirmed this differential evolution with radio luminosity, analogous to the `downsizing' seen in star formation rates and X-ray/optical AGN. The flux range $1\lesssim S \lesssim 100$\,mJy is of particular interest since it samples the RLF in the luminosity regime where the bulk of the energy density from AGN is emitted at redshifts $0.5\lesssim z \lesssim 3.5$, the peak of AGN activity in the Universe. Hence this flux range is fundamental to our understanding of radio AGN and their impact on their surroundings. Constraining the LF in this parameter space is difficult and has thus far only been possible with small samples with incomplete spectroscopy. In this paper we look at an alternative approach. We use spectroscopic quasars as a tracer of the large-scale structure at high redshift and cross-correlate these with the NVSS to determine the RLF. Throughout this work we will assume a standard flat $(\Omega_{\rm m},\Omega_{\Lambda})=(0.3,0.7)$, $h=0.7$ cosmology.
The paper is organised as follows: we give the background to our technique in section~\ref{sec:oot}; in section~\ref{sec:data} we introduce the data sets we use; and our analysis is described in section~\ref{sec:anal}. In section~\ref{sec:results} we show our results and discuss their meaning for the RLF in section~\ref{sec:rlf} and for high-redshift clustering in section~\ref{sec:clust}. We summarise our results in section~\ref{sec:sum}. \section{Review of technique} \label{sec:oot} The technique we follow exploits a data set that has redshift information to constrain a sample that does not \citep{phi85,p+s87}. This approach has been used for many years and has recently been exploited to reproduce the redshift distribution of photometric samples \citep{new08,mat10} and, similarly, to calibrate photometric redshifts \citep{sch10a}. Here we briefly outline the process we will follow, as described in \citet{phi85} and \citet{p+s87}. We begin with the real-space correlation function $\xi(r,z)$, defined such that, for a galaxy population with space density $\phi(L,z)$, the probability of finding a galaxy in a volume $\delta V$ a distance $r$ from an arbitrary galaxy is \begin{equation} \delta P = \phi(L,z)[1+\xi(L,z,r)]\delta V. \end{equation} In the linear halo--halo regime ($1\lesssim r \lesssim 100$\,Mpc) the correlation function is well described by a power law $\xi = (r_0/r)^\gamma$ with $\gamma\sim 1.8$ (e.g. \citealt{pee80}). The angular statistic $\Sigma_{excess}$ is defined as the excess number of galaxies with luminosity $L$ to $L+\delta L$ within a projected radius $R$ of an arbitrary galaxy with known redshift $z$.
Assuming a power law form for the correlation function, and that the evolution of $r_0$ and $\phi$ is minimal over a clustering length, \begin{equation} \Sigma_{excess}(L,z) = \frac{2\pi G(\gamma)r_0^\gamma(L,z)R^{3-\gamma}\phi(L,z)}{3-\gamma}\delta L \label{equ:sig_exes} \end{equation} where $G$ is a constant defined by $\gamma$ (see \citealt{phi85}). Importantly, $\Sigma_{excess}$ is trivial to measure between a sample with redshifts and one without. We may then constrain the clustering strength $r_0(z,L)$ and luminosity function $\phi(z,L)$ of a population with no redshifts. \section{Data} \label{sec:data} In this paper we aim to measure the radio luminosity function and clustering strength of high-redshift radio sources. We do this by counting quasar--radio source pairs between a spectroscopic quasar sample that has redshifts and a radio catalogue that has none. We take the NVSS \citep{con98} as our parent sample of radio sources. The NVSS covers the whole sky north of $-30^\circ$, but for our purposes we are only interested in extragalactic sources and so cut out all objects with galactic latitude $|b|<10^\circ$ (as well as $Dec.<-30^\circ$). We also make a flux cut at 3\,mJy, above which the NVSS is $\sim90$\,\%\ complete \citep{con98}, leaving 1,062,117 radio sources in our sample, the vast majority of which have no distance estimate. The quasar sample we use is a combination of the Sloan Digital Sky Survey (SDSS) DR7 quasar catalogue \citep{sch10b} and the Baryon Oscillation Spectroscopic Survey (BOSS) DR10 quasar catalogue \citep{par13}. We combine the two samples since they cover different redshift ranges: DR7 $0.1<z<2$, DR10 $2<z<3.5$. Neither sample has a single consistent selection function and both are somewhat unevenly distributed across the sky. To flatten the DR7 catalogue we follow the simple cut made by \citet{sch10b} and only allow objects with $i<19.1$: the magnitude limit of the main SDSS quasar survey.
For the BOSS sample we make a similar cut of $i<20$, where the number counts begin to turn over. Further cuts to these samples are required to construct accurate random catalogues, as discussed in the next section. \begin{figure} \begin{center} \includegraphics[width=0.35\textwidth,angle=-90]{fig1.eps} \caption{ The redshift distribution of our final quasar sample. The fine lines show the DR7 (black) and DR10 (grey/red) samples separately and the heavy line shows the total. This demonstrates the extra redshift range from $z\sim 2$ to 3.5 made available by the inclusion of the DR10 quasars.} \end{center} \label{fig:zdist} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{fig2.eps} \caption{ The distribution of our final quasar samples on the sky. Red and blue show the SDSS DR7 and BOSS samples respectively, while the black lines show the Dec.$>-30^\circ$ and galactic $|b|>10^\circ$ cuts that define the NVSS area we consider.} \end{center} \label{fig:sky} \end{figure} \subsection{Random catalogues} To estimate the excess $\Sigma_{excess}$ we need random catalogues matched to our radio and quasar samples to form a comparison. We create the random radio catalogue by generating random sky positions with $Dec.>-30^\circ$ and $|b|>10^\circ$. We also assign each source a flux drawn at random from the NVSS. To allow for any variation in the flux distribution of the NVSS due to the changing beam with declination, we discretise the NVSS catalogue into one-degree declination strips and only draw a flux value for a random source from objects within the same declination strip. To create the random quasar catalogue we use {\sc mangle} and the `SDSS DR72' radial selection function from the mangle website \citep{ham04,bla05,swa08}. Note the DR72 mask was developed to reproduce the sky coverage of the main DR7 spectroscopic galaxy survey, and does not include additional fields that were in the DR7 quasar catalogue.
Therefore we apply this mask to cut the area of our real DR7 quasar catalogue as well. We then produce a random catalogue from the DR72 mask with ten times the number of random objects as quasars. To make the random BOSS DR10 sample we again use {\sc mangle} and the same DR72 mask. Note this mask does not include approximately a quarter of the BOSS survey, which was only covered photometrically after DR7. However, the DR72 mask reproduces the small-scale coverage of the survey and so we accept this loss of objects. To cut the DR72 area to just that observed by BOSS we take the field centers of the spectroscopic observations from the SDSS website and only include objects within $1.49^\circ$ of a field center. Again, we produce this random catalogue with ten times the number of objects as BOSS quasars. Our final quasar catalogue has 80,494 objects: 63,682 from DR7 and 16,812 from DR10. Figure~\ref{fig:zdist} shows the redshift distributions of the final samples split by survey. Clearly the inclusion of the BOSS DR10 quasars extends the redshift coverage of our sample from $z\sim 2.2$ to 3.5. The distribution of these quasars on the sky, along with the NVSS boundaries, is shown in Figure~\ref{fig:sky}. \section{Analysis} \label{sec:anal} The excess number of radio sources around quasars at a given redshift and radio luminosity (calculated assuming the redshift of the quasar), $\Sigma_{QR}(z,L)$, constrains the cross-clustering strength $r_{0QR}(z,L)$ and the radio luminosity function $\phi(z,L)$ (Equation~\ref{equ:sig_exes}). By assuming prior knowledge of either $r_{0QR}$\ or $\phi$ we can then constrain the other. In this section we describe the models we will assume for $r_{0QR}$\ and our method for estimating $\Sigma$. \subsection{The clustering strength of quasars and radio sources} In reality the cross-clustering strength $r_{0QR}$\ that appears in Equation~\ref{equ:sig_exes} is rarely measured.
More commonly the autocorrelation strengths of quasars, $r_{0QQ}$, or radio sources, $r_{0RR}$, are studied. We will relate these quantities by assuming linear bias such that $r_{0QR}^2 \sim r_{0QQ} \times r_{0RR}$ (e.g. \citealt{wak08b}) and model $r_{0QQ}$ and $r_{0RR}$ as functions of redshift and luminosity based on recent analyses. Studies of quasar clustering have repeatedly shown that the quasar correlation function is roughly independent of quasar luminosity \citep{daa08,sha11}. $r_{0QQ}$ increases slowly with redshift and, since the mass clustering falls as redshift increases, the quasar bias rises quickly with redshift. Converting bias into mass via Press-Schechter theory implies that the average dark halo mass of quasars is roughly constant with $M_{DH}\sim10^{12}$\,M$_\odot$ at all redshifts \citep{croom05,mye06,ros09}. \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{fig3.eps} \caption{ The clustering strengths $(r_0)$ for the populations we are considering. Black points give the quasar autocorrelation strength from \citet{ros09} and the solid line is our simple fit to these. The dashed black line gives the constant radio galaxy autocorrelation strength from \citet{me11}. Grey (red online) lines show the correlation strengths assumed in the SKADS simulation: solid for radio-quiet quasars, dashed for FRIs and dotted for FRIIs.} \end{center} \label{fig:Qr0_z} \end{figure} Assuming there is no variation in quasar clustering strength with luminosity, we only need the variation with redshift. Figure~\ref{fig:Qr0_z} shows the autocorrelation clustering strength of quasars as a function of redshift from \citet{ros09}. We perform a $\chi ^2$ minimisation for the evolution of the clustering strength assuming the quadratic form \begin{equation} r_0 = a + bz^2, \end{equation} where $r_0$ has units of Mpc assuming $h=0.7$. We find $a=6.8\pm 0.31$ and $b=0.63 \pm 0.26$. We will use this empirical fit to estimate $r_{0QQ}(z)$.
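Since the model $r_0 = a + bz^2$ is linear in $(a,b)$, the $\chi^2$ minimisation reduces to a weighted linear least-squares solve in the basis $\{1, z^2\}$. A minimal sketch, using illustrative stand-in data rather than the actual \citet{ros09} measurements:

```python
import numpy as np

# Illustrative stand-in data -- NOT the published Ross et al. (2009) values.
z     = np.array([0.5, 1.0, 1.5, 2.0, 2.5])   # redshift
r0    = np.array([7.0, 7.4, 8.2, 9.3, 10.8])  # Mpc (h = 0.7)
sigma = np.array([0.4, 0.3, 0.3, 0.5, 0.9])   # 1-sigma errors, Mpc

# chi^2 minimisation of r0 = a + b z^2: weight each equation by 1/sigma
# and solve the resulting linear system in the basis {1, z^2}.
A = np.vstack([np.ones_like(z), z**2]).T
(a, b), *_ = np.linalg.lstsq(A / sigma[:, None], r0 / sigma, rcond=None)
print(f"a = {a:.2f} Mpc, b = {b:.2f} Mpc")
```

Because the basis functions are fixed, the weighted solve recovers the $\chi^2$ minimum in one step with no iterative optimiser.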
The clustering of mJy radio sources has been extensively studied at redshifts below $\sim0.8$ with samples cross-matched to optical spectroscopic or photometric galaxies \citep{p+n91,bra05,wak08a,don10,me11,lin14a}. At the radio luminosities sampled in those surveys ($L \gtrsim 10^{24}$\,W/Hz) the radio population is dominated by AGN typically hosted by LRGs. \citet{me11} showed little evolution in the clustering strength of these objects, with a constant $r_{0RR}\sim 11.5\,h^{-1}$\,Mpc. This broadly matches the lack of clustering evolution seen in optically selected LRGs \citep{bel04,wak06,bro07}. At redshifts greater than 0.8 there are few indications of the clustering of radio sources due to the lack of wide-field optical galaxy samples in this redshift regime. \citet{lin14b} used a photometric galaxy sample over a small field with deep radio observations to measure the cross-correlation of radio sources with IR galaxies. They found no variation in clustering strength with radio power, and while their correlation increased with redshift, this was primarily driven by evolution in their IR galaxy sample rather than in the radio sources. The dependence of clustering strength on radio luminosity is not well determined. In their angular correlation analysis of the NVSS, \citet{ove03} derived a correlation scale length of $r_{0RR}\sim6\,h^{-1}$\,Mpc for lower luminosity sources ($\lesssim 10^{26}$\,W/Hz), while the brighter, potentially FRII, sources had a scale length of $r_{0RR}\sim14\,h^{-1}$\,Mpc. On the other hand, clustering analyses of radio surveys matched to optical galaxies have found no luminosity dependence in the large-scale halo--halo regime \citep{don10,me11,lin14b}.
Note that since \citet{ove03} had no redshift information, a considerable series of assumptions about the radio population was required to derive their result; on the other hand, both \citet{don10} and \citet{me11} had few sources with $L>10^{26}$\,W/Hz in their samples, while \citet{lin14b} had none. Given the few constraints on $r_{0RR}$ we initially make the simplest empirical assumption: that the clustering strength of our radio sample is constant with redshift and luminosity, with $r_{0RR}=11.5\,h^{-1}$\,Mpc. Assuming linear bias and $r_{0QQ}(z)$ from \citet{ros09}, this makes up our empirical (EMP) model for $r_{0QR}(L,z)$. As an alternative, and as a check, we also consider the values assumed by \citet{wilm08} when modelling the radio sky. They followed \citet{ove03} and assumed considerably stronger clustering for the brightest radio sources. For $z<1.5$ they assumed constant dark halo masses of $10^{13}$ and $10^{14}h^{-1}$\,M$_\odot$ for FRI and FRII sources respectively. At high redshift the clustering strength of their FRII sources would become unphysically large, and so for $z>1.5$ they held the bias of their FRI and FRII sources constant. We make the simplistic assumption that all radio sources with $L<10^{26}$\,W/Hz are FRI sources, the rest being FRIIs. For radio-quiet AGN, essentially quasars, they assumed a constant halo mass of $3\times 10^{12}h^{-1}$M$_\odot$ with a similar redshift cut at $z=3$, above which the bias was held constant. We will refer to this alternative model for $r_{0QR}$ as the W08 model (see Figure~\ref{fig:Qr0_z} for a comparison of the differing models). \begin{figure} \begin{center} \includegraphics[width=0.37\textwidth,angle=-90]{fig4.eps} \caption{ The quasar--radio source angular correlation function of our sample. The vertical lines are at 2 and 20\,Mpc projected distance.
The redshift limits for each bin are given in the bottom left of the panels.} \end{center} \label{fig:angcor} \end{figure} \subsection{Removing radio-loud quasars} The statistic $\Sigma_{excess}$ defined in section~\ref{sec:oot} is the excess number of radio sources around quasars. In the derivation of Equation~\ref{equ:sig_exes} it is assumed that this excess comes only from the clustering of matter. Radio-loud quasars in our sample increase the measured $\Sigma$ and bias our results. To illustrate this effect, Figure~\ref{fig:angcor} shows the angular cross-correlation function $w_{QR}(\theta,z)$ for our sample of quasars (split into redshift bins) and the NVSS catalogue. The vertical lines in the figure show 2 and 20\,Mpc projected onto the angular scale at the redshift of the bins. The upturn below $\sim 2$\,Mpc is caused by a combination of radio-loud quasars and non-linear single-halo clustering (e.g. \citealt{b+w02}). We remove the radio-loud contribution by only counting pairs in the annulus between $R=2$ and 20\,Mpc. In this region the angular correlation function is well fit by a single power law, indicating that the spatial correlation function $\xi(r)$ is also approximately a power law. We choose 2\,Mpc as the lower limit both from inspection of Figure~\ref{fig:angcor} and because this corresponds roughly to the size of the largest known giant radio galaxies \citep{sar05}. \subsection{Calculating $\Sigma$} To estimate $\Sigma$ we count all radio sources with a projected distance between 2 and 20\,Mpc from a quasar in our samples, $N_{DD}$. In addition to these data-data pairs we also substitute our random catalogues and count $N_{DR}$, $N_{RD}$ and $N_{RR}$. This is done in redshift and luminosity bins, where the luminosities of the radio sources are calculated assuming the redshift of the quasar. Redshift bins are equally spaced over the interval sampled by our quasars, $0.1<z<3.6$.
Luminosity bins are logarithmically spaced over three orders of magnitude, the lower limit of which is the lowest luminosity observable in that redshift bin. The flux limit of the NVSS catalogue introduces a Malmquist bias; hence, in our summations each pair is weighted by $V_{bin}/V_{max}$. Following \citet{ham93} we estimate $\Sigma$ with \begin{equation} \Sigma = \frac{1}{N_Q}(N_{DD} - N_{RR}N_{DR}/N_{RD}), \end{equation} where $N_Q$ is the total number of quasars in the redshift bin. From Equation~\ref{equ:sig_exes} we relate $\Sigma$ to the luminosity function and clustering strength with \begin{equation} \Sigma_{excess}(L,z) = \frac{2\pi G(\gamma)r_0^\gamma(L,z)(R_{max}^{3-\gamma}-R_{min}^{3-\gamma})\phi(L,z)}{3-\gamma} \delta L. \label{equ:ourS1} \end{equation} Throughout this paper we will assume $\gamma=1.8$, hence $G=3.678$, and use $R_{max},R_{min}=20,2$\,Mpc. Equation~\ref{equ:ourS1} becomes \begin{equation} \Sigma_{excess}(L,z) = 657 r_{0QR}^{1.8}(L,z)\phi(L,z) \delta L. \label{equ_sig_sim} \end{equation} We assume this simple relationship between $\Sigma$, $r_0$ and $\phi$ throughout the rest of this work. We use jackknife resampling to estimate our errors, splitting our sample into 20 equally sized (by number of quasars) subfields by right ascension. We then calculate $\Sigma$ in each subfield and estimate the `field-to-field' errors from the rms of these values of $\Sigma$ (e.g. \citealt{saw11,me11}). \section{Excess pair counts} \label{sec:results} \begin{figure} \begin{center} \includegraphics[width=0.47\textwidth]{fig5.eps} \caption{ The average excess of radio sources around quasars, $\Sigma$, binned by redshift and luminosity. We show the plot with both a linear and a logarithmic scale since several of the points are scattered below $\Sigma=0$ by noise.} \end{center} \label{fig:sigma} \end{figure} Figure~\ref{fig:sigma} shows the values of $\Sigma_{excess}$ we calculate from our sample for six redshift and five luminosity bins.
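As a concrete sketch of the estimator and the inversion of Equation~\ref{equ_sig_sim}, the prefactor and both steps can be written in a few lines. The pair counts below are hypothetical, and we assume the four counts have been normalised to a common effective density (as the matched random catalogues provide):

```python
import math

GAMMA = 1.8
G_GAMMA = 3.678            # G(gamma) for gamma = 1.8 (Phillipps et al. 1985)
R_MIN, R_MAX = 2.0, 20.0   # projected annulus in Mpc

# Prefactor 2 pi G (R_max^(3-g) - R_min^(3-g)) / (3-g); evaluates to ~657.
PREFAC = 2 * math.pi * G_GAMMA * (R_MAX**(3 - GAMMA) - R_MIN**(3 - GAMMA)) / (3 - GAMMA)

def sigma_excess(n_dd, n_dr, n_rd, n_rr, n_q):
    """Excess radio sources per quasar; counts assumed density-normalised."""
    return (n_dd - n_rr * n_dr / n_rd) / n_q

def phi_from_sigma(sigma, r0_qr, delta_l):
    """Invert Sigma = PREFAC * r0^gamma * phi * delta_L for phi."""
    return sigma / (PREFAC * r0_qr**GAMMA * delta_l)

# Hypothetical, density-normalised pair counts for one (z, L) bin:
sig = sigma_excess(n_dd=1600.0, n_dr=1500.0, n_rd=1520.0, n_rr=1480.0, n_q=5000.0)
```

Evaluating the prefactor reproduces the factor of 657 quoted in Equation~\ref{equ_sig_sim}, so the code and the text use the same normalisation.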
The way $\Sigma$ is calculated means that it can be scattered to negative values by noise. Hence we show both a linear and a logarithmic scale to illustrate how the measured values and their errors behave. The points with $\Sigma<0$ and their errors still contain information about our sample and need to be included in any analysis to avoid introducing bias. Furthermore, the error bars are symmetric and approximately Gaussian in linear space. Hence, while plots may be in log space, any fitting to the data is performed in linear space. It is apparent from Figure~\ref{fig:sigma} that at fixed radio luminosity $\Sigma(L,z)$ increases slightly with redshift. This indicates that at least one of $r_{0QR}$ and $\phi$ is increasing with redshift. \begin{figure*} \begin{center} \includegraphics[width=0.7\textwidth,angle=-90]{fig6.eps} \caption{ The radio luminosity function. Solid points assume the EMP clustering model while open points are W08. The solid line shows the \citet{wil01} RLF model and the dashed line shows our evolving power-law model fit to the data at the midpoints of the bins. The redshift range for each bin is given at the top-right of each panel. The poor fit in the first redshift bin is due to the data being dominated by radio sources at the high-$z$ limit of the bin.} \end{center} \label{fig:LF1} \end{figure*} \section{The radio luminosity function} \label{sec:rlf} Figure~\ref{fig:LF1} shows the radio luminosity function we calculate for six redshift bins. We show the LF calculated assuming the EMP (solid points) and W08 (open points) clustering models. There is very little overall difference in the LFs between the clustering models. The solid lines in Figure~\ref{fig:LF1} show the \citet{wil01} RLF model at the mid-point redshift of each bin, and it is clear that in general our results are consistent with their model.
To describe our data further we initially fit an evolving power law $\phi = A (1+z)^\alpha(L/10^{26})^\beta$ and find $\alpha = 1.00 \pm 0.35$ for the EMP model and $1.02 \pm 1.05$ for W08. However, we find that the large redshift and flux ranges we sample mean this is not an accurate model for our data. To better illustrate the redshift evolution in our data we fix the luminosity exponent to that from our fit ($\beta = -0.99$ EMP; $-0.85$ W08) and just fit for the amplitude of the power law in each redshift bin. Figure~\ref{fig:LF2} shows the amplitude of the fitted power law at $10^{26}$\,W/Hz as a function of redshift. \subsection{Redshift cutoff} Figure~\ref{fig:LF2} indicates that the increase in space density slows and may turn over at higher redshifts. To include this behaviour we introduce a redshift limit and a separate parameter for high-redshift evolution \begin{equation} \Phi(L,z) = \bigg\{ \begin{array}{ll} (1+z)^{\alpha_l}(L/10^{26})^\beta & z\leq z_{\rm lim} \\ (1+z_{\rm lim})^{\alpha_l-\alpha_h}(1+z)^{\alpha_h}(L/10^{26})^\beta & z>z_{\rm lim}, \\ \end{array} \label{equ:pl_ev} \end{equation} where $\alpha_h$ and $\alpha_l$ are the evolution parameters above and below $z_{\rm lim}$ respectively. Since there can be relatively rapid evolution we bin our data into 25 redshift and 10 luminosity bins. We fit our model with a simple Markov chain Monte Carlo (MCMC) routine, iterated 500,000 times, to find the preferred values. We fix $\beta= -1$ in the fitting since there can be a degeneracy between the evolution parameter and $\beta$ due to our LFs being defined in different parts of luminosity space at different redshifts. Figure~\ref{fig:mcmc1} shows the probability distributions from our fitting using the EMP model, along with the best fit values. Clearly our data support strong $\alpha_l \sim 4$ evolution up to $z_{\rm lim}\sim 2$.
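Equation~\ref{equ:pl_ev} can be transcribed as a short function. The defaults below use the best-fit numbers quoted in this paper ($\beta$ fixed to $-1$, $z_{\rm lim}\simeq 1.95$, $\alpha_l\simeq 3.7$); $\alpha_h$, for which only an upper limit is obtained, is set to zero purely for illustration, and the overall amplitude is left arbitrary:

```python
def rlf_model(L, z, alpha_l=3.7, alpha_h=0.0, z_lim=1.95, beta=-1.0, amp=1.0):
    """Evolving power-law RLF: (1+z)^alpha_l evolution below z_lim,
    matched continuously onto (1+z)^alpha_h evolution above it."""
    shape = amp * (L / 1e26)**beta
    if z <= z_lim:
        return shape * (1.0 + z)**alpha_l
    # the (1+z_lim)^(alpha_l-alpha_h) factor enforces continuity at z_lim
    return shape * (1.0 + z_lim)**(alpha_l - alpha_h) * (1.0 + z)**alpha_h
```

The matching factor guarantees the two branches join continuously at $z_{\rm lim}$, so the fitted amplitude is shared between the low- and high-redshift regimes.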
At high redshift we only have an upper limit on the evolution parameter, but can at least show that the increase in space density stops, or turns over. Interestingly, this redshift cut-off is almost identical to that found by \citet{wil01} for their high-luminosity objects, $z_{\rm cut-off}=1.91\pm0.16$. \begin{figure} \begin{center} \includegraphics[width=0.38\textwidth,angle=-90]{fig7.eps} \caption{ The evolution of the amplitude of the RLF from $z\sim 3.5$. The solid points use the EMP model, the open points W08.} \end{center} \label{fig:LF2} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{fig8.eps} \caption{ The results of our MCMC fitting of our RLF data. Blue and red lines show the 1 and 2$\sigma$ marginalised constraints respectively. We constrain all but the high-redshift evolution parameter, for which we can only obtain an upper limit.} \end{center} \label{fig:mcmc1} \end{figure} To compare further with \citet{wil01} we fitted our data using a parametrisation based on their models. Their model `C' is split into two populations roughly separated at $L=10^{26}$\,W/Hz. We fit for $(1+z)^\alpha$ evolution with a redshift cut as in Equation~\ref{equ:pl_ev} for each population. In this fitting we convert from their cosmology to our own to make the fitted values comparable. In the high luminosity regime we have very few quasar--radio source pairs and consequently this part of parameter space is poorly constrained. On the other hand, at low luminosities we find $(1+z)^{2.35\pm0.89}$ evolution to redshift $1.94\pm0.43$, above which we can only estimate an upper limit for the evolution parameter, $\alpha_h<1.3$ (2$\sigma$). This contrasts with their findings of $\alpha_l=3.5$ up to a redshift cutoff at 0.7. \citet{wil01} have a considerably brighter sample than we use here and their redshift cut is imposed by their flux limit.
It may be that our ability to better define $z_{\rm cut}$ explains the discrepancy between the evolution parameters. \subsection{Discussion} Our results are consistent with a model that evolves strongly, $\Phi\propto(1+z)^{3.7}$, to $z\sim 2$, above which the LF either stays constant or falls. Interestingly, this is approximately the same redshift evolution that \citet{wil01} found for their high-luminosity population of sources. The indication is that, rather than having two separate populations with a transition at $L\sim10^{26}$\,W/Hz, the radio LF may be dominated by a single population in the luminosity-redshift regime we are sampling. Recent studies of the RLF have focused on the accretion mechanisms that launch the radio jet and the role that the AGN may play in heating the intergalactic medium. Terminologies differ, but we will refer to high-excitation radio galaxies (HERGs), associated with high accretion rate optical AGN, and low-excitation radio galaxies (LERGs), associated with substantially lower accretion rates via advection-dominated accretion flows in massive elliptical galaxies. At low redshifts LERGs dominate the LF below $L\sim10^{26}$\,W/Hz \citep{hec07,b+h12}. At these fainter luminosities and lower redshifts the LF has been shown to evolve only slowly \citep{c+j04,sad07,smo09,mca13}. However, there is evidence that the HERG population evolves considerably more strongly than the LERGs \citep{wil01,bes14}, potentially becoming the dominant population in the luminosity range we sample around $z\sim1$. The 3\,mJy flux limit we impose allows us to sample luminosities of $10^{26}$\,W/Hz up to $z=2.5$. However, the strong evolution of the LF coupled with the increased comoving volume at high redshift means that we are dominated by sources at $z\sim2$. Assuming the simplistic power law LF from our MCMC fit, less than $5$\,\%\ of our sample is at $z<1$.
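As a check on the quoted depth, the flux-to-luminosity conversion under the paper's flat $(\Omega_{\rm m},\Omega_{\Lambda})=(0.3,0.7)$, $h=0.7$ cosmology can be sketched as below. The spectral index $\alpha=0.8$ used for the K-correction is a conventional value for synchrotron sources, assumed here rather than taken from the text:

```python
import math

C_KM_S, H0 = 2.998e5, 70.0   # speed of light (km/s), Hubble constant (km/s/Mpc)
OM, OL = 0.3, 0.7            # flat LCDM density parameters
MPC_M = 3.086e22             # metres per Mpc

def lum_dist_mpc(z, n=2000):
    """Luminosity distance in Mpc (flat LCDM, trapezoidal comoving integral)."""
    dz = z / n
    inv_e = [1.0 / math.sqrt(OM * (1 + i * dz)**3 + OL) for i in range(n + 1)]
    d_c = (C_KM_S / H0) * dz * (sum(inv_e) - 0.5 * (inv_e[0] + inv_e[-1]))
    return (1.0 + z) * d_c

def radio_lum_w_hz(s_jy, z, alpha=0.8):
    """Rest-frame 1.4 GHz luminosity (W/Hz) from flux density (Jy),
    K-corrected assuming S_nu ~ nu^-alpha."""
    d_l = lum_dist_mpc(z) * MPC_M          # metres
    return 4.0 * math.pi * d_l**2 * s_jy * 1e-26 * (1.0 + z)**(alpha - 1.0)

L_lim = radio_lum_w_hz(3e-3, 2.5)   # the 3 mJy limit at z = 2.5: ~1e26 W/Hz
```

Running this recovers a limiting luminosity of order $10^{26}$\,W/Hz at $z=2.5$, consistent with the statement in the text.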
The indication is that our signal is dominated by HERGs, and hence it may be unsurprising that we find almost identical evolution parameters to the factor of $\sim50$ brighter \citet{wil01} sample. \citet{wad01} and \citet{rig11} found that the fainter end of the RLF peaked in density at lower redshift. Due to the nature of our analysis we are always dominated by the radio sources close to our flux limit. Hence we cannot split the sample into luminosity bins to compare across a range of redshifts. We find a redshift cutoff at $z=1.95\pm0.22$. At this redshift our 3\,mJy flux limit translates to $\log(L/{\rm W\,Hz^{-1}})=25.77$, and so we can consider the turnover seen in our data to be due to radio sources at or somewhat brighter than this. At these luminosities \citet{rig11} found a redshift cut closer to $z=1$, although our results are consistent within a few sigma. \section{The clustering of radio sources} \label{sec:clust} \begin{figure} \begin{center} \includegraphics[width=0.34\textwidth,angle=-90]{fig9.eps} \caption{ The points with error bars show $r_{0QR}$ as a function of redshift for our sample. The dashed black line gives the implied value of $r_{0RR}$ assuming the empirical values of $r_{0QQ}$ from \citet{ros09}. Grey (red online) dashed and dotted lines show our EMP model values for $r_{0QQ}$ and $r_{0RR}$ respectively.} \end{center} \label{fig:Rr0_z} \end{figure} Reversing what we have done above, we can integrate the \citet{wil01} luminosity function above our flux limit to give $\phi$ in Equation~\ref{equ_sig_sim} and hence constrain the clustering strength between the radio sources and quasars in our sample. Figure~\ref{fig:Rr0_z} shows the measured cross-correlation strength in six redshift bins. While there may be some hint of an increase in clustering strength with $z$, $r_{0QR}$ is consistent with a constant value of $10.4\pm 2.5$\,Mpc over the full redshift range sampled (if poorly constrained at the highest redshifts).
The dashed line in Figure~\ref{fig:Rr0_z} shows the value of $r_{0RR}$ assuming the empirical fit in Figure~\ref{fig:Qr0_z} and $r_{0QR}^2=r_{0QQ}r_{0RR}$. Again, the estimates for $r_{0RR}$ are consistent with a constant value of $15.4$\,Mpc. At lower redshift ($z<1.5$) we find considerably stronger clustering in our sample compared to quasars, more in line with the results for radio galaxies from \citet{me11} and \citet{lin14a}. At higher redshifts both our errors and the clustering strength of quasars increase and we cannot draw a distinction. Our results for $r_{0RR}$ are not consistent with the strong $20-25$\,Mpc values assumed in \citet{wilm08} for FRII sources. This is despite our being dominated by bright $L>10^{26}$\,W sources at high redshift and our sample potentially being dominated by HERGs/FRIIs at all redshifts, as discussed in our RLF analysis. Nonetheless, we find clustering strong enough to indicate that these radio sources lie in some of the most massive haloes at all redshifts we sample ($\sim 10^{14}$\,M$_\odot$ at $z\sim0$ to $\sim 10^{12.5}$\,M$_\odot$ at $z\sim3$). A possible explanation for this would be a later ($z<1.5$) break imposed in the \citet{wilm08} bias/clustering model, combined with our being dominated by fainter FRI/LERG sources at low redshift. Alternatively the clustering strength could depend strongly on luminosity and redshift to contrive to give our results, although this has not been noted before. \section{Summary} \label{sec:sum} \textcolor{black}{We measure the overdensity $\Sigma(z,L)$ of radio sources around spectroscopic quasars and relate this to evolution in the radio source population from $z\sim3.5$ to today. Our key results can be summarised:} {\bf $\Sigma(z,L)$ is measured in redshift/luminosity bins and we find significant evolution with redshift.} This can only be explained by either the RLF or the clustering strength increasing to $z\sim 2$.
{\bf Under some simple models for $r_0(z,L)$ we find strong evolution, $\phi\propto (1+z)^{3.7\pm0.7}$, up to $z=1.9\pm0.2$ above which the evolution declines, although we can only constrain an upper limit.} These evolution parameters are consistent with those found by \citet{wil01} for the brighter radio source population. The indication may be that the same population of HERGs dominates the NVSS at all flux densities above $z\sim1$. {\bf Assuming the \citet{wil01} LF model we find the clustering strength of radio sources to be consistent with a value of $r_{0RR}=15.0 \pm 2.5$\,Mpc.} This is inconsistent with quasars at low redshift and the W08 model for FRII clustering at intermediate redshifts ($1\lesssim z \lesssim 2$). A possible explanation would be the population being dominated by LERGs at low redshift and clustering more like quasars at higher redshift. Regardless, our results show that these radio sources are found in the most massive dark matter haloes at all redshifts we sample. In this work we have demonstrated a technique that exploits a well-defined sample with distance information to constrain the luminosity function and clustering of a sample without it. The next generation of radio surveys will push still deeper beyond the flux limits of the NVSS used here. Despite new wide-field redshift surveys (e.g. EUCLID), the vast majority of the sources detected in these surveys will not have reliable distances. The method presented in this work offers an alternative approach to studying these populations with observational strategies that are already possible and, for the most part, have already been carried out. \section{Acknowledgments} SF and RJ would like to acknowledge SKA South Africa and the NRF for their funding support.
\section{Introduction} \IEEEPARstart{E}{lectric} power systems are shifting towards the use of more green technologies. To effectively integrate the renewable energy resources, energy storage systems, and electric loads into the power systems, they are interfaced with the grid via power electronic converters and are grouped in the form of microgrids (MGs), easing their control and management\cite{Microgrid}. As a key component of modern power systems, dc microgrids have recently become more attractive\cite{DCReviewQobad}. They are compatible with the dc electric nature of renewable energy resources, energy storage systems, and a majority of electric loads. In addition, compared to ac MGs, where control of frequency, phase, reactive power, and power quality are significant challenges, control and management of dc grids are inherently simpler\cite{DCReviewQobad}. In dc MGs, distributed generators (DGs) and loads are connected to the grid via power converters which are either \textit{voltage-controlled} (\textit{grid-forming}) or \textit{current/power-controlled} (\textit{grid-following}). Grid-forming devices adjust the voltage of their point of common coupling (PCC) to follow a given voltage reference. The grid-following devices, on the other hand, follow given current/power references\cite{Carolina2021}. Therefore, in terms of current/power, the grid-forming DGs and loads are \textit{dispatchable} while the grid-following ones are \textit{non-dispatchable} and can be considered as constant current/power loads. In autonomous MGs, a cluster of dispatchable (grid-forming) generators is normally in charge of shaping the desired voltage level; thus, they should control the dc MG in a collaborative effort. A common practice to dispatch the current and to adjust the voltage of grid-forming DGs in a decentralized, communication-free fashion is \textit{droop control}.
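The steady state reached by droop control alone can be sketched with a toy two-DG network. All numbers (droop gains, line and load resistances, 48\,V nominal) are illustrative assumptions; the result shows the voltage deviation and unequal current sharing that a secondary controller is meant to correct.

```python
import numpy as np

# Minimal steady-state sketch of primary droop control, V_i = V_nom - R_D,i I_i,
# for two grid-forming DGs feeding one resistive load through unequal line
# resistances. All parameter values are illustrative, not from the paper.
V_NOM = 48.0                   # nominal voltage [V]
R_D   = np.array([1.0, 1.0])   # droop coefficients [Ohm]
R_LN  = np.array([0.3, 0.8])   # line resistances [Ohm] (asymmetric feeders)
R_LOAD = 4.0                   # load resistance [Ohm]

g = 1.0 / (R_D + R_LN)                        # source conductances seen by the bus
v_bus = V_NOM * g.sum() / (1.0 / R_LOAD + g.sum())
i_dg = (V_NOM - v_bus) * g                    # DG output currents

print(f"bus voltage : {v_bus:.2f} V (below nominal)")
print(f"DG currents : {i_dg.round(3)} A (unequal despite equal droop gains)")
```

The bus voltage settles below nominal and the currents split according to the feeder impedances rather than any dispatch objective, which is precisely the shortcoming the secondary correction term addresses.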
Despite its simple and robust functionality, this primary controller cannot guarantee desired current-sharing and voltage formation between the DGs\cite{Han2019Review}. To address these shortcomings, the droop characteristic can be corrected by a secondary controller exploiting data exchange either between the DGs and a central control unit or only between the DGs. In the former case, the controller is known as \textit{centralized} secondary control, which introduces a single point of failure to the system and requires a complex communication network between the DGs and the central control unit. Therefore, the distributed control techniques, using neighbor-to-neighbor inter-DG data transmissions, are preferred to the centralized ones\cite{MolzahnReview}. \subsection{Existing Literature and Research Gap} Distributed control of dc MGs has already been addressed in many works. A consensus-based proportional current-sharing strategy is proposed in\cite{Nasirian2015} where a dynamic consensus-based estimator is additionally employed to keep the average voltage at the nominal value. To reduce the communication burden and to reach faster convergence under this controller, it has been modified to event-triggered and finite-time versions in\cite{Sahoo2018ET} and \cite{Sahoo2019FT}, respectively. In\cite{Peng2020Opt}, a distributed optimal control scheme is proposed under which the DGs can achieve economic current-sharing. Therein, to overcome the initialization and noise robustness problems related to dynamic consensus-based estimation, a modified dynamic consensus-based average voltage observer is used which determines the voltage reference of converters so that the DG currents are shared properly. A somewhat similar control strategy to\cite{Nasirian2015}, but with event-triggered communications, is proposed in\cite{Peng2020ET} under which only the DGs' current information is communicated among them.
In\cite{Renke2018}, a distributed nonlinear controller is proposed for dc MGs which, instead of a droop controller, tunes the DGs' voltages so that their currents are shared proportionally. To bound the DG voltages within a reasonable range and to guarantee current-sharing among them \textit{to a certain degree}, in\cite{Renke2019}, a containment-consensus-based controller is proposed for dc MGs. A very similar containment-based controller, but with finite-time convergence, is also proposed in\cite{Sahoo2018CFT}. In the above-mentioned works, either the dynamics of the electric network are not taken into account or only a simplified linear algebraic representation of the grid is considered. Consequently, the controller design and system stability may depend on the parameters of the physical system, which are subject to modelling uncertainties. One way to achieve plug-and-play (PnP) design and operation is to consider the overall system dynamics and to control the system based on energy principles. To do so, in\cite{Cucuzzella2018}, a distributed passivity-based control is proposed for buck-converter-based MGs ensuring proportional current-sharing and average voltage regulation among the DGs. Some similar versions of this controller are presented in\cite{Cucuzzella2019,Trip2018,Trip2019} which demonstrate superior transient system performance. It should be noted that the asymptotic stochastic stability of the controller proposed in\cite{Trip2019} has further been studied in\cite{Silani2020} under varying loads. To reach the desired control objectives of the mentioned works in finite time, some sliding mode controllers have been developed in\cite{Trip2018SM,Cucuzzella2019Robust}.
Moreover, a few consensus-based proportional current-sharing and voltage-balancing controllers, facilitating PnP operations, have been proposed in \cite{Nahata2020ZIP,Nahata2020ZIE} for dc MGs with constant power and exponential loads where the existence and stability of the system equilibria are also studied. Following the same concept and for the sake of PnP functionality of the DGs, in\cite{Sadabadi2021}, a distributed dynamic control strategy is proposed for voltage balancing and proportional current-sharing among parallel buck converters with the same capacity. The aforementioned works are, however, limited to buck converter-based DGs and \textit{proportional current-sharing}, and none of them has considered droop-controlled DGs and their \textit{economic current dispatch}. \subsection{Contributions} Motivated by the above literature review, a distributed secondary control strategy for dc MGs with ZIP loads is proposed herein with the following notable features. First, in the modeling of the power system, the dynamics of transmission lines and shunt capacitors are considered, loads (and also current/power-controlled DGs) are modeled as constant-impedance-current-power (ZIP) loads, and the generators are characterized by droop-based grid-forming DGs, encompassing various types of interfacing converters. It is shown that the incremental model of the droop-based MG admits a port-Hamiltonian (pH) representation\cite{Ajan}, and its passive output is defined. Second, drawing inspiration from the Control by Interconnection (CbI) technique of pH systems\cite{OrtegaCBI}, a distributed consensus-based secondary controller is proposed which drives the MG to an equilibrium point where \textit{i)} the DGs share optimal currents, and \textit{ii)} their weighted-average voltage is the nominal voltage. The voltage weightings are directly related to the coefficients of the DGs' cost functions and not to the electric network and loads.
Third, regional asymptotic stability of the system with ZIP loads is demonstrated, and it is shown that the system is globally asymptotically stable in the absence of constant power loads (CPLs). Finally, equilibrium analysis is conducted based on the concepts of economic dispatch and graph theory. The rest of this paper is structured as follows. The MG system modeling and the control aims are formulated in Section~II. Section~III presents the proposed controller and the system stability and equilibrium analyses. The case studies and simulation results are given in Section~IV. Finally, Section~V concludes the paper and discusses future research directions. Throughout the paper, $\mathbb{R}^{n\times m}$ and $\mathbb{R}^{n}$ stand for the set of $n\times m$ real matrices and $n\times 1$ real vectors, respectively. $\mathrm{diag}\{x_i\}$ indicates a diagonal matrix with diagonal entries $x_i$. $\mathrm{col}\{x_i\}$ denotes a column vector with entries $x_i$. $\mathcal{I}$ is an identity matrix with appropriate dimensions. $\mathbf{0}$ and $\mathbf{1}$ are all-zero and all-one vectors or matrices of appropriate dimensions. The transpose of a matrix/vector $\mathbf{z}$ is given by $\mathbf{z}^\top$. Given the scalar $x$ or the vector $\mathbf{x}$, $\bar{x}$ and $\bar{\mathbf{x}}$ denote their values at the equilibrium point, and $\tilde{x}=x-\bar{x}$ and $\tilde{\mathbf{x}}=\mathbf{x}-\bar{\mathbf{x}}$. \section{Microgrid Modeling and Control Objectives} \subsection{Electric Network, Generators, and ZIP Load Models} Let $\mathcal{N}_e$, $\mathcal{E}_e$, and $\mathcal{G}_e$, with the cardinalities $n_e^\mathcal{N}$, $n_e^\mathcal{E}$, and $n_e^\mathcal{G}$, be the sets of buses, transmission lines, and grid-forming (voltage-controlled) generators, respectively.
Suppose that the transmission lines are modeled by serial resistor-inductor pairs, the buses are modeled by shunt capacitors and ZIP loads, and each generator is modeled by a controllable voltage source which is connected to the grid via a transmission line (See Fig. 1). \begin{figure} \centering \begin{circuitikz}[american,scale=0.69,bigAmp/.style={amp, bipoles/length=1cm}] \ctikzset{bipoles/length=.69cm} \scriptsize \draw (0,2) to[cV, v=$V_i$,] (0,.5) to[short](0,0); \draw (0,2)to[short,i=$I_i^{\mathcal{G}_e}$](1,2) to[R=$R_i^{\mathcal{G}_e}$] (2,2) to[L=$L_i^{\mathcal{G}_e}$] (3,2) to[short,-*](3.5,2); \draw (0,0)to[short](3.5,0) to[C,l=$C_k^{\mathcal{N}_e}$,v<=$V_k^{\mathcal{N}_e}$] (3.5,2)to[short,-*](5,2)to[I,i>_=$I_k^\text{L}$](5,0)to[short](3.5,0); \draw (5,2)to[short,i=$I_j^{\mathcal{E}_e}$](6,2)to[R=$R_j^{\mathcal{E}_e}$] (7,2) to[L=$L_j^{\mathcal{E}_e}$] (8,2); \draw(6,.5)to[short,i=$ $](5,2); \draw (6,.5)to[R] (7,.5) to[L] (8,.5); \draw(5,0)to[short](8,0); \draw[dotted] (6.25,1.6)to[short,l=$\forall j\in\mathcal{E}_e$](6.25,.9); \draw[dotted] (7.75,1.6)to[short](7.75,.9); \draw[fill=gray!15] (8,-.2)rectangle(10,2.2)node[midway]{Rest of MG}; \draw[draw=none,fill=red!5] (-2.2,2.8)rectangle(1.2,4.2); \draw[-latex] (-1.4,1.25) -- (-.4,1.25)node[midway,above]{$V_i^\text{ref}$}; \draw[-latex] (-1,3.5)--(-1.4,3.5)node[midway,below]{$-$}; \draw[-latex] (-1.5,3.4)--(-1.5,1.35); \draw[-latex] (-2.6,1.25) -- (-1.6,1.25)node[near start,above]{$V_\text{nom}$}; \draw[dashed,blue] (-1.3,.8)rectangle(1.3,1.85); \node[blue] at (0,.53) {Converter Dynamics}; \node[blue] at (0,.2) {\& Internal Controllers}; \draw (0,3.5) to[bigAmp](-1,3.5); \node at (-.35,3.5){$R_i^D$}; \draw (-1.5,1.25) circle (0.1)node{+}; \draw (-1.5,3.5) circle (0.1)node{+}; \draw[-latex] (-1.5,4.2)--(-1.5,3.6)node[near end,left]{$u_i$}; \draw (0.85,2) ellipse (.07 and .16); \draw (0.85,2.16)--(0.85,3.5); \draw[-latex] (0.85,3.5)--(0,3.5)node[very near start,above]{$I_i^{\mathcal{G}_e}$}; 
\end{circuitikz} \caption{A droop-based DG connected to a microgrid with ZIP load.} \end{figure} The described electric network can be modeled as two graphs $\mathcal{M}_e$ and $\mathcal{M}_e^\mathcal{G}$ where the buses and transmission lines play the roles of their nodes and edges, respectively. Consider the graph $\mathcal{M}_e=(\mathcal{N}_e,\mathcal{E}_e,\mathcal{B}_e)$ where $\mathcal{N}_e=\{1,\cdots,n_e^\mathcal{N}\}$, $\mathcal{E}_e=\{1,\cdots,n_e^\mathcal{E}\}$, and $\mathcal{B}_e=[b_{kj}]\in \mathbb{R}^{n_e^\mathcal{N}\times n_e^\mathcal{E}}$ are its node set, edge set, and incidence matrix, respectively. Similarly, the graph $\mathcal{M}_e^\mathcal{G}=(\mathcal{N}_e,\mathcal{G}_e,\mathcal{B}_e^\mathcal{G})$ can be defined with the same node set but different edge set $\mathcal{G}_e=\{1,\cdots,n_e^\mathcal{G}\}$ and incidence matrix $\mathcal{B}_e^\mathcal{G}=[b_{ki}^{\mathcal{G}_e}]\in \mathbb{R}^{n_e^\mathcal{N}\times n_e^\mathcal{G}}$. An \textit{incidence matrix} describes the network graph topology by determining the connections between the bus voltages and line currents. For the electric network, one should first assign an \textit{arbitrary} current-flow direction to every line (edge); if the current of the $j$th line enters node $k$, then $b_{kj}=1$; if it leaves node $k$, then $b_{kj}=-1$; otherwise, $b_{kj}=0$. Similarly, if the $i$th DG injects current to bus $k$ via an output connector, then $b_{ki}^{\mathcal{G}_e}=1$; otherwise, $b_{ki}^{\mathcal{G}_e}=0$. Note that in this work, the generators are assumed to only inject current to the loads and network and not to absorb it, i.e., $b_{ki}^{\mathcal{G}_e}=-1$ is not considered. According to Fig. 1 and based on the system incidence matrices, the dynamics of the droop-based microgrid system are as follows.
\begin{IEEEeqnarray}{rCl} L_i^{\mathcal{G}_e}\dot{I}_i^{\mathcal{G}_e}&=&V_i-{\sum}_kb_{ki}^{\mathcal{G}_e}V_k^{\mathcal{N}_e}-R_i^{\mathcal{G}_e}I_i^{\mathcal{G}_e},\IEEEyesnumber\IEEEyessubnumber\label{e1a}\\ L_j^{\mathcal{E}_e}\dot{I}_j^{\mathcal{E}_e}&=&-{\sum}_{k} b_{kj}V_k^{\mathcal{N}_e}-R_j^{\mathcal{E}_e}I_j^{\mathcal{E}_e},\IEEEyessubnumber\label{e1b}\\ C_k^{\mathcal{N}_e}\dot{V}_k^{\mathcal{N}_e}&=&{\sum}_{j}b_{kj}I_j^{\mathcal{E}_e}+{\sum}_{i}b^{\mathcal{G}_e}_{ki}I_i^{\mathcal{G}_e}-I_k^\text{L},\IEEEyessubnumber\label{e1c}\\ I_k^\text{L}&=&G_k^\text{cte}V_k^{\mathcal{N}_e}+I_k^\text{cte}+P_k^\text{cte}/V_k^{\mathcal{N}_e},\IEEEyessubnumber\label{e1d}\\ V_i&=&V_i^\text{ref}=V_\text{nom}-R_i^DI_i^{\mathcal{G}_e}+u_i,\IEEEyessubnumber\label{e1e} \end{IEEEeqnarray} where $L_i^{\mathcal{G}_e}$, $R_i^{\mathcal{G}_e}$, and $I_i^{\mathcal{G}_e}, \forall i\in\mathcal{G}_e$ are inductance, resistance, and current of $i$th generator transmission line; $L_j^{\mathcal{E}_e}$, $R_j^{\mathcal{E}_e}$, and ${I}_j^{\mathcal{E}_e}, \forall j\in\mathcal{E}_e$ are $j$th line inductance, resistance, and current; $C_k^{\mathcal{N}_e}$, $I_k^\text{L}$, and $V_k^{\mathcal{N}_e},\forall k\in \mathcal{N}_e$ are capacitance, load current, and voltage at bus $k$; $G_k^\text{cte}\geq 0$, $I_k^\text{cte}$, and $P_k^\text{cte}$ are respectively constant conductance, current, and power values of the ZIP load at bus $k$; $V_\text{nom}$, $V_i$, and $V_i^\text{ref}$ are nominal voltage and $i$th DG voltage and its reference value, respectively; $R_i^D$ and $u_i$ are respectively the droop coefficient and correction term (input) of $i$th generator. There are various types of internal current and/or voltage controllers for converters which are normally designed to be very fast. Hence, in secondary control and optimization design and studies, the following assumptions are usually required. 
\textit{Assumption 1:} The grid-forming (voltage-controlled) generators can be modeled as controllable voltage sources so that $V_i=V_i^\text{ref}$. Therefore, considering the well-known droop equation the grid-forming units are characterized by the algebraic relationship $V_i=V_i^\text{ref}=V_\text{nom}-R_i^DI_i^{\mathcal{G}_e}+u_i$. \textit{Assumption 2:} The grid-following (current-/power- controlled) converters are considered as negative constant current/power loads in the ZIP load model. Let us define the following global matrices and vectors. $\mathbf{L}_{\mathcal{G}_e}=\mathrm{diag}\{L_i^{\mathcal{G}_e}\}\in\mathbb{R}^{n_e^\mathcal{G}\times n_e^\mathcal{G}}$, $\mathbf{R}_{\mathcal{G}_e}=\mathrm{diag}\{R_i^{\mathcal{G}_e}\}\in\mathbb{R}^{n_e^\mathcal{G}\times n_e^\mathcal{G}}$, $\mathbf{L}_{\mathcal{E}_e}=\mathrm{diag}\{L_j^{\mathcal{E}_e}\}\in\mathbb{R}^{n_e^\mathcal{E}\times n_e^\mathcal{E}}$, $\mathbf{R}_{\mathcal{E}_e}=\mathrm{diag}\{R_j^{\mathcal{E}_e}\}\in\mathbb{R}^{n_e^\mathcal{E}\times n_e^\mathcal{E}}$, $\mathbf{C}_{\mathcal{N}_e}=\mathrm{diag}\{C_k^{\mathcal{N}_e}\}\in\mathbb{R}^{n_e^\mathcal{N}\times n_e^\mathcal{N}}$, $\mathbf{P}_\text{cte}=\mathrm{col}\{P_k^\text{cte}\}\in\mathbb{R}^{n_e^\mathcal{N}}$, $\mathbf{G}_\text{cte}=\mathrm{diag}\{G_k^\text{cte}\}\in\mathbb{R}^{n_e^\mathcal{N}\times n_e^\mathcal{N}}$, $\mathbf{R}_D=\mathrm{diag}\{R_i^D\}\in\mathbb{R}^{n_e^\mathcal{G}\times n_e^\mathcal{G}}$, $\mathbf{I}_\text{cte}=\mathrm{col}\{I_k^\text{cte}\}\in\mathbb{R}^{n_e^\mathcal{N}}$, $\mathbf{g}_{\mathcal{N}_e}(\mathbf{q}_{\mathcal{N}_e})=\mathrm{diag}\{-C_k^{\mathcal{N}_e}/V_k^{\mathcal{N}_e}\}\in\mathbb{R}^{n_e^\mathcal{N}\times n_e^\mathcal{N}}$, $\mathbf{I}_{\mathcal{G}_e}=\mathrm{col}\{I_i^{\mathcal{G}_e}\}\in\mathbb{R}^{n_e^\mathcal{G}}$, $\mathbf{I}_{\mathcal{E}_e}=\mathrm{col}\{I_j^{\mathcal{E}_e}\}\in\mathbb{R}^{n_e^\mathcal{E}}$, $\mathbf{V}_{\mathcal{N}_e}=\mathrm{col}\{V_k^{\mathcal{N}_e}\}\in\mathbb{R}^{n_e^\mathcal{N}}$, and 
$\mathbf{u}=\mathrm{col}\{u_i\}\in\mathbb{R}^{n_e^\mathcal{G}}$. Now, with the Hamiltonian $H(\mathbf{x})=0.5\mathbf{x}^\top\mathbf{Q}\mathbf{x}$ where \begin{IEEEeqnarray}{c} \mathbf{Q}=\begin{bmatrix} \mathbf{L}_{\mathcal{G}_e}^{-1} &\mathbf{0}&\mathbf{0}\\ \mathbf{0} & \mathbf{L}_{\mathcal{E}_e}^{-1}&\mathbf{0}\\ \mathbf{0} &\mathbf{0} & \mathbf{C}_{\mathcal{N}_e}^{-1} \end{bmatrix},\mathbf{x}=\begin{bmatrix}\boldsymbol{\phi}_{\mathcal{G}_e}\\\boldsymbol{\phi}_{\mathcal{E}_e}\\\mathbf{q}_{\mathcal{N}_e}\end{bmatrix}=\mathbf{Q}^{-1}\begin{bmatrix}\mathbf{I}_{\mathcal{G}_e}\\\mathbf{I}_{\mathcal{E}_e}\\\mathbf{V}_{\mathcal{N}_e}\end{bmatrix},\IEEEnonumber \end{IEEEeqnarray} one can write the system in the following form. \begin{IEEEeqnarray}{c} \Sigma :\begin{cases} \dot{\mathbf{x}} = \mathbf{F}\nabla H(\mathbf{x})+\mathbf{g}_P(\mathbf{x})\mathbf{P}_\text{cte}+\mathbf{g}\mathbf{u}+\mathbf{E}\\ \mathbf{y}=\mathbf{g}^\top\nabla H(\mathbf{x}) \end{cases},\IEEEyesnumber \end{IEEEeqnarray} \begin{IEEEeqnarray}{c} \mathbf{F} = \mathbf{J}-\mathbf{R}=\begin{bmatrix} -(\mathbf{R}_{\mathcal{G}_e}+\mathbf{R}_D) &\mathbf{0}&-{\mathcal{B}_e^\mathcal{G}}^\top\\ \mathbf{0} & -\mathbf{R}_{\mathcal{E}_e}&-\mathcal{B}_e^\top\\ \mathcal{B}_e^\mathcal{G}&\mathcal{B}_e & -\mathbf{G}_\text{cte} \end{bmatrix},\IEEEnonumber\\ \mathbf{g} = \begin{bmatrix} \mathcal{I} \\\mathbf{0}\\\mathbf{0}\end{bmatrix},\mathbf{g}_P(\mathbf{x}) = \begin{bmatrix}\mathbf{0} \\ \mathbf{0} \\ \mathbf{g}_{\mathcal{N}_e}(\mathbf{q}_{\mathcal{N}_e})\end{bmatrix},\mathbf{E} = \begin{bmatrix} \mathbf{1}V_\text{nom} \\\mathbf{0}\\-\mathbf{I}_\text{cte}\end{bmatrix};\IEEEnonumber \end{IEEEeqnarray} where $\mathcal{B}_e^\mathcal{G}$ and $\mathcal{B}_e$ are the incidence matrices defined in the preamble of this subsection; $\mathbf{J}=-\mathbf{J}^\top=0.5[\mathbf{F}-\mathbf{F}^\top]$ and $\mathbf{R}=\mathbf{R}^\top=-0.5[\mathbf{F}+\mathbf{F}^\top]$ are the skew-symmetric and symmetric components of $\mathbf{F}$,
respectively. \textit{Assumption 3:} The system $\Sigma$ (2) has a unique equilibrium point $\bar{\mathbf{x}}$. Moreover, $\bar{\mathbf{u}}$ (resp. $\bar{\mathbf{y}}$) are the \textit{equilibrium control} (resp. \textit{equilibrium output}) of (2) at the equilibrium point where \begin{IEEEeqnarray}{c} \begin{cases} \mathbf{0} = \mathbf{F}\nabla H(\bar{\mathbf{x}})+\mathbf{g}_P(\bar{\mathbf{x}})\mathbf{P}_\text{cte}+\mathbf{g}\bar{\mathbf{u}}+\mathbf{E}\\ \bar{\mathbf{y}}=\mathbf{g}^\top\nabla H(\bar{\mathbf{x}}) \end{cases}.\IEEEnonumber \end{IEEEeqnarray} The \textit{incremental model} of the system $\Sigma$ for $\tilde{\mathbf{x}}=\mathbf{x}-\bar{\mathbf{x}}$ and $\tilde{\mathbf{u}}=\mathbf{u}-\bar{\mathbf{u}}$ can then be written as the PH system below. \begin{IEEEeqnarray}{c} \Tilde{\Sigma} : \begin{cases} \dot{\tilde{\mathbf{x}}} = [\mathbf{J}-\tilde{\mathbf{R}}(\tilde{\mathbf{x}})]\nabla H(\tilde{\mathbf{x}})+\mathbf{g}\tilde{\mathbf{u}}\\ \tilde{\mathbf{y}}=\mathbf{g}^\top\nabla H(\tilde{\mathbf{x}}) \end{cases},\IEEEyesnumber\label{e8}\\ \tilde{\mathbf{R}}(\tilde{\mathbf{x}}) = \begin{bmatrix} \mathbf{R}_{\mathcal{G}_e}+\mathbf{R}_D &\mathbf{0}&\mathbf{0}\\ \mathbf{0} & \mathbf{R}_{\mathcal{E}_e}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}& \mathbf{G}_\text{cte}-\mathbf{G}_P(\tilde{\mathbf{q}}_{\mathcal{N}_e}) \end{bmatrix},\IEEEnonumber \end{IEEEeqnarray} where $\mathbf{G}_P(\tilde{\mathbf{q}}_{\mathcal{N}_e})=\mathrm{diag}_k\{G_k^P(\tilde{q}_k^{\mathcal{N}_e})\}$ with $G_k^P(\tilde{q}_k^{\mathcal{N}_e})=P_k^\text{cte}(C_k^{\mathcal{N}_e})^2/[\bar{q}_k^{\mathcal{N}_e}(\tilde{q}_k^{\mathcal{N}_e}+\bar{q}_k^{\mathcal{N}_e})]$. \textit{Proposition 1:} With the storage function $H(\tilde{\mathbf{x}})=0.5\tilde{\mathbf{x}}^\top\mathbf{Q}\tilde{\mathbf{x}}$, and the passive output $\tilde{\mathbf{y}}$ with respect to $\tilde{\mathbf{u}}$, the system $\Tilde{\Sigma}$ (3) is passive in the following domain. 
\begin{IEEEeqnarray}{rCl} \mathbb{D}&=&\{\tilde{\mathbf{x}}\in\mathbb{R}^{n_e^\mathcal{G}+n_e^\mathcal{E}+n_e^\mathcal{N}}:G_k^\text{cte}>\frac{P_k^\text{cte}(C_k^{\mathcal{N}_e})^2}{\bar{q}_k^{\mathcal{N}_e}(\tilde{q}_k^{\mathcal{N}_e}+\bar{q}_k^{\mathcal{N}_e})}\}.\IEEEyesnumber \end{IEEEeqnarray} \textit{Proof:} Since $\mathbf{J}=-\mathbf{J}^\top$, the derivative of the storage function along the trajectories of (3) is \begin{IEEEeqnarray}{c} \dot{H}(\tilde{\mathbf{x}})=-(\nabla H(\tilde{\mathbf{x}}))^\top\tilde{\mathbf{R}}(\tilde{\mathbf{x}})\nabla H(\tilde{\mathbf{x}})+\tilde{\mathbf{y}}^\top\tilde{\mathbf{u}}.\IEEEyesnumber \end{IEEEeqnarray} On the other hand, the matrix $\tilde{\mathbf{R}}(\tilde{\mathbf{x}})$ is positive definite for all $\tilde{\mathbf{x}}\in\mathbb{D}$. Therefore, the system $\Tilde{\Sigma}$ (3) is passive with the given storage function\cite{Romeo2001}. \subsection{Economic Dispatch and Near-Nominal Voltage Formation} Let $\mathcal{C}_i(I_i^{\mathcal{G}_e})=\alpha_i (I_i^{\mathcal{G}_e})^2+\beta_i I_i^{\mathcal{G}_e}+\gamma_i$ be the $i$th generator's cost function, where $\alpha_i$, $\beta_i$, and $\gamma_i$ are its parameters. If $I_\text{demand}$ is the total current demand in the power network, then the economic current dispatch problem can be written as the following optimization problem. \begin{IEEEeqnarray}{c} \min{\sum}_{i\in\mathcal{G}_e}\mathcal{C}_i(I_i^{\mathcal{G}_e}),\quad\text{s.t.}\quad{\sum}_{i\in\mathcal{G}_e}I_i^{\mathcal{G}_e}=I_\text{demand}.\IEEEnonumber \end{IEEEeqnarray} This optimization problem can be solved by the Lagrangian method with the following Lagrangian function\cite{Boyd}. \begin{IEEEeqnarray}{rCl} L(\mathbf{I}_{\mathcal{G}_e},\lambda)&=&{\sum}_{i\in\mathcal{G}_e}\mathcal{C}_i(I_i^{\mathcal{G}_e})+\lambda(I_\text{demand}-{\sum}_{i\in\mathcal{G}_e}I_i^{\mathcal{G}_e}),\IEEEnonumber \end{IEEEeqnarray} where $\lambda$ is the \textit{dual variable} or \textit{Lagrange multiplier}.
The primal problem is convex; hence, if Slater's condition is satisfied, then the Karush-Kuhn-Tucker (KKT) conditions provide necessary and sufficient conditions for primal-dual optimality as follows\cite{Boyd}. \begin{IEEEeqnarray}{lCl} \text{Primal feasibility:}&\quad&\partial L/\partial \lambda=0,\IEEEnonumber\\ \text{Stationary condition:}&\quad&\partial L/\partial I_i^{\mathcal{G}_e}=0,\forall i\in\mathcal{G}_e.\IEEEnonumber \end{IEEEeqnarray} This implies that, considering a feasible equality constraint in the problem, the KKT optimality conditions boil down to the stationary condition\cite{Boyd} \begin{IEEEeqnarray}{c} {\lim}_{t\rightarrow \infty}\lambda_i=\lambda_j=\lambda_\text{opt} \end{IEEEeqnarray} where $\lambda_i=\partial\mathcal{C}_i/\partial I_i^{\mathcal{G}_e}=2\alpha_iI_i^{\mathcal{G}_e}+\beta_i$ is the incremental cost (Lagrange multiplier) of the $i$th DG, and $\lambda_\text{opt}$ is its optimal value. This condition is known as the equal incremental cost (EIC) criterion\cite{Wood2013}. Since current sharing in power networks depends on the bus-voltage differences and not on the absolute voltage values, theoretically speaking, the above-mentioned optimality condition can be satisfied at many voltage levels; i.e., $\lambda_\text{opt}$ can take various values depending on the weighted average of the voltages. However, in practice the voltages must be as close as possible to the network's nominal voltage. Therefore, the controller should also guarantee a near-nominal voltage formation, which can be formulated as \begin{IEEEeqnarray}{rCl} \lim_{t\rightarrow \infty} {\sum}_{i\in\mathcal{G}_e} w_iV_i&=&V_\text{nom}{\sum}_{i\in\mathcal{G}_e} w_i, \end{IEEEeqnarray} where $w_i>0,\forall i\in\mathcal{G}_e$ are voltage weightings which are defined later.
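For quadratic costs, the EIC condition (6) together with the demand constraint has a closed form: setting $\lambda=2\alpha_i I_i^{\mathcal{G}_e}+\beta_i$ for all $i$ and enforcing $\sum_i I_i^{\mathcal{G}_e}=I_\text{demand}$ yields $\lambda_\text{opt}$ directly. A minimal sketch with made-up cost coefficients:

```python
import numpy as np

# Closed-form EIC solution of the quadratic economic current dispatch:
# minimise sum_i (alpha_i I_i^2 + beta_i I_i + gamma_i) subject to
# sum_i I_i = I_demand. At the optimum the incremental costs
# lambda_i = 2 alpha_i I_i + beta_i are equal. Coefficients are illustrative.
alpha = np.array([0.10, 0.15, 0.20])
beta  = np.array([1.0, 0.8, 1.2])
I_demand = 30.0

# lambda_opt = (I_demand + sum beta_i/(2 alpha_i)) / sum 1/(2 alpha_i)
lam = (I_demand + (beta / (2 * alpha)).sum()) / (1.0 / (2 * alpha)).sum()
I_opt = (lam - beta) / (2 * alpha)            # currents satisfying EIC

print("lambda_opt :", round(lam, 4))
print("I_opt      :", I_opt.round(4), " sum =", round(I_opt.sum(), 4))
```

Cheaper units (smaller $\alpha_i$, $\beta_i$) pick up larger shares, while the total exactly meets the demand constraint.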
\textit{Remark 1:} A special choice of the cost function parameters is $\alpha_i=0.5/I_i^\text{rated}$, $\beta_i=\gamma_i=0$, which turns (6) into the equal-current-ratio criterion ($I_i^{\mathcal{G}_e}/I_i^\text{rated}=I_j^{\mathcal{G}_e}/I_j^\text{rated}$), i.e., the proportional current-sharing studied in the literature (see, e.g., \cite{Renke2018,Renke2019,Sahoo2018CFT,Cucuzzella2018,Cucuzzella2019,Trip2018,Trip2019,Silani2020,Trip2018SM,Cucuzzella2019Robust,Nahata2020ZIP,Nahata2020ZIE,Sadabadi2021}). \section{Controller Design, Closed-Loop System Equilibrium, and Stability Analysis} In this section, a distributed controller is proposed for the droop-based MG system to satisfy the control objectives in (6) and (7). The proposed controller relies on both the local and neighborhood measurements of the generators; hence, the generators need to exchange information through a communication network as described next. \subsection{Communication Network Model} A communication network between the generators can be modeled as an undirected graph with the generators and communication links being its nodes and edges, respectively. Consider the graph $\mathcal{M}_c=(\mathcal{N}_c,\mathcal{E}_c,\mathcal{A})$, where $\mathcal{N}_c=\{1,\cdots,n^\mathcal{N}_c\}$, $\mathcal{E}_c\subseteq \mathcal{N}_c\times\mathcal{N}_c$, and $\mathcal{A}=[a_{ij}]\in \mathbb{R}^{n^\mathcal{N}_c\times n^\mathcal{N}_c}$ are its node set, edge set, and adjacency matrix, respectively. If nodes $i$ and $j$ exchange data, then they are neighbors, $(j,i) \in \mathcal{E}_c$, and $a_{ij}=a_{ji}>0$; otherwise, nodes $i$ and $j$ are not neighbors, $(j,i) \notin \mathcal{E}_c$, and $a_{ij}=a_{ji}=0$. Let $N_i=\{j|(j,i)\in \mathcal{E}_c\}$ and $d_i=\sum_{j\in N_i} a_{ij}$ be the neighbor set and in-degree of node $i$, respectively. The Laplacian matrix of $\mathcal{M}_c$ is then $\mathcal{L}=\mathcal{L}^\top\coloneqq\mathcal{D}-\mathcal{A}$, where $\mathcal{D}=\mathrm{diag}\{d_i\}$\cite{Olfati2007}.
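The graph matrices just defined can be assembled directly from an edge list; a minimal sketch for an assumed 4-DG ring with unit weights:

```python
import numpy as np

# Weighted adjacency, degree, and Laplacian matrices of a small undirected
# communication graph, following the definitions above. The 4-node ring
# topology and unit weights a_ij = 1 are illustrative.
n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # ring of 4 DGs

A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0                # a_ij = a_ji > 0 for neighbours
D = np.diag(A.sum(axis=1))                 # in-degrees d_i on the diagonal
Lap = D - A                                # Laplacian  L = D - A

print(Lap)
print("row sums:", Lap.sum(axis=1))        # L 1 = 0: the all-one vector is in ker(L)
```

The zero row sums reflect that $\mathcal{L}\mathbf{1}=\mathbf{0}$, the property that lets consensus-based controllers preserve agreement subspaces.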
\subsection{The Distributed Consensus-Based Control System} The consensus algorithm\cite{Olfati2007} is an effective technique to obtain a distributed solution of the KKT condition in optimization problems (the control objective (6))\cite{MolzahnReview}. Accordingly, we choose the distributed consensus-based integral controller \begin{IEEEeqnarray}{rCl} \dot{x}_i^c&=&k_i^I{\sum}_{j\in N_i}a_{ij}(u_j^c-u_i^c),\IEEEyesnumber\IEEEyessubnumber \end{IEEEeqnarray} where $u_i^c$ is the data shared between the DGs; $x_i^c$ is the controller state; $k_i^I>0$ is the integral gain; $a_{ij}$ is the communication weight between DGs $i$ and $j$, defined in the previous subsection. Let us define $\mathbf{x}_c=\mathrm{col}\{x_i^c\}\in \mathbb{R}^{n_e^\mathcal{G}}$, $\mathbf{u}_c=\mathrm{col}\{u_i^c\}\in \mathbb{R}^{n_e^\mathcal{G}}$, and $\mathbf{k}_I=\mathrm{diag}\{k_i^I\}\in \mathbb{R}^{n_e^\mathcal{G}\times n_e^\mathcal{G}}$. With the Hamiltonian $H_c(\mathbf{x}_c)=0.5\mathbf{x}_c^\top\mathbf{k}_I^{-1}\mathbf{x}_c$, this controller can then be represented as the pH system below. \begin{IEEEeqnarray}{rCl} \Sigma_c :\begin{cases} \dot{\mathbf{x}}_c=\mathbf{g}_c\mathbf{u}_c\\ \mathbf{y}_c=\mathbf{g}_c^\top \nabla H_c(\mathbf{x}_c) \end{cases},\text{ where } \mathbf{g}_c=-\mathbf{k}_I\mathcal{L},\IEEEyessubnumber \end{IEEEeqnarray} in which $\mathcal{L}$ is the Laplacian matrix of the communication network. Now if $\Sigma_c$ (8b) has a feasible equilibrium point, then the incremental model of this linear system can be written as \begin{IEEEeqnarray}{rCl} \Tilde{\Sigma}_c :\begin{cases} \dot{\tilde{\mathbf{x}}}_c=\mathbf{g}_c\tilde{\mathbf{u}}_c\\ \tilde{\mathbf{y}}_c=\mathbf{g}_c^\top \nabla H_c(\tilde{\mathbf{x}}_c) \end{cases}.\IEEEyessubnumber \end{IEEEeqnarray} Therefore, one can write $\dot{H}_c(\tilde{\mathbf{x}}_c)=\tilde{\mathbf{y}}_c^\top\tilde{\mathbf{u}}_c$.
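To illustrate why dynamics of the form (8a) enforce agreement, consider the special case where the shared signal is the controller state itself, $u_i^c=x_i^c$ (an assumption made purely for illustration; in the paper $u_i^c$ is set by the interconnection). A forward-Euler simulation on an assumed 4-node ring with uniform gains then converges to the initial average:

```python
import numpy as np

# Euler simulation of x_dot = -k_I L x, i.e. controller (8a) in the
# illustrative special case u_i^c = x_i^c. For a connected undirected graph
# and uniform gains, the states converge to the average of the initial
# values. Graph, gain, and initial conditions are made up.
n = 4
A = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    A[i, j] = A[j, i] = 1.0
Lap = np.diag(A.sum(axis=1)) - A

k_I, dt = 1.0, 0.01
x = np.array([4.0, -1.0, 2.0, 3.0])        # initial states (mean = 2.0)
for _ in range(5000):                      # integrate to t = 50
    x = x + dt * (-k_I * Lap @ x)

print("consensus value:", x.round(4))      # all entries -> 2.0, the initial mean
```

Since $\mathbf{1}^\top\mathcal{L}=\mathbf{0}$, the sum of the states is invariant along the flow, which is why the agreement value is exactly the initial mean here.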
Hence, with the storage function $H_c(\tilde{\mathbf{x}}_c)$, the control system $\Tilde{\Sigma}_c$ (8c) is also passive (lossless) with the input $\tilde{\mathbf{u}}_c$ and output $\tilde{\mathbf{y}}_c$. \subsection{Control by Interconnection of the Incremental Systems} Now that the incremental models of both the physical and the control systems are represented as PH systems, one can couple them through the following subsystem. \begin{IEEEeqnarray}{c} \Sigma_I :\begin{cases} \begin{bmatrix} \mathbf{u}\\\mathbf{u}_c \end{bmatrix}=\begin{bmatrix} -\mathbf{r} & -\mathbf{w}^{-1}\\(\mathbf{w}^{-1})^\top & \mathbf{0} \end{bmatrix}\begin{bmatrix} \mathbf{y}\\\mathbf{y}_c \end{bmatrix}+\begin{bmatrix} \mathbf{b}\\\mathbf{b}_c \end{bmatrix} \end{cases},\IEEEyesnumber\IEEEyessubnumber \end{IEEEeqnarray} where $\mathbf{b}=\mathrm{col}\{b_i\}$ and $\mathbf{b}_c=\mathrm{col}\{b_i^c\}$ are constant vectors in $\mathbb{R}^{n_e^\mathcal{G}}$, and $\mathbf{r}$ and $\mathbf{w}$ are square matrices belonging to $\mathbb{R}^{n_e^\mathcal{G}\times n_e^\mathcal{G}}$. \textit{Assumption 4:} The systems $\Sigma$ (2) and $\Sigma_c$ (8b) have feasible equilibrium points, which are coupled through the subsystem $\Sigma_I$ (9a) as follows. \begin{IEEEeqnarray}{c} \begin{cases} \begin{bmatrix} \bar{\mathbf{u}}\\\bar{\mathbf{u}}_c \end{bmatrix}=\begin{bmatrix} -\mathbf{r} & -\mathbf{w}^{-1}\\(\mathbf{w}^{-1})^\top & \mathbf{0} \end{bmatrix}\begin{bmatrix} \bar{\mathbf{y}}\\\bar{\mathbf{y}}_c \end{bmatrix}+\begin{bmatrix} \mathbf{b}\\\mathbf{b}_c \end{bmatrix} \end{cases}.\IEEEyessubnumber \end{IEEEeqnarray} If \textit{Assumption 4} holds, then the incremental model of $\Sigma_I$ (9a) can be written as the following lossy \textit{interconnection subsystem} \cite{OrtegaCBI}. 
\begin{IEEEeqnarray}{c} \Tilde{\Sigma}_I :\begin{cases} \begin{bmatrix} \tilde{\mathbf{u}}\\\tilde{\mathbf{u}}_c \end{bmatrix}=\begin{bmatrix} -\mathbf{r} & -\mathbf{w}^{-1}\\(\mathbf{w}^{-1})^\top & \mathbf{0} \end{bmatrix}\begin{bmatrix} \tilde{\mathbf{y}}\\\tilde{\mathbf{y}}_c \end{bmatrix} \end{cases}.\IEEEyessubnumber \end{IEEEeqnarray} Therefore, one has \begin{IEEEeqnarray}{c} \tilde{\mathbf{y}}^\top\tilde{\mathbf{u}}+\tilde{\mathbf{y}}_c^\top\tilde{\mathbf{u}}_c=-\tilde{\mathbf{y}}^\top\mathbf{r}\tilde{\mathbf{y}}.\IEEEyessubnumber \end{IEEEeqnarray} The configuration used for control by interconnection\cite{OrtegaCBI} of the systems is shown in Fig. 2. \begin{figure} \centering \begin{circuitikz} \ctikzset{bipoles/length=.7cm} \normalsize \draw[fill=gray!50] (0,2)rectangle(1,3.5)node[midway]{$\Sigma$}; \draw[fill=gray!50] (7,2)rectangle(8,3.5)node[midway]{$\Sigma_c$}; \draw[dashed,fill=none] (2.1,2)rectangle(5.5,3.5);\draw[white,fill=white] (4,2) circle (0.26)node{\scriptsize };\node at (4,2){$\Sigma_I$}; \node at (1.2,3){\scriptsize$+$};\draw (1.5,3.2)to[short,i>_=$\:$](1,3.2);\draw(2,3.2)to[R] (3,3.2);\draw[fill=white] (1.7,3.2) circle (0.3)node{\scriptsize };\node at (1.55,3.2){\scriptsize$+$};\node at (1.85,3.2){\scriptsize$-$};\draw(3,3.2)--(3.4,3.2)--(3.4,3.075);\node[diamond,draw,minimum width =0.65cm,minimum height =0.65cm] at (3.4,2.75){};\node at (3.4,2.85){\scriptsize$-$};\node at (3.4,2.65){\scriptsize$+$};\draw(3.4,2.425)--(3.4,2.3)--(1,2.3);\node at (1.2,2.5){\scriptsize$-$};\node at (1.2,2.75){\scriptsize$\mathbf{u}$};\node at (2.5,2.9){\scriptsize$\mathbf{r}$};\node at (1.7,2.75){\scriptsize$\mathbf{b}$};\node at (3.9,2.45){\scriptsize$\mathbf{w}^{-1}\mathbf{y}_c$};\node at (1.2,3.4){\scriptsize$\mathbf{y}$}; \node at (6.8,3){\scriptsize$-$};\draw (7,3.2)to[short,i>_=$\:$](6.5,3.2);\draw (6.5,3.2)--(6.2,3.2)--(6.2,3.075);\draw (6.2,2.775) circle (0.3)node{\scriptsize };\draw[-latex](6.2,3)--(6.2,2.525);\draw 
(6.2,2.475)--(6.2,2.3)--(7,2.3);\node at (6.8,2.5){\scriptsize$+$}; \draw (6.2,3.2)--(5,3.2)--(5,3.075);\node[diamond,draw,minimum width =0.65cm,minimum height =0.65cm] at (5,2.75){};\draw[-latex](5,3)--(5,2.525);\draw (5,2.45)--(5,2.3)--(6.2,2.3); \node at (6.8,2.75){\scriptsize$\mathbf{y}_c$};\node at (6.8,3.4){\scriptsize$\mathbf{u}_c$};\node at (5.74,2.75){\scriptsize$\mathbf{b}_c$};\node at (4.35,3.1){\scriptsize$(\mathbf{w}^{-1})^\top\mathbf{y}$}; \draw[fill=gray!15] (0,0)rectangle(1,1.5)node[midway]{$\Tilde{\Sigma}$}; \draw[fill=gray!15] (7,0)rectangle(8,1.5)node[midway]{$\Tilde{\Sigma}_c$}; \draw[dashed,fill=none] (1.5,0)rectangle(6.6,1.5);\draw[white,fill=white] (4,0) circle (0.26)node{\scriptsize };\node at (4,0){$\Tilde{\Sigma}_I$}; \node at (1.2,1){\scriptsize$+$};\draw (1.5,1.2)to[short,i>_=$\:$](1,1.2);\draw(1.4,1.2)to[R] (2.4,1.2);\draw(2.4,1.2)--(3.4,1.2)--(3.4,1.075);\node[diamond,draw,minimum width =0.65cm,minimum height =0.65cm] at (3.4,0.75){};\node at (3.4,0.85){\scriptsize$-$};\node at (3.4,0.65){\scriptsize$+$};\draw(3.4,0.425)--(3.4,0.3)--(1,0.3);\node at (1.2,0.5){\scriptsize$-$};\node at (1.2,0.75){\scriptsize$\Tilde{\mathbf{u}}$};\node at (1.9,0.9){\scriptsize$\mathbf{r}$};\node at (2.65,0.75){\scriptsize$\mathbf{w}^{-1}\Tilde{\mathbf{y}}_c$};\node at (1.2,1.4){\scriptsize$\Tilde{\mathbf{y}}$}; \node at (6.8,1){\scriptsize$-$};\draw (7,1.2)to[short,i>_=$\:$](6.5,1.2);\node at (6.8,0.5){\scriptsize$+$}; \draw (6.5,1.2)--(5,1.2)--(5,1.075);\node[diamond,draw,minimum width =0.65cm,minimum height =0.65cm] at (5,0.75){};\draw[-latex](5,1)--(5,0.525);\draw (5,0.45)--(5,0.3)--(7,0.3); \node at (6.8,0.75){\scriptsize$\Tilde{\mathbf{y}}_c$};\node at (6.8,1.4){\scriptsize$\Tilde{\mathbf{u}}_c$};\node at (5.9,0.75){\scriptsize$(\mathbf{w}^{-1})^\top\Tilde{\mathbf{y}}$}; \node at (1.2,2){\scriptsize (a)};\node at (1.2,0){\scriptsize (b)}; \end{circuitikz} \caption{Block (circuit) diagram of the control by interconnection scheme for both 
non-incremental (a) and incremental (b) system models.} \end{figure} \textit{Proposition 2:} Consider the PH system $\Sigma$ (2) coupled with the controller $\Sigma_c$ (8b) through the interconnection subsystem $\Sigma_I$ (9a) (see Fig. 2). If \textit{Assumption 4} holds and the matrix $\mathbf{R}_D+\mathbf{r}$ is positive semi-definite, then the equilibrium point of the closed-loop system is \textit{asymptotically stable} in the region \begin{IEEEeqnarray}{rCl} \mathbb{S}&=&\{\tilde{\mathbf{x}}_t=[\tilde{\mathbf{x}}^\top,\tilde{\mathbf{x}}_c^\top]^\top:\tilde{\mathbf{x}}\in\mathbb{D},\tilde{\mathbf{x}}_c\in\mathbb{R}^{n_e^\mathcal{G}}\}.\IEEEnonumber \end{IEEEeqnarray} \textit{Proof: } Consider the total storage function $H_t(\tilde{\mathbf{x}}_t)=H(\tilde{\mathbf{x}})+H_c(\tilde{\mathbf{x}}_c)$ for the incremental model of the closed-loop system, which has a minimum at the equilibrium point. Taking its derivative and using (3), (8c), and (9d), one has \begin{IEEEeqnarray}{rCl} \dot{H}_t(\tilde{\mathbf{x}}_t)&=&\dot{H}(\tilde{\mathbf{x}})+\dot{H}_c(\tilde{\mathbf{x}}_c)=-(\nabla H(\tilde{\mathbf{x}}))^\top\mathbf{T}(\tilde{\mathbf{x}}_t)\nabla H(\tilde{\mathbf{x}});\qquad\IEEEyesnumber\\ \mathbf{T}(\tilde{\mathbf{x}}_t) &=& \begin{bmatrix} \mathbf{R}_{\mathcal{G}_e}+\mathbf{R}_D+\mathbf{r} &\mathbf{0}&\mathbf{0}\\ \mathbf{0} & \mathbf{R}_{\mathcal{E}_e}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}& \mathbf{G}_\text{cte}-\mathbf{G}_P(\tilde{\mathbf{q}}_{\mathcal{N}_e}) \end{bmatrix}.\IEEEnonumber \end{IEEEeqnarray} According to (4), $[\mathbf{G}_\text{cte}-\mathbf{G}_P(\tilde{\mathbf{q}}_{\mathcal{N}_e})]$ is positive-definite for all $\tilde{\mathbf{x}}_t\in\mathbb{S}$ if the closed-loop system has a feasible equilibrium point (i.e., \textit{Assumption 4} holds). Moreover, the matrices $\mathbf{R}_{\mathcal{E}_e}$ and $\mathbf{R}_{\mathcal{G}_e}$ are also positive-definite. 
Therefore, if $\mathbf{R}_D+\mathbf{r}$ is positive semi-definite, then $\mathbf{T}(\tilde{\mathbf{x}}_t)$ is positive-definite and $\dot{H}_t\leq0, \forall\tilde{\mathbf{x}}_t\in\mathbb{S}$, which proves that the equilibrium point is \textit{stable} in $\mathbb{S}$ \cite{Khalil}, with Lyapunov function $H_t$. On the other hand, the positive-definiteness of $H_t$ ensures that $\exists \zeta>0$ such that the level set $\Omega_\zeta=\{\tilde{\mathbf{x}}_t\in\mathbb{S}:H_t\leq\zeta\}$ is bounded. Since all of its requirements are satisfied, LaSalle's theorem can be applied \cite{Khalil}. According to LaSalle's theorem, every solution starting in $\Omega_\zeta$ converges to the largest invariant set, say $\mathbb{M}$, in $\mathbb{E}=\{\tilde{\mathbf{x}}_t\in\Omega_\zeta:\dot{H}_t=0\}$; i.e., $\tilde{\mathbf{x}}_t\rightarrow\mathbb{M}\subseteq\mathbb{E}$ as $t\rightarrow\infty$. Since $\mathbf{T}(\tilde{\mathbf{x}}_t)$ is positive-definite $\forall\tilde{\mathbf{x}}_t\in\mathbb{S}$ and $H(\tilde{\mathbf{x}})$ is quadratic, according to (10), one can write $\mathbb{E}=\{\tilde{\mathbf{x}}_t\in\Omega_\zeta:\tilde{\mathbf{x}}=\mathbf{0},\nabla H(\tilde{\mathbf{x}})=\mathbf{0}\}$, implying that $\dot{\tilde{\mathbf{x}}}=\mathbf{0}$. Therefore, by using (3), (8c), and (9c), it is easy to observe that the motion in this invariant set is governed by $\dot{\tilde{\mathbf{x}}}_c=\mathbf{0}, \forall \tilde{\mathbf{x}}_t\in\mathbb{E}$. In other words, the largest invariant set in $\mathbb{E}$ is the equilibrium point; i.e., $\mathbb{M}=\{\tilde{\mathbf{x}}_t\in\Omega_\zeta:\tilde{\mathbf{x}}=\mathbf{0}, \tilde{\mathbf{x}}_c= \mathbf{0}\}$. Therefore, LaSalle's theorem implies \textit{asymptotic stability} of the equilibrium point in $\mathbb{S}$. \hfill $\blacksquare$ \textit{Corollary 1:} Let all the assumptions and conditions of \textit{Propositions 1 and 2} hold. 
Then, if $\mathbf{P}_\text{cte}= \mathbf{0}$, i.e., if there are no constant-power loads in the system, the equilibrium point of the closed-loop system is \textit{globally asymptotically stable}. \textit{Proof:} According to (4), since $G_k^\text{cte}\geq 0$ when $P_k^\text{cte}=0$ and all the conditions of \textit{Proposition 1} hold, one has $\mathbb{D}=\mathbb{R}^{n_e^\mathcal{G}+n_e^\mathcal{E}+n_e^\mathcal{N}}$ and hence $\mathbb{S}=\mathbb{R}^{2n_e^\mathcal{G}+n_e^\mathcal{E}+n_e^\mathcal{N}}$. Moreover, the Lyapunov function $H_t(\tilde{\mathbf{x}}_t)$ is \textit{radially unbounded}, as it is a positive-definite quadratic form. Thus, the equilibrium point is \textit{globally asymptotically stable} if all the assumptions and conditions of \textit{Proposition 2} hold \cite{Khalil}.\hfill $\blacksquare$ \subsection{Equilibrium (Steady-State) Analysis} \textit{Proposition 3:} Let \textit{Assumption 4} hold. Then, if the communication network is connected, the KKT optimality condition in (6) and the near-nominal voltage formation in (7) with $w_i=(2\alpha_i)^{-1}$ are simultaneously achieved at the equilibrium point of the closed-loop system. \textit{Proof:} According to (8b), at the equilibrium point one has $\mathcal{L}\bar{\mathbf{u}}_c=\mathbf{0}$, where we used the fact that $\mathbf{k}_I$ is positive-definite. If the communication network is connected, then $\mathcal{L}$ has a simple zero eigenvalue \cite{Olfati2007}, and therefore every solution of $\mathcal{L}\bar{\mathbf{u}}_c=\mathbf{0}$ has the form $\bar{\mathbf{u}}_c=u_\text{opt}\mathbf{1}$, where $u_\text{opt}$ is the consensus value. 
Thus, according to (9b) one can write \begin{IEEEeqnarray}{rCl} (\mathbf{w}^{-1})^\top\bar{\mathbf{y}}+\mathbf{b}_c&=&u_\text{opt}\mathbf{1}.\IEEEyesnumber\IEEEyessubnumber \end{IEEEeqnarray} Let us define $\boldsymbol{\lambda}=\mathrm{col}\{\lambda_i\}\in \mathbb{R}^{n_e^\mathcal{G}}$, $\boldsymbol{\beta}=\mathrm{col}\{\beta_i\}\in \mathbb{R}^{n_e^\mathcal{G}}$ and $\boldsymbol{\alpha}=\mathrm{diag}\{\alpha_i\}\in \mathbb{R}^{n_e^\mathcal{G}\times n_e^\mathcal{G}}$. The KKT condition (6) can then be written as \begin{IEEEeqnarray}{rCl} \bar{\boldsymbol{\lambda}}&=&2\boldsymbol{\alpha}\bar{\mathbf{y}}+\boldsymbol{\beta}=\lambda_\text{opt}\mathbf{1}.\IEEEyessubnumber \end{IEEEeqnarray} Therefore, if $\mathbf{w}^{-1}=2\boldsymbol{\alpha}$ and $\mathbf{b}_c=\boldsymbol{\beta}$, then one has $u_\text{opt}=\lambda_\text{opt}$ and $\bar{\lambda}_i=\lambda_\text{opt}$. This underlines that the KKT condition is satisfied at the equilibrium point. Let us further define $\mathbf{V}=\mathrm{col}\{V_i\}\in\mathbb{R}^{n_e^\mathcal{G}}$. From (1e) and (9b) one can write $\bar{\mathbf{V}}=\mathbf{1}V_\text{nom}-\mathbf{R}_D\bar{\mathbf{y}}+\bar{\mathbf{u}}$ and $\bar{\mathbf{u}}=-\mathbf{r}\bar{\mathbf{y}}-\mathbf{w}^{-1}\bar{\mathbf{y}}_c+\mathbf{b}$, and hence $\bar{\mathbf{V}}=\mathbf{1}V_\text{nom}-(\mathbf{R}_D+\mathbf{r})\bar{\mathbf{y}}-\mathbf{w}^{-1}\bar{\mathbf{y}}_c+\mathbf{b}$. 
Multiplying this equality by $\mathbf{1}^\top\mathbf{w}$, one has \begin{IEEEeqnarray}{rCl} \mathbf{1}^\top\mathbf{w}\bar{\mathbf{V}}&=&\mathbf{1}^\top\mathbf{w}\mathbf{1}V_\text{nom}-\mathbf{1}^\top\mathbf{w}[(\mathbf{R}_D+\mathbf{r})\bar{\mathbf{y}}-\mathbf{b}]-\mathbf{1}^\top\bar{\mathbf{y}}_c.\qquad\IEEEnonumber \end{IEEEeqnarray} Now, if one selects $\mathbf{r}=-\mathbf{R}_D+k_P\mathbf{w}^{-1}\mathcal{L}\mathbf{w}^{-1}$ and $\mathbf{b}=-k_P\mathbf{w}^{-1}\mathcal{L}\boldsymbol{\beta}$ with $k_P\geq 0$, then, using (8b) and the property $\mathbf{1}^\top\mathcal{L}=\mathbf{0}^\top$ of undirected graphs \cite{Olfati2007}, $\mathbf{1}^\top\mathbf{w}\bar{\mathbf{V}}=\mathbf{1}^\top\mathbf{w}\mathbf{1}V_\text{nom}$ can be concluded, which is equivalent to (7) with $w_i=1/(2\alpha_i)$.\hfill $\blacksquare$ \subsection{Implementation of the Proposed Controller} So far, the proposed controller has been presented in matrix form. In what follows, to clarify its practical implementation, it is reformulated in a scalar format in terms of the required measurements, parameters, and communication data. If $\mathbf{w}=0.5\boldsymbol{\alpha}^{-1}$, $\mathbf{r}=-\mathbf{R}_D+k_P2\boldsymbol{\alpha}\mathcal{L}2\boldsymbol{\alpha}$, $\mathbf{b}_c=\boldsymbol{\beta}$, and $\mathbf{b}=-k_P2\boldsymbol{\alpha}\mathcal{L}\boldsymbol{\beta}$, then, considering (2) and defining $\mathbf{z}_\lambda=\mathrm{col}\{z_i^\lambda\}=-\mathcal{L}\boldsymbol{\lambda}$ and $\mathbf{z}_c=\mathrm{col}\{z_i^c\}=-\mathcal{L}\mathbf{x}_c$, the control system (8b) coupled with the interconnection subsystem (9a) can be written as \begin{IEEEeqnarray}{c} \mathbf{u}=\mathbf{R}_D\mathbf{I}_{\mathcal{G}_e}+2\boldsymbol{\alpha}(k_P\mathbf{z}_\lambda-\mathbf{z}_c),\qquad\dot{\mathbf{x}}_c=\mathbf{k}_I\mathbf{z}_\lambda,\IEEEnonumber \end{IEEEeqnarray} which can be expressed in the following scalar format. 
\begin{IEEEeqnarray}{c} \begin{cases} u_i=R_i^DI_i^{\mathcal{G}_e}+2\alpha_i(k_Pz_i^\lambda-z_i^c)\\ \dot{x}_i^c=k_i^Iz_i^\lambda\\ z_i^\lambda=\sum_{j\in N_i}a_{ij}(\lambda_j-\lambda_i)\\ z_i^c=\sum_{j\in N_i}a_{ij}(x^c_j-x^c_i)\\ \lambda_i=2\alpha_i I_i^{\mathcal{G}_e}+\beta_i \end{cases}. \end{IEEEeqnarray} Fig. 3 depicts a schematic diagram of the proposed controller described in (12). One can see that except for $x_j^c$ and $\lambda_j$, received from the neighboring DGs, the other parameters and variables are locally available for each DG. \begin{figure} \centering \begin{circuitikz}[american,scale=0.69,bigAmp/.style={amp, bipoles/length=1cm}] \ctikzset{bipoles/length=.69cm} \scriptsize \draw[draw=none,fill=green!5] (-2.2,3.5)rectangle(10.2,9); \draw[draw=none,fill=red!5] (-2.2,3.5)rectangle(1.3,4.2); \draw[-latex] (-1.5,4.7)--(-1.5,3.5)node[near end,left]{$u_i$}; \draw (-1.5,4.8) circle (0.1)node{+}; \draw[-latex] (-1,4.8)--(-1.4,4.8); \draw (0,4.8) to[bigAmp](-1,4.8);\node at (-.35,4.8){$R_i^D$}; \draw[-latex] (.6,7.2)--(0,7.2)node[midway,above]{$z_i^\lambda$};\draw[-latex] (-1,7.2)--(-1.4,7.2); \draw (1,6)--(.6,6); \draw (0,7.2) to[bigAmp](-1,7.2);\node at (-.35,7.2){$k_P$}; \draw[-latex] (-1.5,7.1)--(-1.5,6.5);\draw[-latex] (-1.5,5.5)--(-1.5,4.9); \draw (-1.5,6.5) to[bigAmp](-1.5,5.5);\node at (-1.5,6.2){$2\alpha_i$}; \draw (-1.5,7.2) circle (0.1)node{+}; \draw (1,8.4)--(-1.5,8.4)node[near end,above]{$z_i^c$}; \draw[-latex] (-1.5,8.4)--(-1.5,7.3)node[very near end,left]{$-$}; \draw (1,8.1)rectangle(5,8.7)node[midway]{$\sum_{j\in N_i}a_{ij}(x_j^c-x_i^c)$};\draw[-latex](6.15,8.4)--(5,8.4)node[midway,above]{$x_j^c$}; \draw (1,5.7)rectangle(5,6.3)node[midway]{$\sum_{j\in N_i}a_{ij}(\lambda_j-\lambda_i)$};\draw[-latex](6.15,6)--(5,6)node[midway,above]{$\lambda_j$}; \draw (1.5,7.2) to[bigAmp](2.5,7.2);\node at (1.75,7.2){$k_i^I$}; \draw (0.6,6)--(0.6,7.2);\draw[-latex](0.6,7.2)--(1.5,7.2);\draw[-latex](2.5,7.2)--(3.2,7.2); \draw 
(3.2,6.8)rectangle(4,7.6)node[midway]{$\int$}; \draw[-latex](4,7.2)--(6.15,7.2)node[near start,below]{$x^c_i$};\draw[-latex](4.5,7.2)--(4.5,8.1); \draw (0.85,3.5)--(0.85,4.8)node[near start,left]{$I_i^{\mathcal{G}_e}$}; \draw[-latex] (0.85,4.8)--(0,4.8); \draw (1.5,4.8) to[bigAmp](2.5,4.8);\node at (1.85,4.8){$2\alpha_i$}; \draw[-latex](0.85,4.8)--(1.5,4.8);\draw[-latex](2.5,4.8)--(3.1,4.8);\draw (3.2,4.8) circle (0.1)node{+};\draw[-latex](3.2,4)--(3.2,4.7)node[very near start,left]{$\beta_i$};\draw[-latex](3.3,4.8)--(6.15,4.8)node[near start,below]{$\lambda_i$};\draw[-latex](4,4.8)--(4,5.7); \draw[dashed,fill=gray!5] (6.15,3.9)rectangle(9.85,8.9); \node at (8,7.4){Neighbor-to-Neighbor};\node at (8,6.4){Inter-DG};\node at (8,5.4){Communication Network}; \end{circuitikz} \caption{Schematic diagram of the proposed distributed controller.} \end{figure} \section{Case Studies and Results} To show the effectiveness of the proposed controller, it is tested on a 48-Volt meshed dc MG, powered by six DGs. The DGs with odd (resp. even) numbers are interfaced to the grid via buck (resp. boost) converters, which are depicted in Fig.~\ref{TestMG} by circles (resp. squares). The electrical and control specifications of the MG shown in Fig.~\ref{TestMG} are given in Table~\ref{TableI}. 
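Before turning to the case studies, the per-DG update law in (12) can be sketched in a few lines of code; the two-DG data below (gains, communication weights, and measured currents) are purely illustrative and are not the test system of Table I:

```python
# Sketch of one control step of the per-DG law (12); the neighbor data
# (x_c[j], lam[j]) would arrive over the communication network.
def controller_step(i, I_Ge, x_c, lam, neighbors, a, RD, alpha, kP, kI, dt):
    """Return (u_i, updated x_i^c) from local data and neighbor messages."""
    z_lam = sum(a[i][j] * (lam[j] - lam[i]) for j in neighbors[i])
    z_c   = sum(a[i][j] * (x_c[j] - x_c[i]) for j in neighbors[i])
    u_i = RD[i] * I_Ge[i] + 2 * alpha[i] * (kP * z_lam - z_c)
    x_c_new = x_c[i] + dt * kI * z_lam   # forward-Euler integration of (8a)
    return u_i, x_c_new

# Hypothetical two-DG example; lambda_i = 2*alpha_i*I_i^{G_e} + beta_i is
# computed locally from the measured generator current.
alpha, beta, RD = [0.08, 0.19], [0.10, 0.25], [0.2, 0.5]
I_Ge = [10.0, 4.0]
lam = [2 * alpha[i] * I_Ge[i] + beta[i] for i in range(2)]
neighbors, a = {0: [1], 1: [0]}, [[0.0, 1.0], [1.0, 0.0]]
x_c = [0.0, 0.0]
u0, x0 = controller_step(0, I_Ge, x_c, lam, neighbors, a, RD, alpha,
                         kP=2.0, kI=100.0, dt=1e-3)
```

As in Fig. 3, only $x_j^c$ and $\lambda_j$ cross the communication network; everything else in the update is local to DG $i$.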
\begin{figure} \centering \begin{circuitikz}[european,scale=0.8,bigAmp/.style={amp, bipoles/length=0.5cm}] \ctikzset{bipoles/length=.6cm} \scriptsize \draw (0,0)to[R,l=$\mathcal{E}^e_1$,*-*](2,0)to[R,l=$\mathcal{E}^e_2$,-*](4,0)to[R,l=$\mathcal{E}^e_3$,-*](6,0)to[R,l=$\mathcal{E}^e_4$,-*](8,0)to[R,l=$\mathcal{E}^e_5$,-*](10,0); \draw (6,-3)to[R,l=$\mathcal{E}^e_8$,*-*](8,-3); \draw (6,-3)to[R](4,0);\node at (4.65,-1.7){$\mathcal{E}^e_6$};\draw(8,-3) to[R](10,0);\node at (9.35,-1.7){$\mathcal{E}^e_7$}; \draw (0,0)to[short](0,-.25)node[sground]{};\draw (2,0)to[short](2,-.25)node[sground]{};\draw (4,0)to[short](4,-.25)node[sground]{};\draw (6,0)to[short](6,-.25)node[sground]{};\draw (8,0)to[short](8,-.25)node[sground]{};\draw (10,0)to[short](10,-.25)node[sground]{};\draw (6,-3)to[short](6,-3.25)node[sground]{};\draw (8,-3)to[short](8,-3.25)node[sground]{}; \draw (0,1.5)to[R,l=$\mathcal{G}^e_1$](0,0);\draw (2,1.5)to[R,l=$\mathcal{G}^e_2$](2,0);\draw (6,1.5)to[R,l=$\mathcal{G}^e_3$](6,0);\draw (8,1.5)to[R,l=$\mathcal{G}^e_4$](8,0);\draw (6,-1.5)to[R,l=$\mathcal{G}^e_5$](6,-3);\draw (8,-1.5)to[R,a=$\mathcal{G}^e_6$](8,-3); \draw[fill=green!6] (0,1.6) circle (0.35)node{DG1};\draw[fill=green!6] (2-.35,1.6-.35) rectangle (2+.35,1.6+.35)node[midway]{DG2};\draw[fill=green!6] (6,1.6) circle (0.35)node{DG3};\draw[fill=green!6] (8-.35,1.6-.35) rectangle (8+.35,1.6+.35)node[midway]{DG4};\draw[fill=green!6] (6,-3+1.6) circle (0.35)node{DG5};\draw[fill=green!6] (8-.35,-3+1.6-.35) rectangle (8+.35,-3+1.6+.35)node[midway]{DG6}; \node at (0.3,-.2){$\mathcal{N}^e_1$};\node at (2.3,-.2){$\mathcal{N}^e_2$};\node at (6.3,-.2){$\mathcal{N}^e_3$};\node at (8.3,-.2){$\mathcal{N}^e_4$};\node at (6.3,-.2-3){$\mathcal{N}^e_5$};\node at (8.3,-.2-3){$\mathcal{N}^e_6$};\node at (4,.2){$\mathcal{N}^e_7$};\node at (10,.2){$\mathcal{N}^e_8$}; \draw[dashed,latex-latex,blue] (0.35,1.6) parabola (2-.35,1.6); \draw[dashed,latex-latex,blue] (2+.35,1.6)parabola(6-.35,-3+1.6); \draw[dashed,latex-latex,blue] 
(2+.35,1.6)parabola(6-.35,1.6); \draw[dashed,latex-latex,blue] (6+.35,1.6)parabola(8-.35,1.6); \draw[dashed,latex-latex,blue] (6+.35,-3+1.6)parabola(8-.35,-3+1.6); \draw[dashed,latex-latex,blue] (6-.33,1.5) .. controls (5.1,1.3) .. (6-.3,-3+1.8); \draw[dashed,latex-latex,blue] (8+.35,1.6) .. controls (10,0) .. (8+.35,-3+1.6); \draw[draw=none,fill=red!5] (0,-3.4)rectangle(4.3,-0.9); \draw[fill=green!6] (0.5,-1.2) circle (0.15); \draw[fill=green!6] (0.35,-1.75) rectangle (0.65,-1.45); \draw (0.5,-1.85)node[sground]{}; \draw (0.2,-2.4) to[R](0.8,-2.4); \draw[dashed,latex-latex,blue] (0.1,-2.8) --(0.9,-2.8); \draw (0.1,-3.2) --(0.9,-3.2); \node [text width=3cm] at (3,-2.17) {\fontsize{7pt}{9.5pt}\selectfont Buck-Based DG\\Boost-Based DG\\Capacitor \& Load\\Transmission Line\\Communication Link\\Electric Connection}; \draw[draw=none] (1.1,-1.35) rectangle (4,-1.05); \draw[draw=none] (1.1,-1.75) rectangle (4,-1.45); \draw[draw=none] (1.1,-2.15) rectangle (4,-1.85); \draw[draw=none] (1.1,-2.55) rectangle (4,-2.25); \draw[draw=none] (1.1,-2.95) rectangle (4,-2.65); \draw[draw=none] (1.1,-3.35) rectangle (4,-3.05); \end{circuitikz} \caption{Electrical and communication networks of the test microgrid system.\label{TestMG}} \end{figure} \begin{table} \centering \caption{The electrical and control specifications of the test MG} \label{TableI} \begin{tabular}{|c|c|c|c|c|c|c|c|c|}\hline \multicolumn{9}{|c|}{DGs' Specifications with Base RL of ($0.5\Omega$,$50\mu H$)}\\ \hline \multicolumn{3}{|c|}{}&\multicolumn{6}{|c|}{DG Number ($i\in\mathcal{G}_e$)}\\ \cline{4-9} \multicolumn{3}{|c|}{} & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline \multicolumn{3}{|c|}{$I^\text{rated}_i(A)$} & 15 & 6 & 12 & 12 & 10 & 8 \\ \hline \multicolumn{3}{|c|}{$R_i^D(V/A)$} & 0.2 & 0.5 & 0.25 & 0.25 & 0.3 & 0.375 \\ \hline \multicolumn{3}{|c|}{$\alpha_i (10^{-1}\$/A^2)$} & 0.8 & 1.9 & 1 & 1.4 & 1.2 & 1.6 \\ \hline \multicolumn{3}{|c|}{$\beta_i (10^{-1}\$/A)$} & 1 & 2.5 & 1.2 & 1.8 & 1.5 & 2.1 \\ \hline 
\multicolumn{3}{|c|}{$\gamma_i(10^{-1}\$)$} & 2 & 5 & 2 & 4 & 3 & 4 \\ \hline \multicolumn{3}{|c|}{$R_i^{\mathcal{G}_e}$, $L_i^{\mathcal{G}_e}$ (p.u.)} & 0.5 & 0.4 & 0.55 & 0.6 & 0.45 & 0.5\\ \hline \multicolumn{3}{|c|}{$k_i^P$} &\multicolumn{6}{|c|}{$2$} \\ \hline \multicolumn{3}{|c|}{$k_i^I$} &\multicolumn{6}{|c|}{$100$} \\ \hline\hline \multicolumn{9}{|c|}{Line Specifications ($R_i^{\mathcal{E}_e}$, $L_i^{\mathcal{E}_e}$) with Base RL of ($0.5\Omega$,$50\mu H$)}\\ \hline &\multicolumn{8}{|c|}{Line Number ($j\in\mathcal{E}_e$)}\\ \cline{2-9} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline (p.u.) & 1 & 2 & 2 & 1 & 1 & 3 & 1 & 2 \\ \hline\hline \multicolumn{9}{|c|}{Bus Specifications}\\ \hline &\multicolumn{8}{|c|}{Bus Number ($k\in\mathcal{N}_e$)}\\ \cline{2-9} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline $C_k^{\mathcal{N}_e} (F)$ &\multicolumn{8}{|c|}{$22\times10^{-3}$} \\ \hline $1/G^\text{cte}_k(\Omega)$ & 30 & 20 & 20 & 20 & 30 & 20 & 10 & 10 \\ \hline $I^\text{cte}_k (A)$ & 0.5 & 0.6 & 0.4 & 0.5 & 0.45 & 0.5 & 0.45 & 0.4 \\ \hline $P^\text{cte}_k (W)$ &\multicolumn{8}{|c|}{$0.8 G^\text{cte}_k V_n^2$ where $V_n=48 V$} \\ \hline \end{tabular} \end{table} \textit{Remark 2:} According to Assumption 1, to design the secondary controller, the converters are modeled by an equivalent zero-order model as in \eqref{e1e}; thus, the converter dynamics and their internal voltage controllers are hidden in Fig.~1 under the dashed blue box. However, in the simulations, a Linear Quadratic Regulator (LQR) technique is used to make the voltage $V_i$ track its reference $V_i^\text{ref}$ \cite{MahdiehDC}. Fig. 5 depicts the converter dynamics and the internal voltage controller. 
The resistance $R_i$, inductance $L_i$, and capacitance $C_i$ of all the converters are $0.1\Omega$, $2.64mH$, and $2.2mF$, respectively; the input voltages $V_i^\text{in}$ of DGs 1 to 6 are 80, 25, 100, 20, 80, and 25 $V$, respectively; $I_i$, $\zeta_i$, and $V_i$ are the states of the system, and $m_i$ is the duty cycle given to the PWM generator to produce the switching signal $g_i$ with a frequency of 5 kHz. To design a proper feedback gain matrix $\mathbf{K}_i\in\mathbb{R}^{3\times 3}$, the linearized second-order average model of the converters, augmented with a voltage-tracking integrator, is used, where the output current of the converter capacitor $I_i^{\mathcal{G}_e}$ is considered an external disturbance, along the lines of \cite{MahdiehDC}. \begin{figure} \centering \begin{circuitikz}[american,scale=0.69] \ctikzset{bipoles/length=.5cm} \scriptsize \draw (.2,1.4)to[open, v=$\:$] (0.2,0.1);\node at(0.1,0.75){$V_i^\text{in}$}; \draw (-.1,1.5)to[short, i=$\:$](0.4,1.5);\draw(0.4,1.5)--(0.8,1.8);\draw (0.8,1.5)--(1.3,1.5);\draw(1.3,0)to[D=$\:$](1.3,1.5); \draw (1.3,1.5)to[short, i=$I_i$](1.8,1.5);\draw (1.8,1.5)to[R=$R_i$](2.4,1.5)--(2.5,1.5);\draw (2.5,1.5)to[L=$L_i$](3.1,1.5)--(3.6,1.5)to[C=$V_i$] (3.6,0);\node at(3.1,0.75){$C_i$};\draw (4.3,1.4)to[open, v=$\:$] (4.3,0.1);\draw(3.6,1.5)to[short,i=$I_i^{\mathcal{G}_e}$](4.3,1.5);\node at(0.6,1.3){$g_i$}; \draw (-.1,0)--(4.3,0);\node at(-.45,0){(a)}; \end{circuitikz} \begin{circuitikz}[american,scale=0.69] \ctikzset{bipoles/length=.5cm} \scriptsize \draw (.3,1.4)to[open, v=$\:$] (0.3,0.1);\node at(0.2,0.75){$V_i^\text{in}$}; \draw (0,1.5)to[short, i=$I_i$](0.5,1.5)to[R=$R_i$](1.1,1.5)--(1.2,1.5)to[L=$L_i$](1.8,1.5)--(2.3,1.5); \draw (2.3,1.5)to[short,i=$\:$](2.3,.95)--(2,0.55);\draw (2.3,0.55)--(2.3,0); \draw(2.3,1.5)to[D=$\:$](3.3,1.5)to[C=$V_i$] (3.3,0);\node at(2.8,0.75){$C_i$};\draw (4,1.4)to[open, v=$\:$] (4,0.1);\draw(3.3,1.5)to[short,i=$I_i^{\mathcal{G}_e}$](4,1.5); \draw (0,0)--(4,0);\node 
at(-.35,0){(b)};\node at(1.85,.75){$g_i$}; \end{circuitikz}\\ \begin{circuitikz}[european,scale=0.8,bigAmp/.style={amp, bipoles/length=0.8cm}] \ctikzset{bipoles/length=.5cm} \scriptsize \draw (0.2,0) to[bigAmp](1.4,0);\node at (0.685,0){$\mathbf{K}_i$};\draw[-latex](1.15,0)--(1.8,0)node[near start,above]{$m_i$};\draw (1.8,-.25)rectangle(2.6,0.25);\node at (2.2,0){PWM};\draw[-latex](2.6,0)--(3,0)node[near end,above]{$g_i$}; \draw[fill=black] (-.5,-.5)rectangle(-.4,.5);\draw[line width=0.4mm,-latex] (-.4,0)--(0.45,0);\draw(-.1,-.1)--(.1,.1);\draw(-.15,-.1)--(.05,.1);\draw(-.05,-.1)--(.15,.1); \draw[-latex](-3.5,-.4)--(-.5,-.4);\draw[-latex](-1.5,0)--(-.5,0);\draw[-latex](-3.5,.4)--(-.5,.4);\draw (-2,-.25)rectangle(-1.5,.25);\node at (-1.75,0){$\int$}; \draw (-2.5,0) circle (0.1)node{+};\draw[-latex](-2.4,0)--(-2,0);\draw[-latex](-2.5,-.4)--(-2.5,-.1)node[midway,left]{$-$}; \draw[-latex](-3.5,0)--(-2.6,0);\node at (-3.8,0){$V_i^\text{ref}$};\node at (-3.8,-.4){$V_i$};\node at (-3.8,.4){$I_i$};\node at (-1.3,0.2){$\zeta_i$};\node at (-4.5,-.4){(c)};\node at (3.8,0.15){Switching};\node at (3.8,-.15){Signal}; \end{circuitikz} \caption{Converter circuit dynamics and internal controller; (a) buck converter, (b) boost converter, and (c) LQR-based voltage controller.} \end{figure} \subsection{Controller Performance: Activation and Load Change} \begin{figure*} \centering \includegraphics[width=\textwidth]{V.png} \includegraphics[width=\textwidth]{I.png} \includegraphics[width=\textwidth]{AVV.png} \caption{Simulation results; (a) the DGs' voltages, (b) the DGs' incremental costs (Lagrange multipliers), and (c) weighted average of the DGs' voltages.\label{Simulation1}} \end{figure*} Fig.~\ref{Simulation1} depicts the performance of the MG under the proposed controller in different stages. Before $t=5s$, the MG is operated without the proposed secondary control. 
Therefore, the DGs' voltages settle away from their nominal values, and their weighted average deviates from the nominal value of $48V$. Moreover, the incremental costs of the DGs take different values, which indicates that the KKT condition is not satisfied. After the controller is activated at $t=5s$, the DGs reach a consensus on their incremental costs and, at the same time, form their voltages around the nominal value such that their weighted average equals the nominal voltage. It should be noted that before $t=14s$, only the constant-impedance and constant-current loads are energized. To demonstrate the resiliency of the controller, at $t=14s$, the constant-power loads at all the buses are activated. One can see that the DGs reach an agreement on a new optimal incremental cost, higher than the previous one, which returns to its previous value after the constant-power loads are deactivated at $t=19s$. It should be emphasized that, throughout the load-change transitions, the weighted average voltage remains unchanged and only transient drifts from the nominal voltage are observed. \subsection{Controller Performance: Plug-and-Play Ability} To show the DGs' plug-and-play ability under the proposed controller, the 4th DG is disconnected from the grid at $t=24s$ and connected back at $t=29s$. To do so, the corresponding circuit breaker is opened at $t=24s$ to disconnect the DG physically, and the communication links related to the DG are all interrupted. Moreover, before the breaker is closed at $t=29s$, all the communication links are restored and both sides of the breaker are voltage-synchronized for a seamless connection of the DG. According to Fig. \ref{Simulation1}, after the 4th DG is disconnected from the grid, the other DGs inject more current and reach consensus on a new optimal incremental cost. Furthermore, one can see that the weighted average voltage of the remaining five DGs still settles at the nominal value, while the voltage of the fourth DG drops to that of bus 4. 
It is also shown that after it is connected back to the grid, the DG immediately participates in the current-sharing and voltage-formation tasks as before. \subsection{Real-Time Results From OPAL-RT} To verify the real-time effectiveness of the proposed controller, the previous system is built and loaded onto an OPAL-RT OP5600 real-time simulator, shown in Fig.~\ref{Setup}. It should be pointed out that, therein, the detailed switching models of the buck and boost converters, with a switching frequency of 5 kHz, are employed. The selected IGBTs and diodes have an internal resistance of $1m\Omega$ and a forward voltage of $0.8V$. The other (passive) components of the converters and their inner voltage controllers are exactly the same as described in the preamble of this section (Remark 2). Fig.~\ref{RTResults} indicates the alignment of the real-time system responses with the simulation results in Section IV-A. Due to the input limitations of the oscilloscope, only the results for DGs 1, 2, 5, and 6 are given. After the controller is activated, the incremental costs reach a consensus and the voltages reach a formation around the nominal value such that their weighted average settles at the nominal value. The results for the load-increase scenario further confirm the effectiveness of the proposed control in reaching the current-sharing and voltage-formation control goals under severe load changes. 
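As a cross-check of the equal-incremental-cost behavior observed in these experiments, the dispatch implied by the KKT condition (6) admits a closed form; the sketch below uses the cost coefficients of Table I (in $10^{-1}\,\$$ units), while the aggregate load current is an assumed value for illustration only:

```python
import numpy as np

# Equal incremental cost dispatch implied by the KKT condition (6):
# lambda_i = 2*alpha_i*I_i + beta_i identical across DGs, sum_i I_i = I_total.
alpha = 0.1 * np.array([0.8, 1.9, 1.0, 1.4, 1.2, 1.6])  # Table I ($/A^2)
beta  = 0.1 * np.array([1.0, 2.5, 1.2, 1.8, 1.5, 2.1])  # Table I ($/A)
I_total = 30.0   # assumed aggregate load current (A), illustrative

# From I_i = (lam - beta_i)/(2*alpha_i) and sum_i I_i = I_total:
lam = (I_total + np.sum(beta / (2 * alpha))) / np.sum(1.0 / (2 * alpha))
I = (lam - beta) / (2 * alpha)

assert np.isclose(I.sum(), I_total)               # demand is met
assert np.allclose(2 * alpha * I + beta, lam)     # equal incremental costs
```

This is the steady state the distributed controller converges to without any central computation of `lam`.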
\begin{figure} \centering \includegraphics[width=\columnwidth]{Setup.pdf} \caption{Real-time simulation setup.\label{Setup}} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{VRT.png} \includegraphics[width=\columnwidth]{IRT.png} \includegraphics[width=\columnwidth]{AVRT.png} \caption{Real-time results; (a) the DGs' voltages, (b) the DGs' incremental costs (Lagrange multipliers), and (c) weighted average of the DGs' voltages.\label{RTResults}} \end{figure} \section{Conclusions} A distributed secondary control technique is proposed for dc MGs with ZIP loads, which drives the MG to an operating point where the KKT optimality condition is satisfied for all the DGs and their weighted average voltage equals the nominal value. The closed-loop system (the MG together with the proposed controller) is formulated in a port-Hamiltonian representation and shown to be asymptotically stable using the Lyapunov and LaSalle theorems. It is also shown that the system is globally asymptotically stable in the absence of constant-power loads. The effectiveness of the proposed controller is verified for different case studies by applying it to a test system through both non-real-time and real-time simulations. It should be noted that for the theoretical analyses each DG is modeled by an equivalent zero-order model as a controllable voltage source, while in the MATLAB/Simulink simulations and the OPAL-RT model the average model and the detailed switching model are used, respectively. All in all, the theoretical analyses and case studies demonstrate the effectiveness of the proposed controller in achieving the desired control goals. \bibliographystyle{IEEEtran}
\section{Introduction} The Heisenberg group $\mathbb{H}^n$ plays a fundamental role in quantum mechanics \cite{Folland} and signal processing \cite{Grochenig}. Recall that the matrix $ \mathcal{J} = \left[ {\begin{smallmatrix} 0 & -I_n \\ I_n & 0 \end{smallmatrix} } \right ] $ determines a \emph{symplectic form} on $\mathbb{R}^{2n}$ by \begin{equation}\label{eq:sympl} \qquad\qquad\qquad \llbracket w , \tilde w \rrbracket = \trp{w} \mathcal{J} \tilde w \qquad\qquad (w , \tilde w \in \mathbb{R}^{2n}). \end{equation} The \emph{Heisenberg group} is the set \[ \mathbb{H}^n = \bigl\{ ( w ,z) \ : \ w \in \mathbb{R}^{2n},\ z \in \mathbb{R} \, \bigr \} \] endowed with the topology of $\mathbb{R}^{2n+1}$ and the group operation \[ ( w ,z)( \tilde w ,\tilde z) = \bigl ( w + \tilde w, z+ \tilde z + \frac{1}{2} \llbracket w , \tilde w \rrbracket \bigr ) . \] The \emph{symplectic group} $ Sp(n, \mathbb{R}) $ acts naturally on $\mathbb{H}^n$ by automorphisms. Recall here that the symplectic group is the set of all invertible matrices preserving the symplectic form (\ref{eq:sympl}), \[ Sp(n, \mathbb{R}) = \left \{ \, \mathcal{A} \in GL_{2n}(\mathbb{R}) \ \big | \ \llbracket {\mathcal{A} w ,\mathcal{A} \tilde w } \rrbracket = \llbracket w , \tilde w \rrbracket \qquad \forall \ w, \tilde w \in {\mathbb{R}}^{2n} \ \right \}, \] and each of its elements defines an automorphism $\alpha_{\mathcal{A}}$ of $\mathbb{H}^n$ by \[ \alpha_{\mathcal{A}}(w,z) = ( \mathcal{A}w,z) \] fixing the elements of the center $Z=\{(0,z):z \in \mathbb{R} \}$ of $\mathbb{H}^n$. Our interest centers around a different family of automorphisms. Separating the phase space $W=\{(w,0): w \in \mathbb{R}^{2n} \}$ into its two components $X= \{ ((x,0),0 ) : x \in \mathbb{R}^n \}$ and $Y = \{ ((0,y),0 ) : y \in \mathbb{R}^n \}$, we consider automorphisms which leave all three subgroups $X$, $Y$ and $Z$ invariant, without fixing elements of the center $Z$. 
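As a quick illustration, the group law above can be checked mechanically. The sketch below (NumPy; the choice $n=2$ and the randomly drawn elements are illustrative assumptions, not part of the text) verifies associativity of the operation and the antisymmetry of the symplectic form:

```python
import numpy as np

n = 2  # illustrative choice of dimension
# the matrix J determining the symplectic form [[w, w~]] = w^T J w~
J = np.block([[np.zeros((n, n)), -np.eye(n)],
              [np.eye(n), np.zeros((n, n))]])

def sympl(w, wt):
    return w @ J @ wt

def mult(g, h):
    # Heisenberg group law: (w, z)(w~, z~) = (w + w~, z + z~ + (1/2)[[w, w~]])
    w, z = g
    wt, zt = h
    return (w + wt, z + zt + 0.5 * sympl(w, wt))

rng = np.random.default_rng(0)
a, b, c = [(rng.standard_normal(2 * n), rng.standard_normal()) for _ in range(3)]

lhs, rhs = mult(mult(a, b), c), mult(a, mult(b, c))
assert np.allclose(lhs[0], rhs[0]) and np.isclose(lhs[1], rhs[1])  # associativity
assert np.isclose(sympl(a[0], a[0]), 0.0)  # the form is antisymmetric
```

Associativity holds because the central correction $\frac{1}{2}\llbracket w,\tilde w\rrbracket$ is bilinear, so both groupings accumulate the same sum of pairwise form values.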
For this purpose, it will be more convenient to work with the \emph{polarized Heisenberg group} $\mathbb{H}_{pol}^n $, \[ \mathbb{H}_{pol}^n = \bigl\{ \, ( x , y,z) \ : \ x, y \in \mathbb{R}^n,\, z \in \mathbb{R} \, \bigr \} \] which has the group operation \[ ( x , y ,z)( \tilde x,\tilde y,\tilde z) = \left ( x + \tilde x, y + \tilde y, z+\tilde z + \trp{y} \tilde x \right ) \] and the simple representation as a matrix group \[ \mathbb{H}_{pol}^n = \left \{ h( x , y, z) \, = \, \begin{bmatrix} 1 & \trp{y} & z \\ 0 & I_n & x \\ 0 & 0 & 1 \end{bmatrix}\ : \ x, y \in \mathbb{R}^n, \ z \in \mathbb{R} \right \} \subset GL_{n + 2} (\mathbb{R}). \] The two Heisenberg groups are isomorphic via the map \[ \qquad\qquad \Psi : ( w,z) \in \mathbb{H}^n \mapsto h( x , y,z+\textstyle \frac{1}{2} \trp{y} x) \in \mathbb{H}_{pol}^n \qquad \left ( w = \left [ \begin{smallmatrix} x\\ y \end{smallmatrix} \right ], \ x,y \in \mathbb{R}^n \right ). \] Now consider the closed subgroup of $GL_{n+2}(\mathbb{R})$ of the form \[ D_0 = \left \{ \ d(a,A,c) : =\left[ {\begin{array}{*{20}c} a & 0 & 0 \\ 0 & {A} & 0 \\ 0 & 0 & c \\ \end{array} } \right] \ : a,c \in \mathbb{R} \backslash \{0\}, \ A \in GL_n(\mathbb{R}) \, \right \}. \] There is a natural action $\alpha$ of $D_0$ on $\mathbb{H}_{pol}^n$ by conjugation, \begin{equation}\label{equa:01} \qquad \alpha_d \left ( \, h( x, y ,z) \, \right ) = d(a,A,c) \, h( x, y ,z) \, d(a,A,c)^{-1} \qquad (d=d(a,A,c) \in D_0). \end{equation} Given a closed subgroup $D$ of $D_0$, one thus obtains a semi-direct product $ \mathbb{H}^{n}_{pol} \rtimes D $ which can be represented as a matrix group \begin{equation}\label{eq:01} \mathbb{H}^{n}_{pol} \rtimes D \cong \left\{ \, h(x, y ,z)d(a,A,c) \,:\, h(x, y, z) \in \mathbb H_{pol}^n, \, d(a,A, c) \in D \, \right\} \subset GL_{n+2}(\mathbb{R}). 
\end{equation} Observe that replacing $d(a,A,c)$ with $d(c^{-1}a, c^{-1}A,1)$ results in the same automorphism and hence an isomorphic semi-direct product, so one need only consider groups $D$ with $c=1$. When $D=\{ \, d(1,A,1): A \in GL_n(\mathbb{R}) \, \} \cong GL_n(\mathbb{R})$, this semi-direct product is called the \emph{affine-Weyl-Heisenberg group}. It has been extensively studied in \cite{Ali, Hogan, Kalisa, Torresani}. In \cite{Schulz}, the groups (\ref{eq:01}) were studied in the case $n=1$ (so that $A$ is a scalar) with $a,A \in \mathbb{R}^{+}$, $c=1$, and were classified up to isomorphism. It was further observed that they are isomorphic to affine semi-direct products, \begin{equation}\label{equa:03} \mathbb{H}^{1}_{pol} \rtimes D \cong \mathbb R^{2} \rtimes H \end{equation} where $H$ is a closed subgroup of $GL_2(\mathbb{R})$, and $\mathbb R^{2}$ and $H$ are identified with the groups of matrices, \[ \mathbb R^{2} \cong \left\{ \, h(x,0,z) \; : \; x , z \in \mathbb{R} \, \right\} \] and \[ H \cong \left\{ \, h(0,y,0)d(a,A,1) \,:\, a,A \in \mathbb{R}^+ \, \right \}. \] Thus, the groups are isomorphic to subgroups of the affine group and have a wavelet representation. In \cite{Cordero} and \cite{Czaja}, two subgroups of the symplectic group $Sp(n+1,\mathbb{R})$, denoted $(CDS)_{n+1}$ and $(TDS)_{n+1}$, were shown to be isomorphic to subgroups of the affine group, and it was shown that their metaplectic representations and wavelet representations have equivalent subrepresentations. In \cite{Namngam}, it was demonstrated that these two groups fall into the class of groups of the form (\ref{eq:01}), where $D$ is a one-parameter group \[ D= D_{p, B}:= \{ \, d( e^{pt} ,e^{Bt},1) : t \in \mathbb{R} \, \} \] for some fixed $p \in \mathbb{R}$ and $B \in M_{n}(\mathbb{R})$, the latter not similar to a skew-symmetric matrix in the case $p=0$.
Furthermore, all groups of the form $ \mathbb{H}_{pol}^n \rtimes D_{p,B}$ were classified up to isomorphism, and were shown to be isomorphic to subgroups of both the symplectic group $Sp(n+1,\mathbb{R})$ and the affine group $\textit{Aff}(n+1,\mathbb{R})$. In addition, it was shown that the metaplectic representation splits into two subrepresentations, each of which is equivalent to a subrepresentation of the wavelet representation. In this paper, we continue the study of semi-direct products of type $ \mathbb{H}_{pol}^n \rtimes D_{p,B}$, where $D_{p,B}$ is now a two-parameter group. After having introduced these groups, we classify them up to isomorphism by analyzing their Lie algebras. We then proceed to show that they are isomorphic to subgroups of $Sp(n+1,\mathbb{R})$ as well as $\textit{Aff}(n+1,\mathbb{R})$, and study their metaplectic and wavelet representations. \section{Extensions of the Heisenberg Group} \subsection{The groups $G_{p,B}$} For given fixed numbers $p_1, p_2 \in \mathbb{R}$ and fixed commuting matrices $B_1, B_2 \in M_n(\mathbb{R})$, let us set \[ p:=(p_1, p_2) \qquad \text{and} \qquad B:=(B_1, B_2). \] We also set \[ D_{p,B} = \left\{ {d(t): =\left[ {\begin{array}{*{20}c} e^{pt} & 0 & 0 \\ 0 & e^{Bt} & 0 \\ 0 & 0 & 1 \\ \end{array} } \right] \ :\ t \in \mathbb{R}^2} \right\} \] where $pt$ and $Bt$ denote ``scalar'' products, \[ \qquad pt = p_{1}t_{1}+p_{2}t_{2},\;\; Bt=B_{1}t_{1}+B_{2}t_{2} \quad \qquad (t=\trp{(t_1,t_2)} \in \mathbb{R}^2 ). \] Then $D_{p,B}$ is an abelian (not necessarily closed) subgroup of $ GL_{n+2}(\mathbb{R})$. Conjugation by elements of $D_{p,B}$ naturally defines a continuous action $\alpha$ of $\mathbb{R}^2$ on $\mathbb{H}^{n}_{pol}$ by \begin{equation}\label{eq:5} \alpha_{t} \bigl ( \, h(x,y,z) \, \bigr ) := d(t)\;h(x,y,z)\; d(t)^{-1} = h \bigl (e^{Bt} x, e^{pt} [e^{-Bt}]^{T} y,e^{pt} z \bigr ) .
\end{equation} We can thus form the semidirect product \[ G_{p,B} := \mathbb{H}_{pol}^{n} \rtimes_{\alpha} \mathbb{R}^{2}. \] The group operation is given by \begin{align*} \left ( h(x,y,z) ,t \right ) \left ( h(\tilde x,\tilde y,\tilde z) , \tilde t \right ) &= \left ( h (x,y,z ) \, \alpha_t \left ( h(\tilde x,\tilde y,\tilde z) \right ), t+\tilde t \right ) \\ &= \left ( h (x+ e^{Bt} \tilde x,y+ e^{pt} [e^{-Bt}]^{T} \tilde y,z +e^{pt} \tilde z +y^{T} e^{Bt} \tilde x ) , t+\tilde t \right ) . \end{align*} Alternatively, we may represent elements of $G_{p,B}$ as quadruples $g(t,x,y,z)$, in which case the group operation becomes \begin{equation}\label{eq:4} g(t, x, y,z) g(\tilde t, \tilde x, \tilde y, \tilde z) = g(t + \tilde t, x+ e^{Bt} \tilde x,y+ e^{pt} [e^{-Bt}]^{T} \tilde y,z + e^{pt} \tilde z + y^{T}e^{Bt} \tilde{x}). \end{equation} \subsection{The groups $G_{p,B}$ as closed subgroups of $GL_{n+2}(\mathbb{R})$} Observe that each group $G_{p,B}$ is isomorphic and homeomorphic to the matrix group \[ G_{p,B} \simeq \left \{ \, \tilde g(t,x,y,z)= \begin{bmatrix} \left [ \begin{smallmatrix} e^{pt} & y^{T} e^{Bt} & z \\ 0 & e^{Bt} & x \\[4pt] 0 & 0 & 1 \end{smallmatrix} \right ] & 0\\ 0 & \left [ \begin{smallmatrix} e^{t_1} & 0 \\ 0 & e^{t_2} \end{smallmatrix} \right] \end{bmatrix} \, : \, \begin{array}{l} t = \trp{(t_1,t_2)} \in \mathbb{R}^2 \\ x,y \in \mathbb{R}^n\\ z\in \mathbb{R} \end{array} \right \} \subset GL_{n+4}(\mathbb{R}). \] This representation of $G_{p,B}$ is not very useful. Under some mild assumptions on the matrices $B_1$ and $B_2$ it is, however, possible to identify $D_{p,B}$ with $\mathbb{R}^2$, and hence $G_{p,B}$ with a closed subgroup of $GL_{n+2}(\mathbb{R})$.
The main ingredient is the proof of Lemma 11 in \cite{Bruna}, which shows the following: \begin{lem}\label{rem:2} Let $M_1$ and $M_2$ be commuting $d \times d$ matrices, and suppose that \begin{enumerate} \setlength{\parskip}{0pt} \setlength{\itemsep}{1pt} \item[(M1)] $M_1, M_2$ are linearly independent, and \item[(M2)] no nonzero element of $ V_M:=\text{span}(M_1, M_2)$ is similar to a skew-symmetric matrix. \end{enumerate} Then the exponential map $\exp: M \mapsto e^M$ is an isomorphism and homeomorphism of the additive group $V_M$ onto a closed subgroup of $GL_d(\mathbb{R})$. \end{lem} Let us now set \[ M_k= \begin{bmatrix} p_k & 0 & 0 \\ 0 & B_k & 0 \\ 0 & 0 & 0 \end{bmatrix}, \qquad (k=1,2), \] and assume from here onwards that the matrices $M_1$ and $M_2$ satisfy the conditions (M1)--(M2). (This certainly is the case when $p_1=1$, $p_2=0$ and $B_2$ is not similar to a skew-symmetric matrix. When $p_1=p_2=0$, this is the case if and only if $B_1$ and $B_2$ satisfy (M1)--(M2).) Applying Lemma \ref{rem:2} we immediately obtain that the map $t \mapsto d(t)$ is an isomorphism and homeomorphism of $\mathbb{R}^2$ onto $D_{p,B}$ and that $D_{p,B}$ is closed in $GL_{n+2}(\mathbb{R})$, and hence by (\ref{eq:5}), \begin{equation}\label{eq:8} G_{p,B} \simeq \left \{ \ g(t,x,y,z)= \begin{bmatrix} e^{pt} & y^{T}e^{Bt} & z \\ 0 & e^{Bt} & x \\ 0 & 0 & 1 \end{bmatrix} \ : \ \begin{array}{l} t \in \mathbb{R}^2 \\ x,y \in \mathbb{R}^n \\ z\in \mathbb{R} \end{array} \ \right \} \subset GL_{n+2}(\mathbb{R}). \end{equation} \section{Classification of the Groups $G_{p,B}$} Observe that under assumptions (M1) and (M2), each group $G_{p,B}$ is a connected, simply connected Lie group. Thus, two groups $G_{p,B}$ and $G_{\tilde p,\tilde B}$ will be isomorphic if and only if their Lie algebras $\mathfrak{g}_{p,B}$ and $\mathfrak{g}_{\tilde p, \tilde B}$ are isomorphic.
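As a sanity check of the matrix realization (\ref{eq:8}), one can verify numerically that these matrices multiply according to the group law (\ref{eq:4}). The following sketch (NumPy) uses the illustrative choices $n=2$, $p=(1,0)$, and commuting diagonal matrices $B_1$, $B_2$, for which (M1)--(M2) hold; these parameter values are assumptions made only for the demonstration:

```python
import numpy as np

n = 2
p = np.array([1.0, 0.0])                            # p = (p_1, p_2)
B1, B2 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])   # commuting matrices

def eB(t):
    # e^{Bt} for Bt = B_1 t_1 + B_2 t_2 (diagonal, so exponentiate entrywise)
    return np.diag(np.exp(np.diag(B1 * t[0] + B2 * t[1])))

def g(t, x, y, z):
    # the matrix g(t, x, y, z) of eq. (8)
    M = np.zeros((n + 2, n + 2))
    M[0, 0], M[0, 1:n+1], M[0, n+1] = np.exp(p @ t), y @ eB(t), z
    M[1:n+1, 1:n+1], M[1:n+1, n+1] = eB(t), x
    M[n+1, n+1] = 1.0
    return M

rng = np.random.default_rng(1)
t, tt = rng.standard_normal(2), rng.standard_normal(2)
x, y, xt, yt = (rng.standard_normal(n) for _ in range(4))
z, zt = rng.standard_normal(2)

ept = np.exp(p @ t)
# the right-hand side of the group law (4)
prod = g(t + tt,
         x + eB(t) @ xt,
         y + ept * np.linalg.inv(eB(t)).T @ yt,
         z + ept * zt + y @ eB(t) @ xt)
assert np.allclose(g(t, x, y, z) @ g(tt, xt, yt, zt), prod)
```

The check relies on $e^{Bt}e^{B\tilde t}=e^{B(t+\tilde t)}$, which holds precisely because $B_1$ and $B_2$ commute.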
\subsection{The Lie algebras $\mathfrak{g}_{p,B}$} Standard computations show that each Lie algebra $\mathfrak{g}_{p,B}$ is of dimension $2n+3$ and isomorphic to the matrix subalgebra of $M_{n+2}(\mathbb{R})$ of the form \[ \mathfrak{g}_{p,B} = V_{M} \oplus V_H = \underbrace{V_{M_1} \oplus V_{M_2}}_{V_M} \oplus \underbrace{ \overbrace{V_X \oplus V_Y}^{V_W} \oplus V_Z}_{V_H}, \] where $V_M$ is a $2$-dimensional abelian subalgebra, $V_H$ is the $(2n+1)$-dimensional Heisenberg algebra, and \[ V_{M_1} = \{ s M_1: s \in \mathbb{R} \} , \qquad\qquad V_{M_2} = \{ t M_2 : t \in \mathbb{R} \}, \] \[ V_X = \{ X_x : x \in \mathbb{R}^n \}, \quad\qquad V_Y = \{ Y_y : y \in \mathbb{R}^n \}, \quad\qquad V_Z = \{ Z_z : z \in \mathbb{R} \} \] with \[ X_x = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & x \\ 0 & 0 & 0 \end{bmatrix}, \qquad Y_y = \begin{bmatrix} 0 & y^{T} & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \qquad Z_z = \begin{bmatrix} 0 & 0 & z \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}. \] The only possibly nonzero Lie brackets are determined by \begin{equation}\label{eq:9} [M_k,X_x]=X_{B_k x}, \quad [M_k,Y_y]=Y_{(p_kI-B_k^{T}) y}, \quad [M_k,Z_z]=Z_{p_k z}, \quad [Y_y,X_x] = Z_{y^{T} x}, \end{equation} $k=1,2$. For the purpose of classifying Lie algebras of this type we do not require condition (M2). Observe that these Lie algebras are solvable, and that $V_H$ is an ideal of the nilradical. It will be convenient to denote elements $X_x + Y_y$ of $V_W$ by $W_w$, where $w= \left [ \begin{smallmatrix} x \\ y \end{smallmatrix} \right ] $.
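Since all of these elements are matrices in $M_{n+2}(\mathbb{R})$, the bracket relations (\ref{eq:9}) can be confirmed directly as matrix commutators. A brief numerical sketch (NumPy; the size $n=2$ and the random choices of $p_k$, $B_k$, $x$, $y$ are purely illustrative assumptions):

```python
import numpy as np

n = 2
rng = np.random.default_rng(2)
pk, Bk = 0.7, rng.standard_normal((n, n))   # illustrative p_k, B_k
x, y, z = rng.standard_normal(n), rng.standard_normal(n), 1.3

def Mk(p, B):
    # M_k = diag(p_k, B_k, 0) in M_{n+2}(R)
    A = np.zeros((n + 2, n + 2)); A[0, 0] = p; A[1:n+1, 1:n+1] = B
    return A

def X(x):
    A = np.zeros((n + 2, n + 2)); A[1:n+1, n+1] = x
    return A

def Y(y):
    A = np.zeros((n + 2, n + 2)); A[0, 1:n+1] = y
    return A

def Z(z):
    A = np.zeros((n + 2, n + 2)); A[0, n+1] = z
    return A

def br(A, B):
    return A @ B - B @ A  # matrix commutator

M = Mk(pk, Bk)
assert np.allclose(br(M, X(x)), X(Bk @ x))                       # [M_k, X_x]
assert np.allclose(br(M, Y(y)), Y((pk * np.eye(n) - Bk.T) @ y))  # [M_k, Y_y]
assert np.allclose(br(M, Z(z)), Z(pk * z))                       # [M_k, Z_z]
assert np.allclose(br(Y(y), X(x)), Z(y @ x))                     # [Y_y, X_x]
```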
In this notation, some of the Lie brackets in (\ref{eq:9}) become \begin{equation*} [W_w,W_{\tilde w}] = [ X_x + Y_y, X_{\tilde x}+ Y_{\tilde y} ] = Z_{y^{T} \tilde x - \tilde y^{T} x} = Z_{ \llbracket w, \tilde w \rrbracket }, \end{equation*} and also \begin{equation}\label{eq:9a} [M_k,W_w] =[M_k,X_x] + [M_k,Y_y] = X_{B_k x} + Y_{(p_k I_n-B_k^{T}) y} = W_{C_kw} \end{equation} with \begin{equation}\label{eq:10} C_k= \begin{bmatrix} B_k & 0 \\ 0 & p_k I_n -B_k^{T} \end{bmatrix} \in M_{2n}(\mathbb{R}). \end{equation} \bigskip The following lemma is probably well known. \begin{lem}\label{lem1} Let a triple $(\lambda,u,S)$ be given, where $\lambda >0$, $u \in \mathbb{R}^{2n}$, and $S \in GL_{2n}(\mathbb{R})$ satisfies $S^{T} \mathcal{J} S= \pm \mathcal{J}$. Then \begin{equation}\label{eq:10c} \qquad\qquad\qquad \Phi( W_w ) = W_{\lambda Sw} + Z_{u^{T} w} \quad \text{and} \quad \Phi( Z_z ) = Z_{ \pm \lambda^2 z} \qquad\qquad (W_w \in V_W, \, Z_z \in V_Z) \end{equation} defines an automorphism of the Heisenberg algebra $V_H$. Conversely, every automorphism of $V_H$ is of this form. \end{lem} \begin{proof} It is clear that the linear map $\Phi$ defined by (\ref{eq:10c}) constitutes a linear automorphism of $V_H$. On the other hand, by assumption on $S$ we have for all $w, \tilde w \in \mathbb{R}^{2n}$, \begin{equation}\label{eq:10d} \begin{split} \left [\Phi( W_{w}), \Phi(W_{\tilde w}) \right ] & = \left [ W_{\lambda Sw} + Z_{u^{T} w }, W_{\lambda S \tilde w} + Z_{u^{T} \tilde w} \right ] = Z_{\llbracket \lambda Sw, \lambda S \tilde w \rrbracket } \\ & = Z_{ \pm \lambda^2 \llbracket w, \tilde w \rrbracket } = \Phi \left ( Z_{\llbracket w, \tilde w \rrbracket } \right ) = \Phi \left ( \left [ W_{w}, W_{\tilde w} \right ] \right ), \end{split} \end{equation} and it follows that $\Phi$ preserves the Lie brackets. Conversely, let $\Phi$ be a Lie algebra automorphism of $V_H$.
In light of the decomposition $V_H = V_W \oplus V_Z$ and since $\Phi$ leaves the center $V_Z$ invariant, $\Phi$ has a matrix representation \[ \Phi \leftrightarrow \begin{bmatrix} a_{11} & 0 \\ a_{21} & a_{22 } \end{bmatrix} \] where $a_{11} \in GL_{2n}(\mathbb{R})$ and $a_{22} \neq 0$. Computing as in (\ref{eq:10d}) we have for all $w,\tilde w \in \mathbb{R}^{2n}$, \begin{equation*} \begin{split} Z_{ a_{22} \llbracket w, \tilde w \rrbracket } &= \Phi \left ( Z_{\llbracket w, \tilde w \rrbracket } \right ) = \Phi \left ( \, \left [ W_{w}, W_{\tilde w} \right ] \, \right ) = \left [\Phi( W_{w}), \Phi(W_{\tilde w}) \right ] \\ & = \left [ W_{ a_{11}w } + Z_{ a_{21} w }, W_{a_{11}\tilde w} + Z_{a_{21} \tilde w} \right ] = Z_{\llbracket a_{11}w, a_{11} \tilde w \rrbracket }. \end{split} \end{equation*} Set $\lambda = \sqrt{ |a_{22}| }$, $S = \frac{1}{\lambda} a_{11}$ and $u =a_{21}^{T}$. Then $ \llbracket Sw, S \tilde w \rrbracket = \text{sgn}(a_{22}) \llbracket w, \tilde w \rrbracket $, that is $ S^{T} \mathcal J S = \text{sgn}(a_{22}) \mathcal{J} $, and the assertion follows. \end{proof} \subsection{Classification of the Lie algebras $\mathfrak{g}_{p,B}$} Let us first introduce some normalization to the class of Lie algebras $\mathfrak{g}_{p,B}$. Given two algebras $\mathfrak{g}_{p,B}$ and $\mathfrak{g}_{\tilde p, \tilde B}$, their Heisenberg parts are identical, so we will use the same symbol $V_H$ to denote the two, and the remaining component spaces will be denoted by $V_M$ and $V_{\tilde M}$, respectively. \begin{thm}\label{thm:1} If any of the following properties hold, then the two Lie algebras $\mathfrak{g}_{p,B}$ and $\mathfrak{g}_{\tilde p, \tilde B}$ are isomorphic: \begin{enumerate} \item $\tilde p = p$ and there exists $S \in Sp(n,\mathbb{R})$ so that \[ \tilde C_k= S C_k S^{-1} \qquad \qquad (k=1, 2) , \] with $C_k$ and $\tilde C_k$ given as in (\ref{eq:10}).
\item $\tilde p = p$ and there exists $V \in GL_{n}(\mathbb{R})$ so that \[ \tilde B_k= V B_k V^{-1} \qquad \qquad (k=1, 2) . \] \item Each $\tilde M_i$ is a linear combination of $M_1$ and $M_2$, \[ \tilde M_i = a_{i1} M_1 + a_{i2} M_2 \qquad \qquad (i=1, 2) \] with $\det(A) \neq 0$ where $A= \left[ \begin{smallmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{smallmatrix} \right ] $. \item There exists $\alpha \neq 0$ so that $\tilde M_k = \alpha M_k$ for $k=1, 2$. \end{enumerate} \end{thm} \begin{proof} \begin{enumerate} \item Define a linear isomorphism $\Phi : \mathfrak{g}_{p, B} \to \mathfrak{g}_{\tilde p, \tilde B}$ by \[ \Phi(M_k) = \tilde M_k, \qquad \Phi(W_w) = W_{Sw}, \qquad \Phi(Z_z) = Z_z . \] In light of Lemma \ref{lem1} one only needs to verify that Lie brackets involving the matrices $M_k$ are preserved. This is indeed the case, as by (\ref{eq:9a}), for all $k=1,2$, \[ \qquad\quad \left [ \Phi(M_k), \Phi( W_w) \right ] = \ [ \tilde M_k, W_{Sw} ] = W_{\tilde C_k Sw} = W_{ S C_k w} = \Phi \left (W_{C_k w} \right ) = \Phi \left ( [M_k,W_w] \right ) \] and \[ \left [ \Phi(M_k), \Phi( Z_z) \right ] = \ [ \tilde M_k, Z_z ] = Z_{\tilde p_kz} = Z_{p_kz} = \Phi \left ( [M_k,Z_z] \right ). \] \item Simply apply the above to \[ S = \begin{bmatrix} V & 0 \\ 0 & \left (V^{-1}\right )^{T} \end{bmatrix} . \] \item This is merely a change of basis of the subalgebra $V_M$, and hence both Lie algebras coincide. \item This is a particular change of basis, choosing $a_{ik} = \alpha \delta_{i,k}$. \end{enumerate} \end{proof} Replacing the matrices $M_1, M_2$ (and consequently $B_1, B_2$) with appropriate linear combinations, by Theorem \ref{thm:1}, we may from here on assume that $p_1 \in \{0,1\}$ and $p_2=0$. After this normalization, we aim to give a converse of Theorem \ref{thm:1}. \begin{rem}\label{rem:1} If two normalized Lie algebras $\mathfrak{g}_{p,B}$ and $ \mathfrak{g}_{\tilde p, \tilde B}$ are isomorphic, then $p_1 = \tilde p_1$ (i.e. $p=\tilde p$).
In fact, if $ \Phi : \mathfrak{g}_{p,B} \mapsto \mathfrak{g}_{\tilde p, \tilde B}$ is a Lie algebra isomorphism, then $\Phi$ maps center onto center. Since $\mathfrak{g}_{ p, B}$ has trivial center when $p_1=1$, and center $V_Z$ when $p_1=0$, it immediately follows that $p_1=\tilde p_1$. \end{rem} This remark shows that the normalized Lie algebras $\mathfrak{g}_{p,B}$ need only be classified with respect to the various choices of $B$. \begin{thm}\label{thm:2} Let $ \Phi : \mathfrak{g}_{p,B} \mapsto \mathfrak{g}_{p, \tilde B}$ be an isomorphism of normalized Lie algebras mapping $V_H$ onto $V_H$. Then there exists $S \in Sp(n,\mathbb{R})$ so that, after replacing the matrices $\tilde M_1, \tilde M_2$ with suitable linear combinations, \begin{equation*} \tilde C_k = S C_k S^{-1}, \qquad k=1, 2, \end{equation*} with $C_k$ and $\tilde C_k$ given as in (\ref{eq:10}). \end{thm} \begin{proof} Suppose that $\Phi: V_H \mapsto V_H$. Then in light of Lemma \ref{lem1}, $\Phi$ has the matrix representation \begin{equation}\label{eq:14} \Phi \leftrightarrow \begin{bmatrix} E_{11} & 0 & 0 \\ E_{21} & E_{22} & 0 \\ E_{31} & E_{32} & E_{33} \end{bmatrix}, \end{equation} corresponding to the decomposition $ \mathfrak{g}_{ p, B}=V_M \oplus V_W \oplus V_Z$. Note that composing $\Phi$ with the automorphism $\Psi$ of $ \mathfrak{g}_{ p, \tilde B}$ given by the matrix \[ \Psi \leftrightarrow \begin{bmatrix} I_2 & 0 & 0 \\ 0 & \lambda \mathcal{J} & 0 \\ 0 & 0 & -\lambda^2 \end{bmatrix}, \qquad \text{resp.} \qquad \Psi \leftrightarrow \begin{bmatrix} I_2 & 0 & 0 \\ 0 & \lambda I_{2n} & 0 \\ 0 & 0 & \lambda^2 \end{bmatrix}, \] depending on the sign of $E_{33}$ and with $\lambda = |E_{33}|^{-1/2}$, we may assume that $E_{33}=1$. After a suitable change of basis in $V_{\tilde M}$, which affects the first column of matrix (\ref{eq:14}) only, we may assume that $E_{11}=I_2$. It is important to observe that this change of basis can be achieved without changing the values of $p_k$. 
This is clear when $p_1 =0$. On the other hand, suppose that $p_1 =1$. Now if $E_{11}= \left [ \begin{smallmatrix} e_{11} & e_{12} \\ e_{21} & e_{22} \end{smallmatrix} \right ]$, then $\Phi(M_k) =e_{1k} \tilde M_1 + e_{2k} \tilde M_2 + H_k$ for some $H_k \in V_H$ and it follows that for all $z \in \mathbb{R}$, \begin{equation}\label{eq:15} \left [ \Phi(M_k), \Phi(Z_z) \right ] = e_{1k} \left [ \tilde M_1 ,Z_{E_{33}z} \right ] + e_{2k} \left [ \tilde M_2 ,Z_{E_{33}z} \right ] + \left[ H_k, Z_{E_{33}z} \right ] = e_{1k} Z_{E_{33}z} \end{equation} while also \begin{equation}\label{eq:15a} \left [ \Phi(M_k), \Phi(Z_z) \right ] = \Phi\left ( \left [ M_k, Z_z \right ] \right ) = \Phi( \delta_{1,k} Z_z ) = \delta_{1,k} Z_{E_{33}z}. \end{equation} Comparing these two equations we obtain that \[ e_{11}=\delta_{1,1}=1, \qquad e_{12}=\delta_{1,2}=0. \] Now replacing $\tilde M_1$ with $\tilde M_1 + e_{21} \tilde M_2$ and then scaling $\tilde M_2$ we arrive at $E_{11}=I_2$, without changing the values of $ p_k$. The isomorphism $\Phi$ now has the form \[ \Phi \leftrightarrow \begin{bmatrix} I_2 & 0 & 0 \\ E_{21} & E_{22} & 0 \\ E_{31} & E_{32} & 1 \end{bmatrix} \] with $E_{22} \in GL_{2n}(\mathbb{R})$. It is easy to verify that a linear isomorphism determined by such a matrix preserves the Lie brackets if and only if \begin{gather} \llbracket E_{22} w, E_{22} \tilde w \rrbracket = \llbracket w,\tilde w \rrbracket \label{eq:22} \\ \tilde C_k = E_{22} C_k E_{22}^{-1} \label{eq:23}\\ \tilde C_1 E^{(2)}_{21} = \tilde C_2 E^{(1)}_{21} \notag \\ p_1 E^{(2)}_{31} + \llbracket E^{(1)}_{21}, E^{(2)}_{21} \rrbracket = p_2 E^{(1)}_{31} \notag \\ \llbracket E^{(k)}_{21}, w \rrbracket = E_{32}E_{22}^{-1} w \notag \end{gather} for $k=1, 2$ and $w, \tilde w \in \mathbb{R}^{2n}$, with $E^{(k)}_{21}$ and $E^{(k)}_{31}$ denoting the $k$-th columns of the matrices $E_{21}$ and $E_{31}$, respectively. 
These identities remain valid if we modify $\Phi$ so that $E_{21}=E_{31}=E_{32}=0$, that is \begin{equation}\label{eq:23:b} \Phi \leftrightarrow \begin{bmatrix} I_2 & 0 & 0 \\ 0 & E_{22} & 0 \\ 0 & 0 & 1 \end{bmatrix}. \end{equation} Choosing $S=E_{22}$, the identities (\ref{eq:22}) and (\ref{eq:23}) now yield the assertion. \end{proof} We now investigate properties of Lie algebra isomorphisms between two normalized Lie algebras. Any isomorphism $ \Phi: \mathfrak{g}_{p,B} \mapsto \mathfrak{g}_{p,\tilde B}$ between two Lie algebras of this type can be represented in matrix form as \begin{equation}\label{eq:29c} \Phi \leftrightarrow \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14}\\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34}\\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix}, \end{equation} by using the decomposition $\mathfrak{g}_{p,B} =V_{M_1} \oplus V_{M_2} \oplus V_W \oplus V_Z$. Our goal is to show that $a_{13}=a_{23}=a_{14}=a_{24}=0$, which guarantees that $\Phi$ maps $V_H$ onto $V_H$. We begin with the following observation. \begin{lem}\label{lem2} Let $ \Phi: \mathfrak{g}_{p,B} \mapsto \mathfrak{g}_{p,\tilde B} $ be a Lie algebra isomorphism which has the matrix representation \begin{equation}\label{eq:30} \Phi \leftrightarrow \begin{bmatrix} a_{11} & 0 & 0 & 0\\ 0 & a_{22} & a_{23} & 0 \\ a_{31} & a_{32} & a_{33} & 0\\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix}. \end{equation} Then $a_{23}=0$. \end{lem} \begin{proof} Since $\Phi$ maps the ideal $V_Z$ onto $V_Z$, it factors to a Lie algebra isomorphism $\hat \Phi : \mathfrak{h} = \mathfrak{g}_{p,B}/V_Z \simeq V_{M_1} \oplus V_{M_2} \oplus V_W \mapsto \tilde{\mathfrak{h}}= \mathfrak{g}_{p, \tilde B}/V_Z \simeq V_{\tilde M_1} \oplus V_{\tilde M_2} \oplus V_W$ whose matrix representation is \begin{equation*} \hat \Phi \leftrightarrow \begin{bmatrix} a_{11} & 0 & 0 \\ 0 & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}.
\end{equation*} Let us set $\mathfrak{k}=V_{M_2} \oplus V_W$ and $\tilde{\mathfrak{k}}=V_{\tilde M_2} \oplus V_W$. Then $\hat \Phi$ maps $\mathfrak{k}$ onto $\tilde{\mathfrak{k}}$, and $V_W$ is an abelian ideal of codimension one in $\mathfrak{k}$, respectively $\tilde{\mathfrak{k}}$. We claim that $V_W$ is the unique such ideal. For suppose that $J$ is another abelian ideal of codimension one in $\mathfrak{k}$. Let $U_1,\dots,U_{2n}$ be a basis of $J$. Then each $U_i$ is of the form \[ U_i = \alpha_i M_2 + W_{w_i}, \quad \alpha_i \in \mathbb{R}, \ i=1,\dots,2n. \] If $\alpha_i=0$ for all $i$, the claim is proved; otherwise we may assume, without loss of generality, that $\alpha_1=1$ and $\alpha_i=0$ for all $i \ge 2$. Now since $B_2 \neq 0$, there exist $x_o, y_o \in \mathbb{R}^n$ so that \begin{align*} [ U_1, X_{x_o} ] &= [ M_2 , X_{x_o}] = X_{B_2 x_o} \neq 0\\ [ U_1, Y_{y_o} ] &= [ M_2 , Y_{y_o}] = Y_{-B_2^{T} y_o} \neq 0. \end{align*} Since $J$ is abelian, it follows that $X_{x_o},Y_{y_o} \notin J$, contradicting the assumption that $\text{codim}(J)=1$. This proves the claim. From the claim it follows immediately that $\hat \Phi$ maps $V_W$ onto $V_W$, and hence that $a_{23}=0$. \end{proof} \begin{thm}\label{thm:5} Let $ \Phi: \mathfrak{g}_{p,B} \mapsto \mathfrak{g}_{p,\tilde B}$ be a Lie algebra isomorphism of normalized Lie algebras. Then $\Phi$ maps $V_{H}$ onto $V_{H}$. \end{thm} \begin{proof} We consider five distinct possibilities: $p_1=1$ and $B_2$ is not nilpotent, $p_1=1$ and $B_2$ is nilpotent, $p_1=0$ and neither $B_1$ nor $B_2$ is nilpotent, $p_1=0$ and exactly one of $B_1$ and $B_2$ is nilpotent, and $p_1=0$ and both $B_1$ and $B_2$ are nilpotent. As will be seen below, in each of the five cases, $\mathfrak{g}_{p,B}$ will have a different algebraic structure. Thus, two Lie algebras which are isomorphic via some isomorphism $\Phi$ must belong to the same one of the five cases.
\setitemize[1]{leftmargin=16pt} \setitemize[2]{leftmargin=12pt} \setitemize[3]{leftmargin=12pt} \begin{itemize} \item {\it Case 1: $p_1=1$ and $B_2$ is not nilpotent} Here, $\mathfrak{g}_{p,B}$ has nilradical $V_H$ of dimension $2n+1$. Since $\Phi$ maps nilradical to nilradical, it follows that $\mathfrak{g}_{p,\tilde B}$ has nilradical of dimension $2n+1$ as well, which thus must coincide with $V_H$. That is, $\Phi$ maps $V_H$ onto $V_H$. \item {\it Case 2: $p_1=1$ and $B_2$ is nilpotent} Here, $\mathfrak{g}_{p,B}$ has nilradical $V_{M_2} \oplus V_H$ of dimension $2n+2$. Since $p_1=1$, and the nilradical of $\mathfrak{g}_{p,\tilde B}$ has dimension $2n+2$ as well, it follows that $\tilde B_2$ is nilpotent and the nilradical of $\mathfrak{g}_{p,\tilde B}$ is $V_{\tilde M_2} \oplus V_H$. In addition, as $\Phi$ maps the center $V_Z$ of the nilradical onto the center $V_Z$ of the nilradical, it follows that $\Phi$ has matrix form \begin{equation}\label{eq:38} \Phi \leftrightarrow \begin{bmatrix} a_{11} & 0 & 0 & 0 \\ a_{21} & a_{22} & a_{23} & 0\\ a_{31} & a_{32} & a_{33} & 0\\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix}. \end{equation} Replacing $\tilde M_1$ with a suitable $\tilde M_1 + \beta \tilde M_2$ we may assume that $a_{21}=0$. Applying Lemma \ref{lem2} it follows that $a_{23}=0$, that is, $\Phi$ maps $V_H$ onto $V_H$. \item {\it Case 3: $p_1=0$ and neither $B_1$ nor $B_2$ is nilpotent} Simply apply the argument of case~1. \item {\it Case 4: $p_1=0$ and exactly one of $B_1,B_2$ is nilpotent} Without loss of generality, we may assume that $B_2$ is nilpotent, but $B_1$ is not. Then $\mathfrak{g}_{p,B}$ has nilradical $V_{M_2} \oplus V_H$ of dimension $2n+2$. Since $p_1=0$ and the nilradical of $\mathfrak{g}_{p,\tilde B}$ has dimension $2n+2$, the latter algebra must again belong to case 4, so that replacing $\tilde B_1$ and $\tilde B_2$ by suitable linear combinations, $\mathfrak{g}_{p,\tilde B}$ will have nilradical $V_{\tilde M_2} \oplus V_H$.
The remainder of the argument follows that of case 2. \item {\it Case 5: $p_1=0$ and both $B_1$ and $B_2$ are nilpotent} Here, $\mathfrak{g}_{p,B}$ is itself nilpotent with center $V_Z$. Hence $\mathfrak{g}_{p,\tilde B}$ is also nilpotent and again belongs to case 5. Since $\Phi$ maps center to center, it has the form (\ref{eq:29c}) with $a_{14}=a_{24}=a_{34}=0$. We begin by considering the induced isomorphism $\hat \Phi : \mathfrak{h} \mapsto \tilde{\mathfrak{h}}$, \[ \hat \Phi \leftrightarrow \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}. \] Since $ V_W$ is an ideal of codimension two in $\mathfrak{h}$, $\tilde I:=\hat \Phi(V_W)$ will be an abelian ideal of codimension two in $\tilde{\mathfrak{h}}$, that is, of dimension $2n$. We claim that $\tilde I=V_W$. Suppose to the contrary that $\tilde I \not =V_W$. Denoting by $P_o$ the projection of $\tilde{\mathfrak{h}}$ onto $V_{\tilde M}=V_{\tilde M_1} \oplus V_{\tilde M_2}$ and setting $V_o=P_o(\tilde I)$, we then obtain that $\dim(V_o) \in \{1,2 \}$. \begin{itemize} \item [$\circ$] {\it Subcase 5a: $\dim(V_o)=1$.} Then elements of $\tilde I$ are of the form \[ A = \alpha \tilde M_o + H ,\qquad \alpha \in \mathbb{R}, \ H \in V_W \] for some fixed nonzero $\tilde M_o=\text{diag}(0,\tilde B_o,0) \in V_{\tilde M}$. Fix one such $A$ with $\alpha =1$. Then there exist $x_o,y_o \in \mathbb{R}^n$ so that \begin{align*} [ A, X_{x_o} ] &=[ \tilde M_o, X_{x_o} ]= X_{\tilde B_o x_o} \neq 0 \qquad \text{and} \\ [ A, Y_{y_o} ] &= [ \tilde M_o, Y_{y_o} ] = Y_{-\tilde B_o^{T} y_o} \neq 0. \end{align*} Since $\tilde I$ has codimension two, it follows that $\tilde{\mathfrak{h}}= \tilde I \oplus <X_{x_o},Y_{y_o}>$ where $<X_{x_o},Y_{y_o}>$ denotes $\text{span}(X_{x_o},Y_{y_o})$. In fact, suppose $\alpha X_{x_o} +\beta Y_{y_o} \in \tilde I$ for some scalars $\alpha,\beta$.
Then \[ 0 = \left [ A, \alpha X_{x_o} + \beta Y_{y_o} \right ] = \alpha [ A, X_{x_o} ] + \beta [ A, Y_{y_o} ] = \alpha X_{\tilde B_o x_o} + \beta Y_{-\tilde B_o^{T} y_o} , \] which implies that $\alpha=\beta=0$. Now as $<X_{x_o},Y_{y_o}> \subseteq V_W$ we have \[ V_o = P_o( \tilde I) = P_o( \tilde I \oplus <X_{x_o},Y_{y_o}> ) = P_o( \tilde{\mathfrak{h}}) = V_{\tilde M} \] contradicting the fact that $V_o$ has dimension one. \item [$\circ$] {\it Subcase 5b: $\dim(V_o)=2$.} Then $V_o= V_{\tilde M}$. Note that by nilpotency of $\tilde B_1$ and $\tilde B_2$, all linear combinations $\alpha \tilde B_1 + \beta \tilde B_2$ have nontrivial null space. \begin{itemize} \item [$\diamond$] {\it Subcase 5b-1: there exists $\tilde B_o=\alpha \tilde B_1 + \beta \tilde B_2$ whose null space has dimension $\leq n-2$.} Set $\tilde M_o=\alpha \tilde M_1 + \beta \tilde M_2$ and pick any $A \in \tilde I$ with $P_o(A)=\tilde M_o$. By choice of $\tilde B_o$, there exist two elements $ x_1,x_2 \in \mathbb{R}^n$ with $[ \tilde M_o, X_{x_1} ] = X_{\tilde B_o x_1}$ and $[ \tilde M_o, X_{x_2} ] = X_{\tilde B_o x_2}$ linearly independent. Also, pick $y_1 \in \mathbb{R}^n$ with $[\tilde M_o, Y_{y_1}]= Y_{-\tilde B_o^{T} y_1} \neq 0$. We observe that $\tilde I + <X_{x_1},X_{x_2},Y_{y_1}>$ is a $(2n+3)$-dimensional subspace of $\tilde{\mathfrak{h}}$. In fact, suppose $ \alpha X_{x_1} +\beta X_{x_2} + \gamma Y_{y_1} \in \tilde I$ for some scalars $\alpha,\beta, \gamma$. Then \begin{align*} 0& = \left [ \tilde M_o, \alpha X_{x_1} +\beta X_{x_2} + \gamma Y_{y_1} \right ]\\ &= \alpha [ \tilde M_o, X_{x_1} ] + \beta [ \tilde M_o, X_{x_2} ] + \gamma [ \tilde M_o, Y_{y_1} ] \\ &= \alpha X_{\tilde B_o x_1} + \beta X_{\tilde B_o x_2} + \gamma Y_{-\tilde B_o^{T} y_1} \end{align*} from which it follows that $\alpha=\beta=\gamma=0$. This, however, contradicts the fact that $\tilde{\mathfrak{h}}$ has dimension $2n+2$.
\item [$\diamond$] {\it Subcase 5b-2: the null spaces of all nonzero $ \alpha \tilde B_1 + \beta \tilde B_2$ have dimension $n-1$.} Pick elements $A_1 = \tilde M_1 + H_1$ and $A_2 = \tilde M_2 + H_2$ ($H_1,H_2 \in V_W$) of $\tilde I$. Since \begin{equation}\label{eq:44} \text{ad}( A_i)(X_x)=X_{\tilde B_i x} \qquad \text{and} \qquad \text{ad}(A_i)(Y_y)=Y_{-\tilde B_i^{T} y}, \qquad i=1,2, \end{equation} it follows that $\ker(\text{ad}(A_1))$ and $\ker(\text{ad}(A_2))$ both have codimension at least $2$ in $\tilde{\mathfrak{h}}$. In addition, since $\tilde I$ is abelian, then $\tilde I \subseteq \ker(\text{ad}(A_1)) \cap \ker(\text{ad}(A_2))$. Comparing dimensions, it follows that $ \tilde I = \ker(\text{ad}(A_1)) = \ker(\text{ad}(A_2)) $. Now (\ref{eq:44}) shows that $\ker(\text{ad}(A_i)_{|V_W})$ splits into subspaces $V_{X_o}$ and $V_{Y_o}$ of $V_X$, respectively $V_Y$, of codimension one. Hence we can decompose $V_X$ and $V_Y$ as direct sums \begin{equation}\label{eq:45} V_X = V_{X_o } \oplus < X_{x_o} > , \qquad V_Y = V_{Y_o} \oplus <Y_{y_o}> \end{equation} of subspaces. Here we have chosen the vectors $x_o$ and $y_o$ so that $x_o \perp X_o$ and $y_o \perp Y_o$ in $\mathbb{R}^n$ with respect to the usual inner product. Now since $X_o = \ker(\tilde B_i)$ and also $Y_o = \ker(\tilde B_i^{T})=\text{range}(\tilde B_i)^{\perp}$ ($i=1,2$), it follows that, after expressing the common domain space as $X_o \oplus <x_o>$ and the common range space as $Y_o \oplus <y_o>$, the matrices $\tilde B_i$ take the form \[ \tilde B_i = \begin{bmatrix} 0 & 0 \\ 0 & b_i \end{bmatrix} \] for scalars $b_1$ and $b_2$, contradicting the linear independence of the two matrices. \end{itemize} \end{itemize} Thus, the claim is proved. It follows immediately that $a_{13}=a_{23}=0$. Since $\Phi$ maps center $V_Z$ onto center $V_Z$, then also $a_{14}=a_{24}=a_{34}=0$. That is, $\Phi$ maps $V_H$ onto $V_H$. \end{itemize} This completes the proof.
\end{proof} Combining Theorems \ref{thm:1}, \ref{thm:2}, \ref{thm:5} and Remark \ref{rem:1}, we arrive at: \begin{cor} Two normalized Lie algebras $\mathfrak{g}_{p,B}$ and $\mathfrak{g}_{\tilde{p},\tilde{B}}$ are isomorphic if and only if \begin{enumerate} \item $p=\tilde p$, and \item there exists $S \in Sp(n,\mathbb{R})$ so that, after replacing the matrices $\tilde M_1, \tilde M_2$ with a suitable basis of $V_{\tilde M}$, \begin{equation*} \tilde C_k = S C_k S^{-1}, \qquad k=1, 2, \end{equation*} with $C_k$ and $\tilde C_k$ given as in (\ref{eq:10}). \end{enumerate} \end{cor} Table \ref{tab:1} lists the equivalence classes of all Lie algebras $\mathfrak{g}_{p,B}$ in the lowest dimensions, namely for $n=1,2$. We note that the non-nilpotent cases can also be obtained from the list in \cite{Rubin}. \begin{center} \begin{table}[ht] \caption{Equivalence classes of $\mathfrak{g}_{p,B}$ for $n=1,2$.} \label{tab:1} \[ \begin{array}{l|ccll} \hline & B_1 & B_2 & \text{Range of parameters} & \text{Remarks} \\ \hline \hline n=1 &&&\\ \quad p=0 & \text{---} & \text{---} & & \text{none exists}\\[8pt] \quad p=1 & \begin{bmatrix} \frac{1}{2} \end{bmatrix} & \begin{bmatrix} 1 \end{bmatrix} & \\[8pt] \hline n=2 &&&\\ \quad p=0 & \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} & \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} & \\[16pt] & \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} & \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} & & \text{$B_2$ is nilpotent} \\[16pt] & \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} & \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} & \\[20pt] \quad p=1 & \begin{bmatrix} \frac{1}{2} & 0 \\ 0 & b \end{bmatrix} & \begin{bmatrix} 1 & 0 \\ 0 & d \end{bmatrix} & \begin{array}{l} b > \frac{1}{2}, \ 0 \leq |d| \leq 1 \\[3pt] b = \frac{1}{2}, \ 0 \leq d \leq 1 \end{array} & \\[16pt] & \begin{bmatrix} \frac{1}{2} & 1 \\ 0 & \frac{1}{2} \end{bmatrix} & \begin{bmatrix} 1 & d \\ 0 & 1 \end{bmatrix} & d \ge 0& \\[16pt] & \begin{bmatrix} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} 
\end{bmatrix} & \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} & \\[16pt] & \begin{bmatrix} a & 0 \\ 0 & a \end{bmatrix} & \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} &a \ge \frac{1}{2} & \text{$B_2$ is nilpotent} \\[16pt] & \begin{bmatrix} a & 0 \\ 0 & a \end{bmatrix} & \begin{bmatrix} c & 1 \\ -1 & c \end{bmatrix} &a \ge \frac{1}{2} , \ c \ge 0 & \\[16pt] & \begin{bmatrix} \frac{1}{2} & b \\ -b & \frac{1}{2} \end{bmatrix} & \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} &b > 0 & \text{case $b=0$ covered above} \\[12pt] \hline \hline \end{array} \] \end{table} \end{center} \section{Representations of the Groups $G_{p,B}$} In this section we show that the groups $G_{p,B}$ can be represented as subgroups of both the symplectic group $Sp(n+1,\mathbb{R})$ and the affine group $\text{\it Aff}(n+1)$. Thus, they possess both a metaplectic and a wavelet representation. We also show that the metaplectic representation is equivalent to a sum of two copies of a subrepresentation of the wavelet representation. \subsection{Preliminaries} \textbf{Notation.} Throughout, symbols $ x ,y$ will denote vectors in Euclidean space $\mathbb{R}^n$ written as column vectors, while symbols $ \xi,\eta $ will denote elements in the Euclidean space written as row vectors. For ease of distinction, we denote the space of row vectors by $\widehat{\mathbb{R}^n}$. The transpose of a vector or matrix $x$ is denoted by $\trp{x}$, hence the inner product in $\mathbb{R}^n$ is $x \cdot y = \trp{y} x $. The \emph{Fourier transform} is given by \[ \hat f ( \xi ) = \int_{\mathbb{R}^n} f( x ) e^{-2i \pi \xi x } \, d x \] for $f \in L^1(\mathbb{R}^n)$ and $ \xi \in \widehat{\mathbb{R}^n}$. The restriction of the map $f \mapsto \hat f$ to $(L^1 \cap L^2)(\mathbb{R}^n)$ extends to a unitary operator $\mathcal F: f \in L^2(\mathbb{R}^n) \mapsto \hat f \in L^2(\widehat{\mathbb{R}^n})$ which is also called the Fourier transform. 
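The way $\mathcal F$ exchanges translation and modulation, made precise for $L^2(\mathbb{R}^n)$ in the conjugation relations of the next paragraphs, has a familiar discrete counterpart that is easy to verify numerically. A minimal sketch (the signal, its length, and the shift are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 64, 5
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Discrete (cyclic) translation by m samples: (T_m f)[k] = f[k - m].
Tf = np.roll(f, m)

# The DFT of the translated signal equals the DFT of f times a modulation
# phase -- the discrete analogue of conjugating translation by F.
lhs = np.fft.fft(Tf)
rhs = np.exp(-2j * np.pi * m * np.arange(N) / N) * np.fft.fft(f)
assert np.allclose(lhs, rhs)
```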
\emph{Translation} and \emph{modulation} define two unitary representations of $\mathbb{R}^n$ on $L^2(\mathbb{R}^n)$ by \[ (T_xf)(y)=f(y-x) \qquad \text{and} \qquad (E_{x} f)(y) = e^{2i\pi \trp{x} y } f(y), \] and the corresponding operators on $L^2(\widehat{\mathbb{R}^n})$ are defined similarly, \[ (\hat T_{x}g)(\xi)=g(\xi- \trp{x} ) \qquad \text{and} \qquad (\hat E_{x} g)(\xi) = e^{2i\pi \xi x } g(\xi), \] for $x,y \in \mathbb{R}^n$, $\xi\in \widehat{\mathbb{R}^n}$, $f \in L^2(\mathbb{R}^n)$ and $g \in L^2(\widehat{\mathbb{R}^n})$. The natural representations of $GL_n(\mathbb{R})$ on these spaces are given by left and right \emph{dilation}, respectively, \[ (S_a f)(y) = | \det a|^{-1/2} f(a^{-1}y) \qquad \text{and} \qquad (\hat S_a g)(\xi) = | \det a|^{1/2} g(\xi a ), \] for $a \in GL_n(\mathbb{R})$. Observe that \begin{equation}\label{eq;conj} \hat E_{-x}=\mathcal{F} T_x \mathcal{F}^{-1}, \quad \hat T_{x} = \mathcal{F} E_{x} \mathcal{F}^{-1} \quad \text{and} \quad \hat S_a = \mathcal{F} S_a \mathcal{F}^{-1}. \end{equation} By isomorphism of groups we will always mean an isomorphism of topological groups. \medskip\noindent \textbf{The affine group and the wavelet representation.} The \emph{affine group} $\textit{Aff}(n,\mathbb{R})$ is the group formed by the invertible linear transformations and translations in Euclidean space. It takes the form of a semi-direct product $\mathbb{R}^n \rtimes_{\alpha} GL_n(\mathbb{R})$, where the action $\alpha$ is simply matrix multiplication, $ \alpha_{a}(x) = ax$ for $x \in \mathbb{R}^n$, $a \in GL_n(\mathbb{R})$. Thus the group operation is \[ (x,a) (\tilde x, \tilde a)=(x+a \tilde x, a \tilde a) \] for $(x,a),(\tilde x,\tilde a) \in \textit{Aff}(n,\mathbb{R})$. 
If $H$ is a closed subgroup of $GL_n(\mathbb{R})$, then the corresponding subgroup of the affine group can be represented as the matrix group \[ \mathbb{R}^n \rtimes_{\alpha} H \cong \left \{ \, \begin{bmatrix} a & x \\ 0 & 1 \end{bmatrix} \, : \, x \in \mathbb{R}^n, \ a \in H \, \right \} \subset GL_{n+1}(\mathbb{R}). \] There is a natural unitary representation $\pi$ of such subgroups on $L^2(\mathbb{R}^n)$, called the \emph{wavelet representation} and determined by translations and left dilations, \[ \qquad\qquad\pi(x,a) = T_x S_a, \qquad (x,a) \in \mathbb{R}^n \rtimes_{\alpha} H . \] Conjugating by the Fourier transform, (\ref{eq;conj}) yields an equivalent representation $\hat \pi$ on $L^2(\widehat{\mathbb{R}^n})$ given by \begin{equation}\label{rep:wav2} \qquad \hat \pi(x,a) = \hat E_{-x} \hat S_a, \qquad (x,a) \in \mathbb{R}^n \rtimes_{\alpha} H . \end{equation} \medskip\noindent \textbf{The symplectic group and the metaplectic representation.} The symplectic group and its representations have been extensively studied in \cite{Folland} and \cite{Grochenig}. The matrix $\mathcal{J}$ is one of its elements, and the additive group $\textit{Sym}(n,\mathbb{R})$ of symmetric $n \times n$ matrices as well as $GL_n(\mathbb{R})$ are naturally embedded in $Sp(n,\mathbb{R})$ in the form of the closed subgroups \begin{equation}\label{eq:NL} \begin{split} N = \left\{ \mathcal{N}_m := \begin{bmatrix} I_n & 0 \\ m & I_n \\ \end{bmatrix} : \, m \in \textit{Sym}(n,\mathbb{R}) \right \}, \quad L = \left\{ \mathcal{L}_a := \begin{bmatrix} a & 0 \\ 0 & \trp{ (a^{- 1}) } \\ \end{bmatrix} : \, a \in GL_n(\mathbb{R}) \right\}, \end{split} \end{equation} and $Sp(n, \mathbb{R})$ is generated by $L \cup N \cup \left\{ \mathcal{J} \right\}$. 
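As a quick sanity check of the matrix realization of $\mathbb{R}^n \rtimes_{\alpha} H$ above, the following sketch (random data, $n=3$) verifies that matrix multiplication of the embedded pairs reproduces the semidirect-product law $(x,a)(\tilde x,\tilde a)=(x+a\tilde x, a\tilde a)$:

```python
import numpy as np

def aff(x, a):
    """Embed the affine pair (x, a) in GL_{n+1} as the block matrix [[a, x], [0, 1]]."""
    n = len(x)
    g = np.eye(n + 1)
    g[:n, :n] = a
    g[:n, n] = x
    return g

rng = np.random.default_rng(1)
n = 3
a, at = rng.standard_normal((n, n)), rng.standard_normal((n, n))
x, xt = rng.standard_normal(n), rng.standard_normal(n)

# Matrix multiplication of the embeddings reproduces the group law.
prod = aff(x, a) @ aff(xt, at)
assert np.allclose(prod, aff(x + a @ xt, a @ at))
```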
There is a projective representation $\mu$ of $Sp(n,\mathbb{R})$ on $L^2(\mathbb{R}^n)$ called the \emph{metaplectic representation} which, for the three types of generating matrices, is given by \begin{align*} \mu \left(\mathcal{L}_a \right) = S_a, \qquad \mu \left(\mathcal{N}_m \right) = U_m, \qquad \mu (-\mathcal{J}) = \left( { - i} \right)^{n/2} \mathcal{F}, \end{align*} where $U_m$ is a \emph{chirp}, \[ \, (U_m f)( q ) = e^{i \pi \trp{q} m q }f( q ) \, \] for $f \in L^2(\mathbb{R}^n)$ and $ q \in \mathbb{R}^n$. \medskip \noindent \textbf{Subgroups of the symplectic group which possess a wavelet representation.} We next consider a class of subgroups of $Sp(n,\mathbb{R})$ which arise as semidirect products of a vector group with a group of dilations. There is a natural linear action $\alpha$ of $GL_n(\mathbb{R})$ on the vector space $\textit{Sym}(n,\mathbb{R})$ given by \begin{equation}\label{motiv:act} \alpha_a(m) = \trp{(a^{-1})} m a^{-1} \qquad\qquad \big (a \in GL_n(\mathbb{R}), \ m \in Sym(n,\mathbb{R}) \big ). \end{equation} Let $E$ be a closed subgroup of $GL_n(\mathbb{R})$ and $M$ an $E$-invariant linear subspace of $\textit{Sym}(n,\mathbb{R})$. As can be seen from (\ref{eq:NL}), $M$ and $E$ are isomorphic to closed subgroups of $Sp(n,\mathbb{R})$, and the action $\alpha$ is implemented by conjugation under this isomorphism, \[ \mathcal L_a \mathcal N_m \mathcal L_a ^{-1} = \mathcal N_{ \trp{(a^{-1})} m a^{-1}}. \] Consequently, the semidirect product $M \rtimes_{\alpha} E$ is isomorphic to a closed subgroup of $Sp(n,\mathbb{R})$, \begin{equation}\label{eq:meta:mat} M \rtimes_{\alpha} E \cong K:= \left \{ \mathcal N_m \mathcal L_a = \begin{bmatrix} a & 0 \\ m a & \trp{( a^{-1} )} \end{bmatrix}\ : \ m \in M, \ a \in E \right \} . 
\end{equation} The restriction of the metaplectic representation to $K$, which we simply call the metaplectic representation of $M \rtimes_{\alpha} E$, is given by \begin{equation}\label{eq3301} \qquad \mu(m,a) := \mu ( \mathcal N_m \mathcal L_a) = U_m S_a, \qquad\qquad ( \, (m,a) \in M \rtimes_{\alpha} E \, ), \end{equation} and it is a proper representation, that is, a group homomorphism. \medskip The groups $M \rtimes_{\alpha} E$ have a wavelet representation as well. In fact, identify $M$ with Euclidean space $\mathbb{R}^d$ by fixing a basis. Since the action $\alpha$ is by invertible linear transformations, there exists a continuous homomorphism $\varphi : a \mapsto h_a$ of $E$ onto a (not necessarily closed) subgroup $H$ of $GL_d(\mathbb{R})$ satisfying \[ \alpha_a(m) = h_a m \qquad (m \in \mathbb{R}^d, \ a \in E ) ,\] which naturally extends to a group homomorphism $\varphi$ of $M \rtimes_{\alpha} E$ onto the subgroup $\mathbb{R}^d \rtimes_{\alpha} H$ of $\textit{Aff}(d,\mathbb{R})$ by \begin{equation}\label{eq:phi} \ \varphi(m,a) = (m,h_a). \ \end{equation} (For ease of notation, we will denote these semi-direct products simply by $M \rtimes E$ and $\mathbb{R}^d \rtimes H$.) Now composition of the homomorphism $\varphi$ with the wavelet representation (\ref{rep:wav2}) in Fourier space yields a wavelet representation of $M \rtimes E$ on $L^2(\widehat{\mathbb{R}^d})$, also denoted by $\hat\pi$, and given by \begin{equation}\label{eq:metwav} \hat\pi( m,a) = \hat E_{-m} \hat S_{h_a}. \end{equation} \subsection{The groups $G_{p,B}$ are subgroups of $Sp(n+1,\mathbb{R})$ and $A\!f\!f\!(n+1,\mathbb{R})$} We now show that each group $G_{p,B}$ can be represented as a subgroup of the form $M \rtimes E $ of the symplectic group, and $\mathbb{R}^{n+1} \rtimes H$ of the affine group as discussed above. We will impose the assumptions (M1) and (M2) of Section 2, which ensure that each group $G_{p,B}$ can be represented as a matrix group of the form (\ref{eq:8}). 
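The two algebraic facts underlying this construction, that the embeddings (\ref{eq:NL}) are symplectic and that conjugation implements the action (\ref{motiv:act}), can be checked numerically. The sketch below assumes the standard choice $\mathcal J = \left[\begin{smallmatrix} 0 & I_n \\ -I_n & 0 \end{smallmatrix}\right]$ for the symplectic form; the paper fixes its matrix $\mathcal J$ earlier, possibly with the opposite sign, which changes neither check.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
I, Z = np.eye(n), np.zeros((n, n))
J = np.block([[Z, I], [-I, Z]])          # assumed convention for the form J

def N_of(m):
    """Embedding of a symmetric matrix m, as in the subgroup N."""
    return np.block([[I, Z], [m, I]])

def L_of(a):
    """Embedding of a in GL_n, as in the subgroup L."""
    return np.block([[a, Z], [Z, np.linalg.inv(a).T]])

m = rng.standard_normal((n, n)); m = m + m.T     # symmetric
a = rng.standard_normal((n, n)) + n * I          # generically invertible

# Both embeddings are symplectic: S^T J S = J.
for S in (N_of(m), L_of(a)):
    assert np.allclose(S.T @ J @ S, J)

# Conjugation implements alpha_a(m) = (a^{-1})^T m a^{-1}:
# L_a N_m L_a^{-1} = N_{alpha_a(m)}.
ai = np.linalg.inv(a)
assert np.allclose(L_of(a) @ N_of(m) @ np.linalg.inv(L_of(a)), N_of(ai.T @ m @ ai))
```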
From now on, $M$ will denote the $n+1$ dimensional vector subspace of $\textit{Sym}(n+1,\mathbb R)$, \begin{equation}\label{eq:101} M = \left\{ {m(z, x): = \left[ {\begin{array}{*{20}c} -z & { -x^T } \\ { -x} & 0 \\ \end{array} } \right]\,:\, x \in \mathbb{R}^n , \ z \in \mathbb{R}} \right\} \end{equation} and $E=E_{p,B}$, the closed subgroup of $GL_{n+1}(\mathbb R)$, \begin{equation*} E_{p,B} = \left\{ a(t, y ) : = \begin{bmatrix} 1 & 0 \\ - \frac{1}{2} y & I_n \end{bmatrix} \begin{bmatrix} e^{-pt/2} & 0 \\ 0 & e^{pt/2} \left [e^{-Bt}\right ]^{T} \end{bmatrix} \ : \ t \in \mathbb{R}^2, \ y \in \mathbb{R}^n \right\}. \end{equation*} The group law in $E_{p,B}$ is \begin{equation}\label{sec5:lawd} a(t,y) a( \tilde t, \tilde y )= a(t+\tilde t,y + e^{pt/2} \left [e^{-Bt}\right ]^{T} \tilde y) . \end{equation} Now $M$ is invariant under the $E_{p,B}$-action (\ref{motiv:act}), in fact \begin{equation}\label{eq:104} \alpha_{a(t,y)}(m(z,x))=(a(t,y)^{-1})^T m(z,x) a(t,y)^{-1} = m(e^{pt}z + y^T e^{Bt}x, e^{Bt}x). \end{equation} By (\ref{eq:meta:mat}), the semi-direct product $ M \rtimes E_{p,B}$ can be identified with a closed subgroup of $Sp(n+1,\mathbb{R})$, \[ M \rtimes E_{p,B} \cong K_{p,B} := \left\{ k(t, x, y,z) = \begin{bmatrix} a(t, y) & 0 \\ m(z, x)a(t, y) \ & \left( a(t, y) ^{-1} \right )^T \end{bmatrix}\ :\ z \in \mathbb{R},\, t \in \mathbb{R}^2, \, x, y \in \mathbb{R}^n \right \} \] with the group law \begin{equation*} k(t, x, y, z) \, k(\tilde t, \tilde x, \tilde y, \tilde z) = k(t + \tilde t, x + e^{Bt} \tilde x, y + e^{pt} \left [ e^{- B t}\right ]^{T}\tilde y,z + e^{pt} \tilde z + y^T e^{Bt}\tilde x) \end{equation*} which is the same as the group law (\ref{eq:4}) of $G_{p,B}$. It is now easy to see that the matrix groups $G_{p,B}$ and $K_{p,B}$ are isomorphic. Next we compute the homomorphism $\varphi : M \rtimes E_{p,B} \to \mathbb{R}^{n+1} \rtimes H$ of (\ref{eq:phi}). 
By (\ref{eq:101}), the vector space $M$ is naturally identified with $\mathbb{R}^{n+1}$ via the map $ m(z,x) \mapsto \left ( \begin{smallmatrix} z \\ x \end{smallmatrix} \right )$. Equation (\ref{eq:104}) now shows that under this identification, \[ h_{a(t,y)} \begin{pmatrix} z \\ x \end{pmatrix} = \begin{pmatrix} e^{pt} z + y^{T} e^{Bt} x \\[2pt] e^{Bt}x \end{pmatrix}, \] so that \[ H = H_{p,B} = \left \{ h_{a(t,y)} = \begin{bmatrix} e^{pt} & y^{T} e^{Bt} \\ 0 & e^{Bt} \end{bmatrix} : \, t \in \mathbb{R}^2, \ y \in \mathbb{R}^n \, \right \}. \] We observe that by assumptions (M1)--(M2), this group is closed in $GL_{n+1}(\mathbb{R})$, and the map $ \varphi : E_{p,B} \to H_{p,B}$ is an isomorphism of matrix groups. Hence, \begin{align*} G_{p,B} \cong M \rtimes E_{p,B} \cong \mathbb{R}^{n+1} \rtimes H_{p,B} &= \left \{ (m,h_a) : m \in \mathbb{R}^{n+1}, \, h_a \in H_{p,B} \right \} \\[4pt] &\cong \left \{ \, \begin{bmatrix} h_{a(t,y)} & \left ( \begin{smallmatrix} z \\ x \end{smallmatrix} \right ) \\[2pt] 0 & 1 \\ \end{bmatrix} \ : \ z \in \mathbb{R}, \, t \in \mathbb{R}^2, \, x,y \in \mathbb{R}^n \, \right\}, \end{align*} which is a closed subgroup of $\textit{Aff}(n+1,\mathbb{R})$. \subsection{The symplectic and wavelet representations of the groups $G_{p,B}$} By (\ref{eq:metwav}), the wavelet representation of $G_{p,B} \cong \mathbb{R}^{n+1} \rtimes H_{p,B}$ in Fourier space is given by \[ \hat \pi \bigl ( \, g( t, x, y, z ) \, \bigr ) = \hat E_{- \left (\begin{smallmatrix} z \\ x \end{smallmatrix} \right )} \hat S_{h_{a(t,y)}}, \] that is, \begin{equation}\label{eq:wav:2} \left [ \hat \pi \bigl ( g ( t, x, y, z ) \bigr )f \right]( r , \xi ) = \delta(t)^{1/2} \, e^{pt/2} e^{-2i\pi( rz + \xi x )} f \left ( r e^{pt} , ( r \trp{y} + \xi ) e^{Bt} \right ) \end{equation} for $f \in L^2( \widehat{\mathbb{R}^{n+1}})$, $r \in \mathbb{R}$, $\xi \in \widehat{\mathbb{R}^n}$ and $\delta(t) = \det\left ( {e^{Bt}} \right )=e^{\textit{tr}(Bt)}$. 
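The action (\ref{eq:104}) and the matrix form of $h_{a(t,y)}$ derived above admit a direct numerical check. Only the positive scalar $r=e^{pt/2}$ and the invertible matrix $A=e^{Bt}$ enter these formulas, so the sketch below (with $n=2$ and generic random values) treats them as free parameters rather than computing matrix exponentials:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2
r = np.exp(0.7)                                  # stands for e^{pt/2}
A = rng.standard_normal((n, n)) + 3 * np.eye(n)  # stands for e^{Bt}, generically invertible
x, y = rng.standard_normal(n), rng.standard_normal(n)
z = rng.standard_normal()

def m_of(z, x):
    """The symmetric matrix m(z, x): [[-z, -x^T], [-x, 0]]."""
    return np.block([[np.array([[-z]]), -x[None, :]],
                     [-x[:, None], np.zeros((n, n))]])

# a(t, y): unipotent factor times diagonal factor, as in the definition of E_{p,B}.
U = np.block([[np.array([[1.0]]), np.zeros((1, n))],
              [-0.5 * y[:, None], np.eye(n)]])
D = np.block([[np.array([[1 / r]]), np.zeros((1, n))],
              [np.zeros((n, 1)), r * np.linalg.inv(A).T]])
a = U @ D
ai = np.linalg.inv(a)

# The action: (a^{-1})^T m(z,x) a^{-1} = m(e^{pt} z + y^T e^{Bt} x, e^{Bt} x).
lhs = ai.T @ m_of(z, x) @ ai
assert np.allclose(lhs, m_of(r**2 * z + y @ A @ x, A @ x))

# Under m(z,x) -> (z, x), the same action is given by the matrix h_{a(t,y)}.
h = np.block([[np.array([[r**2]]), (y @ A)[None, :]],
              [np.zeros((n, 1)), A]])
z_new, x_new = -lhs[0, 0], -lhs[1:, 0]           # read (z', x') off the conjugated matrix
assert np.allclose(h @ np.concatenate(([z], x)),
                   np.concatenate(([z_new], x_new)))
```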
Clearly, $\widehat{ \mathbb{R}^{n+1}}$ decomposes measurably into the two $H_{p,B}$-invariant open half spaces \[ \mathcal{O}_+ = \{ (r,\xi ) : r>0 \} \qquad \text{and} \qquad \mathcal{O}_- = \{ (r,\xi ) : r<0 \} . \] It thus can be seen from (\ref{eq:wav:2}) that $L^2(\mathcal{O}_{+})$ and $L^2(\mathcal{O}_{-})$ are both $\hat \pi$-invariant subspaces of $L^2( \widehat{\mathbb{R}^{n+1}})$ and consequently, the wavelet representation $\hat \pi$ splits into the direct sum $\hat \pi = \hat \pi_+ \oplus \hat \pi_{-}$ of the two subrepresentations $\hat \pi_{\pm}$ obtained by restricting $\hat \pi$ to these two invariant subspaces. Similarly, by (\ref{eq3301}), the metaplectic representation of the group $G_{p,B} \cong M \rtimes E_{p,B}$ is given by \[ \mu\left (\, g(t, x ,y,z)\, \right ) = U_{m(z, x )} S_{a(t,y)} . \] Since for each vector $q=\left ( \begin{smallmatrix} u \\ v \end{smallmatrix} \right ) \in \mathbb{R}^{n+1}$, $u \in \mathbb{R}$, $v \in \mathbb{R}^n$ we have \begin{equation}\label{eq:83} \trp{q} m(z,x) q = (u, \trp{v}) m(z, x ) \left ( \begin{smallmatrix} u \\ v \end{smallmatrix} \right ) = -(u^2 z+ 2u \trp{v} x ) , \end{equation} it follows that \begin{equation}\label{group:meta1} \begin{split} \mu\bigl (\, g(t, x ,y,z)\, \bigr ) f \left ( \begin{matrix} u \\ v \end{matrix} \right ) = \delta(t)^{1/2} e^{pt(1-n)/4} e^{- i \pi (u^2 z+ 2 u \trp{v} x )} f\begin{pmatrix} e^{pt/2}u \\ e^{-pt/2} \trp{\left [e^{Bt} \right] } ( \frac{u}{2} y + v ) \end{pmatrix} \end{split} \end{equation} for $f \in L^2(\mathbb{R}^{n+1})$, with $\delta(t) = \det{\left (e^{Bt}\right )}$. Clearly, $\mathbb{R}^{n+1}$ splits measurably into two $E_{p,B}$-invariant open half spaces \[ \mathcal{U}_+ = \{ \left ( \begin{smallmatrix} u \\ v \end{smallmatrix} \right ) \ : \ u > 0 \} \quad \text{and} \quad \mathcal{U}_{-} = \{ \left ( \begin{smallmatrix} u \\ v \end{smallmatrix} \right ) \ : \ u< 0 \}. 
\] It can be seen from (\ref{group:meta1}) that $L^2(\mathcal{U}_{+})$ and $L^2(\mathcal{U}_{-})$ are both $\mu$-invariant subspaces of $L^2( \mathbb{R}^{n+1})$. Hence, $\mu$ splits into the direct sum $\mu = \mu_+ \oplus \mu_{-}$ of the two subrepresentations $\mu_{\pm}$ obtained by restricting $\mu$ to each of the two invariant subspaces $L^2(\mathcal{U}_{\pm})$. We next obtain a connection between these representations, by employing the techniques developed in \cite{Cordero,DMari,Namngam2}. \begin{prop} The subrepresentations $\mu_+$ and $\mu_-$ are both equivalent to $\hat \pi_+$. \end{prop} \begin{proof} Observe that for each $q \in \mathbb{R}^{n+1}$, the map \[ \begin{pmatrix} z \\ x \end{pmatrix} \mapsto q^T m(z,x) q \] defines a linear functional on $\mathbb{R}^{n+1}$. Hence there exists a unique $\Psi(q) \in \widehat{\mathbb{R}^{n+1}}$ so that \[ q^T m(z,x) q = - 2 \Psi(q) \begin{pmatrix} z \\ x \end{pmatrix} \] for all $z \in \mathbb{R}$, $x \in \mathbb{R}^n$. In fact, equation (\ref{eq:83}) shows that \[ \Psi(q) = \Psi \! \begin{pmatrix} u \\ v \end{pmatrix} = \Bigl ( \textstyle \frac{1}{2} u^2, u \trp{v} \Bigr ). \] We observe that $\Psi$ is smooth with Jacobian determinant \[ J_{\Psi}\! \begin{pmatrix} u \\ v \end{pmatrix} = u^{n+1} \] which does not vanish on the open half spaces $\mathcal{U}_+$ and $\mathcal{U}_{-}$. In fact, the restrictions of $\Psi$ to these sets constitute diffeomorphisms \[ \Psi_+ : \mathcal{U}_+ \to \mathcal{O}_+ \qquad \text{and} \qquad \Psi_{-} : \mathcal{U}_- \to \mathcal{O}_+ , \] respectively. Furthermore, for $(r, \xi) \in \mathcal{O}_+ \subset \widehat{\mathbb{R}^{n+1}}$ with $r \in \mathbb{R}$, $\xi \in \widehat{\mathbb{R}^n}$ we have \[ \Psi_{\pm}^{-1} (r, \xi) = \begin{pmatrix} \pm \sqrt{2r} \\ \pm \frac{1}{\sqrt{2r}} \, \trp{\xi} \end{pmatrix} \qquad \text{and} \qquad J_{\Psi_{\pm}^{-1}}(r,\xi) =\pm \left ( 2r \right )^{-(n+1)/2} . 
\] It follows that the operators \[ Q_+: L^2(\mathcal{O}_+) \to L^2(\mathcal{U}_+) \qquad \text{and} \qquad Q_-: L^2(\mathcal{O}_+) \to L^2(\mathcal{U}_-) \] defined by \[ \qquad\qquad (Q_{\pm}f)(q) = \left | J_{\Psi}(q) \right |^{1/2} f \left ( \Psi(q) \right ) \qquad\qquad ( f \in L^2(\mathcal{O}_+), q \in \mathcal{U}_\pm ) \] constitute Hilbert space isomorphisms, whose inverses are given by \[ \qquad\qquad (Q_{\pm}^{-1}f)(\eta) = \left | J_{\Psi_{\pm}^{-1}}(\eta) \right | ^{1/2} f \left ( \Psi_{\pm}^{-1}(\eta) \right ) \qquad\qquad ( f \in L^2(\mathcal{U}_\pm), \eta \in \mathcal{O}_+ ). \] We complete the proof by showing that \[ \mu_{\pm} = Q_{\pm} \hat \pi_+ Q_{\pm}^{-1}. \] In fact, for all $f \in L^2(\mathcal{U}_{\pm})$ and $q= \left ( \begin{smallmatrix} u \\ v \end{smallmatrix} \right ) \in \mathbb{R}^{n+1}$ we have \begin{align*} \Bigl [ Q_{\pm} &\hat \pi_+(t,x,y,z) Q_{\pm}^{-1} f \Bigr ] (q) = \left | J_{\Psi} \! \left ( \begin{smallmatrix} u \\ v \end{smallmatrix} \right ) \right |^{1/2} \left [ \hat \pi_+(t,x,y,z) Q_{\pm}^{-1} f \right ] \left ( \Psi \left ( \begin{smallmatrix} u \\ v \end{smallmatrix} \right ) \right ) \\ &= |u|^{(n+1)/2} \delta(t)^{1/2} e^{pt/2} e^{-2i \pi \left ((u^2/2) z + u \trp{v}x \right ) } \left [ Q_{\pm}^{-1} f \right ] \! \left ( \textstyle \frac{u^2}{2} e^{pt}, \bigl ( \frac{u^2}{2}\trp{y} + u\trp{v} \bigr ) e^{Bt} \right ) \\ &= \delta(t)^{1/2} |u|^{(n+1)/2} e^{pt/2} e^{-i \pi \left (u^2 z + 2 u\trp{v}x \right ) } \left | \pm u^2 e^{pt} \right |^{-(n+1)/4} f \! \left ( \begin{matrix} \pm \sqrt{u^2 e^{pt}} \\ \pm \frac{ 1}{\sqrt{ u^2 e^{pt}} } \trp{[ e^{Bt}]} \left (\frac{u^2}{2}y+ uv \right ) \end{matrix} \right ) \\ &= \delta(t)^{1/2} e^{pt(1-n)/4} e^{-i \pi \left (u^2 z + 2 u\trp{v}x \right ) } f \! \left ( \begin{matrix} e^{pt/2} u \\ e^{-pt/2} \trp{[ e^{Bt}]} \left (\frac{u}{2}y+v \right ) \end{matrix} \right ) \end{align*} which is precisely (\ref{group:meta1}). 
\end{proof} It now follows immediately that the metaplectic representation $\mu$ of $G_{p,B}$ is equivalent to the sum of two copies of $\hat \pi_+$, \[ \mu = \mu_+ \oplus \mu_{-} \simeq \hat \pi_+ \oplus \hat \pi_+. \] \bigskip \noindent{\bf Acknowledgements} A.S. is grateful to the Development and Promotion of Science and Technology Talents project (DPST) for its constant support.
1908.10158
\section{Introduction}\label{sec:introduction} Clinical trials often aim to compare the effects of two treatments. To ensure clinical relevance of these comparisons, trials are typically designed to form a comprehensive picture of the treatments by including multiple outcome variables. Collected data about efficacy (e.g. reduction of disease symptoms), safety (e.g. side effects), and other relevant aspects of new treatments are combined into a single, coherent decision regarding treatment superiority. An example of a trial with multiple outcomes is the CAR-B (Cognitive Outcome after WBRT or SRS in Patients with Brain Metastases) study, which investigated an experimental treatment for cancer patients with multiple metastatic brain tumors \cite{schimmel2018}. Historically, these patients have been treated with radiation of the whole brain (Whole Brain Radiation Therapy; WBRT). This treatment is known to damage healthy brain tissue and to increase the risk of (cognitive) side effects. More recently, local radiation of the individual metastases (stereotactic surgery; SRS) has been proposed as a promising alternative that saves healthy brain tissue and could therefore reduce side effects. The CAR-B study compared these two treatments based on cognitive functioning, fatigue, and several other outcome variables \cite{schimmel2018}. Statistical procedures to arrive at a superiority decision have two components: 1) A statistical model for the collected data; and 2) A decision rule to evaluate the treatment in terms of superiority based on the modelled data. Ideally, the combination of these components forms a decision procedure that satisfies two criteria: Decisions should be clinically relevant and efficient. Clinical relevance ensures that the statistical decision rule corresponds to a meaningful superiority definition, given the clinical context of the treatment. 
Commonly used decision rules define superiority as one or multiple treatment difference(s) on the most important outcome, on any of the outcomes, or on all of the outcomes \cite{FDA2017, Murray2016,Sozu2012,Sozu2016}. Efficiency refers to achieving acceptable error rates while minimizing the number of patients in the trial. The emphasis on efficiency is motivated by several considerations, such as small patient populations, ethical concerns, limited access to participants, and other difficulties to enroll a sufficient number of participants \cite{VandeSchoot2020}. In the current paper, we address clinical relevance and efficiency in the context of multiple binary outcomes and propose a framework for statistical decision-making. In trials with multiple outcomes, it is common to use a univariate modeling procedure for each individual outcome and combine these with one of the aforementioned decision rules \cite{FDA2017,Murray2016}. Such decision procedures can be inefficient since they ignore the relationships between outcomes. Incorporating these relations in the modeling procedure is crucial as they directly influence the amount of evidence for a treatment difference as well as the sample size required to achieve satisfactory error rates. A multivariate modeling procedure takes relations between outcomes into account and can therefore be a more efficient and accurate alternative when outcomes are correlated. Another interesting feature of multivariate models is that they facilitate the use of decision rules that combine multiple outcomes in a flexible way, for example via a compensatory mechanism. Such a mechanism is characterized by the property that beneficial effects are given the opportunity to compensate adverse effects. 
The flexibility of compensatory decision-making is appealing, since a compensatory mechanism can be naturally extended with impact weights that explicitly take the clinical importance of individual outcome variables into account \cite{Murray2016}. With impact weights, outcome variables of different importances can be combined into a single decision in a straightforward way. Compensatory rules do not only contribute to clinical relevance, but also have the potential to increase trial efficiency. Effects on individual outcomes may be small (and seemingly unimportant) while the combined treatment effect may be large (and important) \cite{OBrien1984,Tang1989,Pocock1987}, as visualized in Figure \ref{fig:efficiency} for fictive data of the CAR-B study. The two displayed bivariate distributions reflect the effects and their uncertainties on cognitive functioning and fatigue for SRS and WBRT. The univariate distributions of both outcomes overlap too much to clearly distinguish the two treatments on individual outcome variables or a combination of them. The bivariate distributions however clearly distinguish between the two treatments. Consequently, modeling a compensatory treatment effect with equal weights (visualized as the diagonal dashed line) would provide sufficient evidence to consider SRS superior in the presented situation. % \begin{figure}[htbp] \centering \includegraphics[width=0.4\textwidth,keepaspectratio]{Figure1-eps-converted-to-1.jpg}\\ \caption{Separation of two bivariate distributions (diagonally) versus separation of their univariate distributions (horizontally/vertically) for the CAR-B study. The dashed diagonal line represents a Compensatory decision rule with equal weights. 
Each distribution reflects the plausibility of the treatment effects on cognitive functioning and fatigue after observing fictive data.} \label{fig:efficiency} \end{figure} In the current paper, we propose a decision procedure for multivariate decision-making with multiple (correlated) binary outcomes. The procedure consists of two components. First, we model the data with a multivariate Bernoulli distribution, which is a multivariate generalization of the univariate Bernoulli distribution. The model is exact and does not rely on numerical approximations, making it appropriate for small samples. Second, we extend multivariate analysis with a compensatory decision rule to include more comprehensive and flexible definitions of superiority. The decision procedure is based on a Bayesian multivariate Bernoulli model with a conjugate prior distribution. The motivation for this model is twofold. First, the multivariate Bernoulli model is a natural generalization of the univariate Bernoulli model, which intuitively parametrizes success probabilities per outcome variable. Second, a conjugate prior distribution can greatly facilitate computational procedures for inference. Conjugacy ensures that the form of the posterior distribution is known, making sampling from the posterior distribution straightforward. Although Bayesian analysis is well-known to allow for inclusion of information external to the trial by means of prior information \cite{Gelman2013}, researchers who wish not to include prior information can obtain results similar to frequentist analysis. The use of a non-informative prior distribution essentially results in a decision based on the likelihood of the data, such that 1) Bayesian and frequentist (point) estimates are equivalent; and 2) the frequentist p-value equals the Bayesian posterior probability of the null hypothesis in one-sided testing \cite{Marsman2017}. 
Since a (combined) p-value may be difficult to compute for the multivariate Bernoulli model, Bayesian computational procedures can exploit this equivalence and facilitate computations involved in Type I error control \cite{FDA2010,Wilson2019}. The remainder of the paper is structured as follows. In the next section, we present a multivariate approach to the analysis of multiple binary outcomes. Subsequently, we discuss various decision rules to evaluate treatment differences on multiple outcomes. The framework is evaluated in the \textit{\nameref{sec:evaluation}} section, and we discuss limitations and extensions in the \textit{\nameref{sec:discussion}}. \section{A model for multivariate analysis of multiple binary outcomes}\label{sec:analysis} \subsection{Notation} We start the introduction of our framework with some notation. The joint response for patient $i$ in treatment $j$ on $K$ outcomes will be denoted by $\bm{x}_{j,i}=(x_{j,i,1}, \dots, x_{j,i,K})$, where $i \in \{1,\dots, n_j\}$, and $j \in \{E,C\}$ (i.e., Experimental and Control). The response on outcome $k$ is $x_{j,i,k} \in \{0,1\}$ ($0=$ failure, $1=$ success), such that $\bm{x}_{j,i}$ can take on $Q=2^{K}$ different combinations $\{1 \dots 11\}, \{1 \dots 10\}, \dots, \{0 \dots 01\}, \{0 \dots 00\}$. The observed frequencies of each possible response combination for treatment $j$ in a dataset of $n_{j}$ patients are denoted by vector $\bm{s}_{j}$ of length $Q$. The elements of $\bm{s}_{j}$ add up to $n_{j}$, $\sum_{q=1}^{Q} \bm{s}_{j,q}=n_{j}$. Vector $\bm{\theta}_{j}=(\theta_{j,1},\dots,\theta_{j,K})$ reflects success probabilities of $K$ outcomes for treatment $j$ in the population. Vector $\bm{\delta}=(\delta_{1},\dots,\delta_{K})$ then denotes the treatment differences on $K$ outcomes, where $\delta_{k}=\theta_{E,k}-\theta_{C,k}$. 
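This bookkeeping is easy to sketch in code: enumerate the $Q=2^K$ joint response patterns and tally the cell frequencies $\bm{s}_j$ (the raw responses below are hypothetical and for illustration only):

```python
import itertools
import numpy as np

K = 3
# The Q = 2^K joint response patterns, ordered 11...1 down to 00...0.
combos = list(itertools.product((1, 0), repeat=K))
assert len(combos) == 2 ** K

# Hypothetical joint responses x_{j,i} for n_j = 6 patients in one arm.
x = [(1, 1, 0), (1, 1, 0), (0, 0, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)]

# Cell frequencies s_j: the count of each pattern; their sum is n_j.
s = np.array([sum(xi == c for xi in x) for c in combos])
assert s.sum() == len(x)

# The success probabilities theta_{j,k} would be estimated by marginal rates.
theta_hat = np.mean(x, axis=0)
assert theta_hat.shape == (K,)
```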
We use $\bm{\phi}_{j}=(\phi_{j,1 \dots 11}, \phi_{j,1 \dots 10}, \dots, \phi_{j,0 \dots 01}, \phi_{j,0 \dots 00})$ to refer to probabilities of joint responses in the population, where $\phi_{j,q}$ denotes the probability of joint response combination $\bm{x}_{j,i}$ with configuration $q$. Vector $\bm{\phi}_{j}$ has $Q$ elements, and sums to unity, $\sum_{q=1}^{Q} \bm{\phi}_{j,q}=1$. Information about the relation between outcomes $k$ and $l$ is reflected by $\phi_{j,kl}$, which is defined as the sum of those elements of $\bm{\phi}_{j}$ that have the $k^{th}$ and $l^{th}$ elements of $q$ equal to $1$, e.g. $\phi_{j,11}$ for $K=2$. Similarly, marginal probability $\theta_{j,k}$ follows from summing all elements of $\bm{\phi}_{j}$ with the $k^{th}$ element of $q$ equal to $1$. For example, with three outcomes, the success probability of the first outcome is equal to $\theta_{j,1} = \phi_{j,111} + \phi_{j,110} + \phi_{j,101} + \phi_{j,100}$. \subsection{Likelihood} The likelihood of joint response $\bm{x}_{j,i}$ follows a $K$-variate Bernoulli distribution \cite{Dai2013}: % \begin{flalign}\label{eq:mvbern} p(\bm{x}_{j,i}|\bm{\phi}_{j})=& \text{ multivariate Bernoulli}(\bm{x}_{j,i}|\bm{\phi}_{j})&&\\\nonumber =& \phi_{j,1 \dots 11}^{x_{j,1} \times \dots \times x_{j,K}} \phi_{j,1 \dots 10}^{x_{j,1} \times \dots \times x_{j,K-1} (1-x_{j,K})} \times \dots \times&&\\\nonumber &\phi_{j,0\dots 01}^{(1-x_{j,1})\times \dots \times (1-x_{j,K-1})x_{j,K}} \phi_{j,0\dots 00}^{(1-x_{j,1})\times \dots \times (1-x_{j,K})}.% && \end{flalign} % % \noindent The multivariate Bernoulli distribution in Equation \ref{eq:mvbern} is a specific parametrization of the multinomial distribution. 
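A minimal sketch of the bivariate case $K=2$ (the cell probabilities are illustrative): the pmf of Equation \ref{eq:mvbern} simply selects the matching cell probability, and marginalizing one outcome returns a univariate Bernoulli with $\theta_{j,1}=\phi_{j,11}+\phi_{j,10}$, illustrating consistency under marginalization.

```python
import itertools
import numpy as np

# Illustrative joint probabilities phi_q for K = 2, indexed by (x1, x2).
phi = {(1, 1): 0.35, (1, 0): 0.15, (0, 1): 0.25, (0, 0): 0.25}

def mv_bernoulli(x, phi):
    """Bivariate Bernoulli pmf: the K = 2 case of the model's likelihood."""
    x1, x2 = x
    return (phi[1, 1] ** (x1 * x2) * phi[1, 0] ** (x1 * (1 - x2))
            * phi[0, 1] ** ((1 - x1) * x2) * phi[0, 0] ** ((1 - x1) * (1 - x2)))

# The pmf picks out the matching cell probability ...
for x in itertools.product((0, 1), repeat=2):
    assert np.isclose(mv_bernoulli(x, phi), phi[x])

# ... and summing out x2 gives a univariate Bernoulli in x1.
theta1 = phi[1, 1] + phi[1, 0]
for x1 in (0, 1):
    marg = sum(mv_bernoulli((x1, x2), phi) for x2 in (0, 1))
    assert np.isclose(marg, theta1 ** x1 * (1 - theta1) ** (1 - x1))
```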
The likelihood of $n_{j}$ joint responses summarized by cell frequencies in $\bm{s}_{j}$ follows a $Q$-variate multinomial distribution with parameters $\bm{\phi}_{j}$:
%
\begin{flalign}\label{eq:mvmult}
p(\bm{s}_{j}|\bm{\phi}_{j})=& \text{ multinomial}(\bm{s}_{j}|\bm{\phi}_{j})&&\\\nonumber
\propto & \phi_{j,1\dots 11}^{s_{j,1\dots 11}} \phi_{j,1\dots 10}^{s_{j,1\dots 10}} \times \dots \times \phi_{j,0\dots 01}^{s_{j,0\dots 01}} \phi_{j,0\dots 00}^{s_{j,0\dots 00}}.&&
\end{flalign}
Conveniently, the multivariate Bernoulli distribution is consistent under marginalization. That is, marginalizing a $K$-variate Bernoulli distribution with respect to $p$ variables results in a ($K-p$)-variate Bernoulli distribution \cite{Dai2013}. Hence, the univariate Bernoulli distribution is directly related and results from marginalizing ($K-1$) variables. The pairwise correlation between variables $x_{j,k}$ and $x_{j,l}$ is reflected by $\rho_{x_{j,k},x_{j,l}}$ \cite{Dai2013}:
%
\begin{flalign}\label{eq:rho_bibern}
\rho_{x_{j,k},x_{j,l}}=& \frac{\phi_{j,kl}-\theta_{j,k}\theta_{j,l}} {\sqrt{\theta_{j,k}(1-\theta_{j,k})\theta_{j,l}(1-\theta_{j,l})}}.&&
\end{flalign}
%
\noindent This correlation can take any value in the full range, i.e. $-1 \leq \rho_{x_{j,k},x_{j,l}} \leq 1$ \cite{Olkin2015}.
\subsection{Prior and posterior distribution}
A natural choice to model prior information about response probabilities $\bm{\phi}_{j}$ is the Dirichlet distribution, since a Dirichlet prior and multinomial likelihood form a conjugate combination.
The $Q$-variate prior Dirichlet distribution has hyperparameters $\bm{\alpha}_{j}^{0}=( \alpha^{0}_{j,1 \dots 11}, \alpha^{0}_{j,1 \dots 10}, \dots, \alpha^{0}_{j,0 \dots 01}, \alpha^{0}_{j,0 \dots 00} )$:
%
\begin{flalign}\label{eq:dirichlet_prior}
p(\bm{\phi}_{j})=& \text{ Dirichlet}(\bm{\phi}_{j}|\bm{\alpha}^{0}_{j})&&\\\nonumber
\propto& \phi_{j,1\dots 11}^{\alpha^{0}_{j,1\dots 11}-1} \phi_{j,1\dots 10}^{\alpha^{0}_{j,1\dots 10}-1} \times \dots \times \phi_{j,0\dots 01}^{\alpha^{0}_{j,0\dots 01}-1} \phi_{j,0\dots 00}^{\alpha^{0}_{j,0\dots 00}-1},&&
\end{flalign}
%
\noindent where each element of $\bm{\alpha}^{0}_{j}$ should be larger than zero to ensure a proper prior distribution. The posterior distribution of $\bm{\phi}_{j}$ results from multiplying the likelihood and the prior distribution and follows a Dirichlet distribution with parameters $\bm{\alpha}^{n}_{j}=\bm{\alpha}^{0}_{j}+\bm{s}_{j}$:
%
\begin{flalign}\label{eq:dirichlet_posterior}
p(\bm{\phi}_{j}|\bm{s}_{j}) = & \text{Dirichlet}(\bm{\phi}_{j}|\bm{\alpha}^{0}_{j} + \bm{s}_{j})&& \\\nonumber
\propto & \phi_{j,1 \dots 11}^{s_{j,1 \dots 11}} \phi_{j,1 \dots 10}^{s_{j,1 \dots 10}} \times \dots \times \phi_{j,0 \dots 01}^{s_{j,0 \dots 01}} \phi_{j,0 \dots 00}^{s_{j,0 \dots 00}} \times &&\\\nonumber
& \phi_{j,1 \dots 11}^{\alpha^{0}_{j,1 \dots 11}-1} \phi_{j,1 \dots 10}^{\alpha^{0}_{j,1 \dots 10}-1} \times \dots \times \phi_{j,0 \dots 01}^{\alpha^{0}_{j,0 \dots 01}-1} \phi_{j,0 \dots 00}^{\alpha^{0}_{j,0 \dots 00}-1} &&\\\nonumber
\propto& \phi_{j,1 \dots 11}^{\alpha^{n}_{j,1 \dots 11}-1} \phi_{j,1 \dots 10}^{\alpha^{n}_{j,1 \dots 10}-1} \times \dots \times \phi_{j,0 \dots 01}^{\alpha^{n}_{j,0 \dots 01}-1} \phi_{j,0 \dots 00}^{\alpha^{n}_{j,0 \dots 00}-1}.
\end{flalign}
Since prior hyperparameters $\bm{\alpha}^{0}_{j}$ impact the posterior distribution of treatment difference $\bm{\delta}$, specifying them carefully is important.
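The conjugacy in Equation \ref{eq:dirichlet_posterior} makes the posterior update a single vector addition; a minimal sketch with made-up frequencies for $K=2$ (cell order $11, 10, 01, 00$):

```python
import numpy as np

s_j = np.array([16, 8, 6, 10])   # illustrative joint response frequencies, n_j = 40
alpha_0 = np.full(4, 0.01)       # vague but proper prior: all hyperparameters near zero

alpha_n = alpha_0 + s_j          # conjugate Dirichlet update: alpha^n = alpha^0 + s
posterior_mean = alpha_n / alpha_n.sum()
mle = s_j / s_j.sum()            # frequentist maximum likelihood estimate s_q / n_j
# With near-zero prior frequencies, the posterior mean essentially equals the MLE.
```

Replacing `alpha_0` by substantial prior frequencies (e.g. counts from a related historical trial) pulls `posterior_mean` toward the prior, as discussed below.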
Each of the hyperparameters contains information about one of the observed frequencies in $\bm{s}_{j}$ and can be considered a prior frequency that reflects the strength of prior beliefs. Equation \ref{eq:dirichlet_posterior} shows that the influence of prior information depends on prior frequencies $\bm{\alpha}^{0}_{j}$ relative to observed frequencies $\bm{s}_{j}$. When all elements of $\bm{\alpha}^{0}_{j}$ are set to zero, $\bm{\alpha}^{n}_{j} = \bm{s}_{j}$. This (improper) prior specification results in a posterior mean of $E(\phi_{j,q}|\bm{s}_{j}) = \frac{\alpha^{n}_{j,q}}{\sum_{p=1}^{Q} \alpha^{n}_{j,p}}$, which is equivalent to the frequentist maximum likelihood estimate $\hat{\phi}_{j,q} = \frac{s_{j,q}}{\sum_{p=1}^{Q} s_{j,p}}$. To take advantage of this property with a proper non-informative prior, one could specify hyperparameters slightly larger than zero, such that the posterior distribution is essentially completely based on the information in the data (i.e. $\bm{\alpha}^{n}_{j} \approx \bm{s}_{j}$). To include prior information, when available, in the decision, $\bm{\alpha}^{0}_{j}$ can be set to specific prior frequencies to increase their influence on the decision. These prior frequencies may, for example, be based on results from related historical trials. We provide more technical details on prior specification in Appendix \textit{\nameref{app:prior}}. There we also highlight the relation between the Dirichlet distribution and the multivariate beta distribution, and demonstrate that the prior and posterior distributions of $\bm{\theta}_{j}$ are multivariate beta distributions. The final superiority decision relies on the posterior distribution of treatment difference $\bm{\delta}$. Although this distribution does not belong to a known family of distributions, we can obtain samples from it via a two-step transformation of the posterior samples of $\bm{\phi}_{j}$.
First, a sample of $\bm{\phi}_{j}$ is drawn from its known Dirichlet distribution. Next, these draws are transformed to a sample of $\bm{\theta}_{j}$, using the property that joint response probabilities sum to the marginal probabilities. Finally, the samples from the posterior distributions of $\bm{\theta}_{E}$ and $\bm{\theta}_{C}$ are transformed to a sample from the posterior distribution of joint treatment difference $\bm{\delta}$ by subtracting draws of $\bm{\theta}_{C}$ from draws of $\bm{\theta}_{E}$, i.e. $\bm{\delta}=\bm{\theta}_{E}-\bm{\theta}_{C}$. Algorithm \ref{alg:fixed} in Subsection \nameref{sec_sub:implementation} includes pseudocode with the steps required to obtain a sample from the posterior distribution of $\bm{\delta}$.
\section{Decision rules for multiple binary outcomes}\label{sec:decision}
The current section discusses how the model from the previous section can be used to make treatment superiority decisions. Treatment superiority is defined by the posterior mass in a specific subset of the multivariate parameter space of $\bm{\delta}=(\delta_{1},\dots, \delta_{K})$. The complete parameter space will be denoted by $\mathcal{S}=(-1,1)^{K}$, and the superiority space will be denoted by $\mathcal{S}_{Sup}\subset \mathcal{S}$. Superiority is concluded when a sufficiently large part of the posterior distribution of $\bm{\delta}$ falls in superiority region $\mathcal{S}_{Sup}$:
%
\begin{flalign}\label{eq:criterion}
P(\bm{\delta}\in \mathcal{S}_{Sup}|\bm{s}_{E},\bm{s}_{C})>p_{cut},
\end{flalign}
%
\noindent where $p_{cut}$ reflects the decision threshold to conclude superiority. The value of this threshold should be chosen to control the Type I error rate $\alpha$.
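The sampling scheme above can be sketched as follows. The posterior parameters are hypothetical (e.g. the result of a vague prior and roughly $100$ patients per arm with $K=2$ outcomes, cell order $11, 10, 01, 00$), and the superiority region used here is the simple set $\{\delta_1 > 0\}$:

```python
import numpy as np

rng = np.random.default_rng(1)
L = 100_000  # number of posterior draws

alpha_E = np.array([40.0, 30.0, 15.0, 15.0])  # hypothetical alpha^n for treatment E
alpha_C = np.array([20.0, 20.0, 25.0, 35.0])  # hypothetical alpha^n for treatment C

def theta_draws(alpha):
    """Step 1: draw phi ~ Dirichlet(alpha); step 2: sum cells to marginals theta."""
    phi = rng.dirichlet(alpha, size=L)
    theta_1 = phi[:, 0] + phi[:, 1]            # cells 11 + 10
    theta_2 = phi[:, 0] + phi[:, 2]            # cells 11 + 01
    return np.column_stack([theta_1, theta_2])

# Step 3: delta = theta_E - theta_C, one row per posterior draw.
delta = theta_draws(alpha_E) - theta_draws(alpha_C)
# Posterior probability that delta falls in the superiority region {delta_1 > 0}:
p_sup = np.mean(delta[:, 0] > 0)
```

Comparing `p_sup` to a threshold `p_cut` then yields the superiority decision.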
\subsection{Four different decision rules} \begin{figure}[htbp] \centering \begin{subfigure}[c]{0.35\linewidth} \includegraphics[width=\linewidth,keepaspectratio]{Figure2a_Single-eps-converted-to-1.jpg} \caption{Single (outcome $1$)}\label{fig:sup_single} \end{subfigure} \begin{subfigure}[c]{0.35\linewidth} \includegraphics[width=\linewidth,keepaspectratio]{Figure2b_Any-eps-converted-to-1.jpg} \caption{Any}\label{fig:sup_any} \end{subfigure} % \begin{subfigure}[c]{0.35\linewidth} \includegraphics[width=\linewidth,keepaspectratio]{Figure2c_All-eps-converted-to-1.jpg} \caption{All}\label{fig:sup_all} \end{subfigure} % \begin{subfigure}[c]{0.35\linewidth} \includegraphics[width=\linewidth,keepaspectratio]{Figure2d_Compensatory-eps-converted-to-1.jpg} \caption{Compensatory}\label{fig:sup_compensatory} \end{subfigure} \caption{Superiority regions of various decision rules for two outcome variables ($K=2$). The Any rule is a combination of the two Single rules. The Compensatory rule reflects $\bm{w}=(0.5,0.5)$.} \label{fig:superiority} \end{figure} Different partitions of the parameter space define different superiority criteria to distinguish two treatments. The following decision rules conclude superiority when there is sufficient evidence that: % % \begin{enumerate} \item \textit{Single rule:} an a priori specified primary outcome $k$ has a treatment difference larger than zero. The superiority region is denoted by: % % \begin{flalign} \mathcal{S}_{Single (k)}=\{\bm{\delta}| \delta_k>0 \}. \end{flalign} % % \noindent Superiority is concluded when \begin{flalign} P(\bm{\delta} \in \mathcal{S}_{Single (k)}|\bm{s}_{E},\bm{s}_{C}) > p_{cut}. \end{flalign} % \item \textit{Any rule:} at least one of the outcomes has a treatment difference larger than zero. The superiority region is a combination of $K$ superiority regions of the Single rule: % % \begin{flalign} \mathcal{S}_{Any} = & \{\mathcal{S}_{Single_{1}} \cup \dots \cup \mathcal{S}_{Single_{K}}\}. 
\nonumber
\end{flalign}
%
\noindent Superiority is concluded when
\begin{flalign}
\max_{k} P(\bm{\delta} \in \mathcal{S}_{Single (k)}|\bm{s}_{E},\bm{s}_{C}) > p_{cut}.
\end{flalign}
%
\item \textit{All rule:} all outcomes have a treatment difference larger than zero. Similar to the Any rule, the superiority region is a combination of the $K$ superiority regions of the Single rule:
\begin{flalign}
\mathcal{S}_{All}= & \{\mathcal{S}_{Single_{1}} \cap \dots \cap \mathcal{S}_{Single_{K}}\}.\nonumber
\end{flalign}
%
\noindent Superiority is concluded when
\begin{flalign}
\min_{k} P(\bm{\delta} \in \mathcal{S}_{Single (k)}|\bm{s}_{E},\bm{s}_{C}) > p_{cut}.
\end{flalign}
\end{enumerate}
\noindent In addition to facilitating these common decision rules, our framework allows for a Compensatory decision rule:
%
\begin{enumerate}
\setcounter{enumi}{3}
\item \textit{Compensatory rule:} the weighted sum of treatment differences is larger than zero. The superiority region is denoted by:
%
\begin{flalign}
\mathcal{S}_{Compensatory}(\bm{w})=\{\bm{\delta}| \sum_{k=1}^{K} w_{k}\delta_{k}>0\}
\end{flalign}
\begin{conditions}
\bm{w} & $=(w_{1},\dots,w_{K})$ reflect the weights for outcomes $1,\dots,K$,\\
0 & $\leq w_k \leq 1$ and $\sum_{k=1}^K w_{k}=1$.\\
\end{conditions}
%
\noindent Superiority is then concluded when:
\begin{flalign}
P(\bm{\delta} \in \mathcal{S}_{Compensatory}(\bm{w})|\bm{s}_{E},\bm{s}_{C}) > p_{cut}.
\end{flalign}
\end{enumerate}
%
\noindent Figure \ref{fig:superiority} visualizes these four decision rules. From our discussion of the different decision rules, a number of relationships between them can be identified. First, mathematically the Single rule can be considered a special case of the Compensatory rule with weight $w_{k}=1$ for primary outcome $k$ and $w_{l}=0$ for all other outcomes. Second, the superiority region of the All rule is a subset of the superiority regions of the other rules, i.e.
\begin{flalign}
\mathcal{S}_{All} \subset \mathcal{S}_{Single}, \mathcal{S}_{Compensatory}, \mathcal{S}_{Any}.
\end{flalign}
The superiority region of the Single rule is in turn a subset of that of the Any rule, such that
%
\begin{flalign}
\mathcal{S}_{Single} \subset \mathcal{S}_{Any}.
\end{flalign}
%
These properties can be observed in Figure \ref{fig:superiority} and translate directly to the amount of evidence provided by data $\bm{s}_{E}$ and $\bm{s}_{C}$. The posterior probability of the All rule is always smallest, while the posterior probability of the Any rule is at least as large as the posterior probability of the Single rule:
%
\begin{flalign}\label{eq:compare_psup}
P(\mathcal{S}_{Any}|\bm{s}_{E},\bm{s}_{C}) \geq P(\mathcal{S}_{Single}|\bm{s}_{E},\bm{s}_{C}) > P(\mathcal{S}_{All}|\bm{s}_{E},\bm{s}_{C})&&&\\\nonumber
P(\mathcal{S}_{Compensatory}|\bm{s}_{E},\bm{s}_{C}) > P(\mathcal{S}_{All}|\bm{s}_{E},\bm{s}_{C}).&&&
\end{flalign}
%
\noindent The ordering of the posterior probabilities of different decision rules (Equation \ref{eq:compare_psup}) implies that superiority decisions are most conservative under the All rule and most liberal under the Any rule. In practice, this difference has two consequences. First, to properly control Type I error probabilities for these different decision rules, one needs to set a larger decision threshold $p_{cut}$ for the Any rule than for the All rule. Second, the All rule typically requires the largest sample size to obtain sufficient evidence for a superiority decision. Additionally, the correlation between treatment differences, $\rho_{\delta_{k},\delta_{l}}$, influences the posterior probability to conclude superiority. The correlation influences the overlap of the posterior distribution with the superiority region, as visualized in Figure \ref{fig:correlation}. The Single rule, in contrast, is not sensitive to the correlation, since it involves a single outcome only.
A negative correlation requires a smaller sample size than a positive correlation under the Any and Compensatory rules, and vice versa for the All rule.
%
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth,keepaspectratio]{Fig_correlation-eps-converted-to-1.jpg}\\
\caption{Influence of the correlation between two treatment differences on the proportion of overlap between the bivariate distribution of treatment differences $\bm{\delta}$ and the superiority regions.}
\label{fig:correlation}
\end{figure}
\subsection{Specification of weights of the Compensatory decision rule}\label{sec:sub_weights}
To utilize the flexibility of the Compensatory rule, researchers may wish to specify weights $\bm{w}$. The current subsection discusses two ways to choose these weights: Specification can be based on the impact of outcome variables or on the efficiency of the decision. Specification of impact weights is guided by substantive considerations to reflect the relative importance of outcomes. When $\bm{w}=(\frac{1}{K},\dots,\frac{1}{K})$, all outcomes are equally important and all success probabilities in $\bm{\theta}_{j}$ exert an identical influence on the weighted success probability. Any other specification of $\bm{w}$ that satisfies $\sum_{k=1}^{K}w_{k}=1$ implies unequal importance of outcomes. To make the implications of importance weight specification more concrete, let us reconsider the two potential side effects of brain cancer treatment in the CAR-B study: cognitive functioning and fatigue \cite{schimmel2018}. When setting $(w_{cognition},w_{fatigue})=(0.50,0.50)$, both outcomes would be considered equally important and a decrease of (say) $0.10$ in fatigue could be compensated by an increase in cognitive functioning of at least $0.10$. When $w_{cognition}>0.50$, cognitive functioning is more influential than fatigue; and vice versa when $w_{cognition}<0.50$.
If $w_{cognition}=0.75$ and $w_{fatigue}=0.25$ for example, the treatment difference of cognitive functioning has three times as much impact on the decision as the treatment difference of fatigue. Efficiency weights are specified with the aim of optimizing the required sample size. As the weights directly affect the amount of evidence for a treatment difference, the efficiency of the Compensatory decision rule can be optimized with values of $\bm{w}$ that are a priori expected to maximize the probability of falling in the superiority region. This strategy could be used when efficiency is of major concern, while researchers do not have a strong preference for the substantive priority of specific outcomes. The technical details required to find efficient weights are presented in Appendix \textit{\nameref{app:weights}}. \subsection{Implementation of the framework}\label{sec_sub:implementation} The procedure to arrive at a decision using the multivariate analysis procedure proposed in the previous sections is presented in Algorithm \ref{alg:fixed} for a design with fixed sample size $n_{j}$ of treatment $j$. We present the algorithm for designs with interim analyses in Algorithm \ref{alg:interim} in Appendix \textit{\nameref{app:implementation_adaptive}}. 
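As a sketch, the four decision rules reduce to simple functions of a matrix of posterior draws of $\bm{\delta}$ (one row per draw, one column per outcome); the draws and the threshold below are illustrative, not output of the trial analysis:

```python
import numpy as np

def p_single(delta, k):
    """Posterior probability that outcome k has a positive treatment difference."""
    return np.mean(delta[:, k] > 0)

def p_any(delta):
    return max(p_single(delta, k) for k in range(delta.shape[1]))

def p_all(delta):
    return min(p_single(delta, k) for k in range(delta.shape[1]))

def p_compensatory(delta, w):
    """Posterior probability that the weighted sum of differences is positive."""
    return np.mean(delta @ np.asarray(w) > 0)

# Illustrative draws: outcome 1 clearly positive, outcome 2 centered at zero.
rng = np.random.default_rng(7)
delta = np.column_stack([rng.normal(0.2, 0.05, 50_000),
                         rng.normal(0.0, 0.05, 50_000)])
p_cut = 0.95  # illustrative threshold; in practice chosen to control Type I error
superior_any = p_any(delta) > p_cut    # satisfied: outcome 1 alone suffices
superior_all = p_all(delta) > p_cut    # not satisfied: outcome 2 shows no difference
```

By construction these functions respect the ordering of Equation \ref{eq:compare_psup}: `p_all` never exceeds any `p_single`, which never exceeds `p_any`.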
\begin{algorithm}
\caption{Decision procedure for a fixed design}
\label{alg:fixed}
\begin{tabular}{p{\textwidth}}
\begin{enumerate}[label={\arabic*},leftmargin=7pt,topsep=0pt,parsep=0pt,labelindent=0pt,itemindent=0pt,listparindent=-5pt]
\item \underline{\textbf{Initialize}}
\begin{enumerate}[label={},leftmargin=7pt,topsep=0pt,parsep=0pt,labelindent=0pt,itemindent=0pt,listparindent=-5pt]
\item
\begin{enumerate}[label={\alph*},leftmargin=7pt,topsep=0pt,parsep=0pt,labelindent=0pt,itemindent=0pt,listparindent=-5pt]
\item Choose decision rule
\begin{enumerate}[label={},leftmargin=7pt,topsep=0pt,parsep=0pt,labelindent=0pt,itemindent=0pt,listparindent=-5pt]
\item \textbf{if} Compensatory \textbf{then} specify weights $\bm{w}$
\item \textbf{if} Single \textbf{then} specify $k$
\item \textbf{end if}
\end{enumerate}
\end{enumerate}
\item \textbf{for} each treatment $j \in \{E,C\}$ \textbf{do}
\begin{enumerate}[label={\alph*},leftmargin=7pt,topsep=0pt,parsep=0pt,labelindent=0pt,itemindent=0pt,listparindent=-5pt]
\setcounter{enumiii}{1}
\item Choose prior hyperparameters $\bm{\alpha}^{0}_{j}$
\end{enumerate}
\item \textbf{end for}
\begin{enumerate}[label={\alph*},leftmargin=7pt,topsep=0pt,parsep=0pt,labelindent=0pt,itemindent=0pt,listparindent=-5pt]
\setcounter{enumiii}{2}
\item Choose Type I error rate $\alpha$ and power $1-\beta$
\item Determine decision threshold $p_{cut}$
\begin{enumerate}[label={},leftmargin=7pt,topsep=0pt,parsep=0pt,labelindent=0pt,itemindent=0pt,listparindent=-5pt]
\item \textbf{if} Any rule \textbf{then} $1- \frac{1}{2} \alpha$
\item \textbf{else} $1-\alpha$
\item \textbf{end if}
\end{enumerate}
\item Determine sample size $n_{j}$ based on anticipated treatment differences $\bm{\delta}^{n}$
\end{enumerate}
\end{enumerate}
\item \underline{\textbf{Collect data and compute evidence}}
\begin{enumerate}[label={},leftmargin=7pt,topsep=0pt,parsep=0pt,labelindent=0pt,itemindent=0pt,listparindent=-5pt]
\item \textbf{for} each treatment $j \in \{E,C\}$ \textbf{do}
\begin{enumerate}[label={\alph*},leftmargin=7pt,topsep=0pt,parsep=0pt,labelindent=0pt,itemindent=0pt,listparindent=-5pt]
\item Collect $n_{j}$ joint responses $\bm{x}_{j,i}$
\item Compute joint response frequencies $\bm{s}_{j}$
\item Compute posterior parameters $\bm{\alpha}^{n}_{j} = \bm{s}_{j} + \bm{\alpha}^{0}_{j}$
\item Sample $L$ posterior draws, $\bm{\phi}^{l}_{j}$, $\bm{\phi}_{j}|\bm{\alpha}^{n}_{j} \sim \text{Dirichlet}(\bm{\phi}_{j}|\bm{\alpha}^{n}_{j})$
\item Sum draws $\bm{\phi}^{l}_{j}$ to $\bm{\theta}^{l}_{j}$
\end{enumerate}
\item \textbf{end for}
\begin{enumerate}[label={\alph*},leftmargin=7pt,topsep=0pt,parsep=0pt,labelindent=0pt,itemindent=0pt,listparindent=-5pt]
\setcounter{enumiii}{5}
\item Transform draws $\bm{\theta}^{l}_{j}$ to $\bm{\delta}^{l}$ via $\delta^{l}_{k} = \theta^{l}_{E,k} - \theta^{l}_{C,k}$
\item Compute posterior probability of treatment superiority $P(\bm{\delta} \in \mathcal{S}_{Sup}|\bm{s}_{E},\bm{s}_{C})$ as the proportion of posterior draws in superiority region $\mathcal{S}_{Sup}$
\end{enumerate}
\end{enumerate}
\item \underline{\textbf{Make final decision}}
\begin{enumerate}[label={},leftmargin=7pt,topsep=0pt,parsep=0pt,labelindent=0pt,itemindent=0pt,listparindent=-5pt]
\item \textbf{if} $P(\bm{\delta} \in \mathcal{S}_{Sup}|\bm{s}_{E},\bm{s}_{C}) > p_{cut}$ \textbf{then} conclude superiority
\item \textbf{else} conclude non-superiority
\item \textbf{end if}
\end{enumerate}
\end{enumerate}\\
\end{tabular}
\end{algorithm}
\section{Numerical evaluation}\label{sec:evaluation}
The current section evaluates the performance of the presented multivariate decision framework by means of simulation in the context of two outcomes ($K=2$).
We seek to demonstrate 1) how often the decision procedure results in an (in)correct superiority conclusion, to learn about decision error rates; 2) how many observations are required to conclude superiority with satisfactory error rates, to investigate the efficiency of different decision rules; and 3) how well the average estimated treatment difference corresponds to the true treatment difference, to examine bias. The current section is structured as follows. We first introduce the simulation conditions, the procedure to compute sample sizes for each of these conditions, and the procedure to generate and evaluate data. We then discuss the results of the simulation.
\paragraph{Conditions}
The performance of the framework is examined as a function of the following factors:
\begin{enumerate}
%
\item \textit{Data generating mechanisms:} We generated data for eight treatment difference combinations $\bm{\delta}^{T}$ and three correlations between outcomes $\rho_{\theta_{j,1},\theta_{j,2}}$. An overview of these $8 \times 3 = 24$ data generating mechanisms is given in Table \ref{tab:conditions}. In the remainder of this section, we refer to these data generating mechanisms with numbered combinations (e.g. $1.2$), where the first number reflects treatment difference $\bm{\delta}^{T}$ and the second number refers to correlation $\rho_{\theta_{j,1}, \theta_{j,2}}^{T}$.
\item \textit{Decision rules:} The generated data were evaluated with six different decision rules. We used the Single (for outcome $k=1$), Any, and All rules, as well as three different Compensatory rules: One with equal weights $\bm{w}=(0.50,0.50)$ and two with unequal weights $\bm{w}=(0.76,0.24)$ and $\bm{w}=(0.64,0.36)$. The weight combinations of the latter two Compensatory rules optimize efficiency for data generating mechanisms with uncorrelated (i.e. $8.2$) and negatively correlated (i.e. $8.1$) treatment differences respectively, following the procedure in Appendix \textit{\nameref{app:weights}}.
We refer to these three Compensatory rules as Compensatory-Equal (C-E), Compensatory-Unequal Uncorrelated (C-UU) and Compensatory-Unequal Correlated (C-UC) respectively. \end{enumerate} \begin{table}[htbp] \small\sf\centering \caption{Data generating mechanisms (DGM) used in numerical evaluation of the framework.} \label{tab:conditions} \begin{tabular}{lrrrrrrrrr} \toprule DGM & $\delta_{1}^{T}$ & $\delta_{2}^{T}$ & $\rho_{\theta_{j,1}, \theta_{j,2}}^{T}$ & $\theta_{E,1}^{T}$ & $\theta_{E,2}^{T}$ & $\phi_{E,11}^{T}$ & $\theta_{C,1}^{T}$ & $\theta_{C,2}^{T}$ & $\phi_{C,11}^{T}$ \\ \midrule 1.1 & -0.20 & -0.20 & -0.30 & 0.40 & 0.40 & 0.09 & 0.60 & 0.60 & 0.29 \\ 1.2 & & & 0.00 & & & 0.16 & & & 0.36 \\ 1.3 & & & 0.30 & & & 0.23 & & & 0.43 \\ \multicolumn{10}{c}{ }\\ 2.1 & 0.00 & 0.00 & -0.30 & 0.50 & 0.50 & 0.17 & 0.50 & 0.50 & 0.17 \\ 2.2 & & & 0.00 & & & 0.25 & & & 0.25 \\ 2.3 & & & 0.30 & & & 0.32 & & & 0.32 \\ \multicolumn{10}{c}{ }\\ 3.1 & 0.10 & 0.10 & -0.30 & 0.55 & 0.55 & 0.23 & 0.45 & 0.45 & 0.13 \\ 3.2 & & & 0.00 & & & 0.30 & & & 0.20 \\ 3.3 & & & 0.30 & & & 0.38 & & & 0.28 \\ \multicolumn{10}{c}{ }\\ 4.1 & 0.20 & 0.20 & -0.30 & 0.60 & 0.60 & 0.29 & 0.40 & 0.40 & 0.09 \\ 4.2 & & & 0.00 & & & 0.36 & & & 0.16 \\ 4.3 & & & 0.30 & & & 0.43 & & & 0.23 \\ \multicolumn{10}{c}{ }\\ 5.1 & 0.40 & 0.40 & -0.30 & 0.70 & 0.70 & 0.43 & 0.30 & 0.30 & 0.03 \\ 5.2 & & & 0.00 & & & 0.49 & & & 0.09 \\ 5.3 & & & 0.30 & & & 0.55 & & & 0.15 \\ \multicolumn{10}{c}{ }\\ 6.1 & 0.40 & 0.00 & -0.30 & 0.70 & 0.50 & 0.28 & 0.30 & 0.50 & 0.08 \\ 6.2 & & & 0.00 & & & 0.35 & & & 0.15 \\ 6.3 & & & 0.30 & & & 0.42 & & & 0.22 \\ \multicolumn{10}{c}{ }\\ 7.1 & 0.20 & -0.40 & -0.30 & 0.60 & 0.30 & 0.11 & 0.40 & 0.70 & 0.21 \\ 7.2 & & & 0.00 & & & 0.18 & & & 0.28 \\ 7.3 & & & 0.30 & & & 0.25 & & & 0.35 \\ \multicolumn{10}{c}{ }\\ 8.1 & 0.24 & 0.08 & -0.30 & 0.62 & 0.54 & 0.26 & 0.38 & 0.46 & 0.10 \\ 8.2 & & & 0.00 & & & 0.33 & & & 0.17 \\ 8.3 & & & 0.30 & & & 0.41 & & & 0.25 \\ \bottomrule 
\end{tabular}
\end{table}
\paragraph{Sample size computations}\label{sec:compute_n}
To properly control Type I error and power, each of the $24 \times 6$ conditions requires a specific sample size. These sample sizes $n_{j}$ were based on anticipated treatment differences $\bm{\delta}^{n}$ that corresponded to the true parameters of each data generating mechanism in Table \ref{tab:conditions} (i.e. $\bm{\delta}^{n} = \bm{\delta}^{T}$ and $\rho_{\theta_{j,1},\theta_{j,2}}^{n}=\rho_{\theta_{j,1},\theta_{j,2}}^{T}$). The procedures to compute sample sizes per treatment group for the different decision rules were as follows:
\begin{enumerate}
\item For the Single rule, we used a two-proportion $z$-test, where we plugged in the anticipated treatment difference on the first outcome variable (i.e. $\delta_{1}^{n}$).
\item Following Sozu et al. \cite{Sozu2010,Sozu2016}, we used multivariate normal approximations of correlated binary outcomes for the All and Any rules.
\item For the Compensatory rule, we used a continuous normal approximation with mean $\sum_{k=1}^{K} w_{k} \theta_{j,k}$ and variance $\sum_{k=1}^{K} w^{2}_{k}\sigma^{2}_{j,k}+2\mathop{\sum\sum}\limits_{k < l} w_{k} w_{l} \sigma_{j,kl}$. Here, $\sigma^{2}_{j,k} = \theta_{j,k} (1 - \theta_{j,k})$ and $\sigma_{j,kl} = \phi_{j,kl} - \theta_{j,k} \theta_{j,l}$.
\end{enumerate}
The computed sample sizes are presented in Table \ref{tab:CompareRules_nStop}. Conditions that should not result in superiority were evaluated at sample size $n_{j}=1,000$.
\paragraph{Data generation and evaluation}
For each data generating mechanism presented in Table \ref{tab:conditions}, we generated $5,000$ samples of size $2 \times n_{j}$. These data were combined with a proper uninformative prior distribution with hyperparameters $\bm{\alpha}^{0}_{j}=(0.01,\dots,0.01)$ to satisfy $\bm{\alpha}^{n}_{j} \approx \bm{s}_{j}$, as discussed in Section \nameref{sec:analysis}.
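For two outcomes, the joint cell probabilities in Table \ref{tab:conditions} follow from the marginal probabilities and their correlation by inverting Equation \ref{eq:rho_bibern}; a sketch, reproducing for instance $\phi_{E,11}$ of data generating mechanism $4.1$:

```python
import math

def joint_cells(theta_1, theta_2, rho):
    """Bivariate Bernoulli cells (phi_11, phi_10, phi_01, phi_00) from the
    marginal success probabilities and their correlation."""
    phi_11 = theta_1 * theta_2 + rho * math.sqrt(
        theta_1 * (1 - theta_1) * theta_2 * (1 - theta_2))
    return (phi_11,
            theta_1 - phi_11,                # success on outcome 1 only
            theta_2 - phi_11,                # success on outcome 2 only
            1 - theta_1 - theta_2 + phi_11)  # failure on both outcomes

# Data generating mechanism 4.1: theta_E = (0.60, 0.60), rho = -0.30.
cells_E = joint_cells(0.60, 0.60, -0.30)   # phi_11 = 0.288, tabulated as 0.29
```

Sampling a dataset for one treatment group then amounts to a single multinomial draw of size $n_{j}$ over these four cells.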
We aimed for Type I error rate $\alpha=.05$ and power $1-\beta=.80$, which corresponds to a decision threshold $p_{cut}$ of $1-\alpha=0.95$ (Single, Compensatory, All rules) and $1-\frac{1}{2}\alpha=0.975$ (Any rule) \cite{Sozu2012,Sozu2016,Marsman2017}. The generated datasets were evaluated using the procedure in steps $2$ and $3$ of Algorithm \ref{alg:fixed}. The proportion of samples that concluded superiority reflects the Type I error rate (when superiority is false) and the power (when superiority is true). We assessed the Type I error rate under the data generating mechanism with the least favorable population values of $\bm{\delta}^{T}$, analogous to the least favorable values under the null hypothesis in frequentist one-sided significance testing. These are values of $\bm{\delta}^{T}$ outside $\mathcal{S}_{Sup}$ that are most difficult to distinguish from values of $\bm{\delta}^{T}$ inside $\mathcal{S}_{Sup}$. Adequate Type I error rates for the least favorable treatment differences imply that the Type I error rates of all values of $\bm{\delta}^{T}$ outside $\mathcal{S}_{Sup}$ are properly controlled. The least favorable values of $\bm{\delta}^{T}$ were reflected by treatment difference $2$ for the Single, Any, and Compensatory rules, and treatment difference $6$ for the All rule. Bias was computed as the difference between the average estimated treatment difference at sample size $n_{j}$ and the true treatment difference $\bm{\delta}^{T}$.
\subsection{Results}
\begin{table}[htbp]
\centering
\caption{P(Conclude superiority) for different data generating mechanisms (DGM) and decision rules.
Bold-faced values indicate the conditions with least favorable values.} \label{tab:CompareRules_pSup} \begin{tabular}{lrrrrrr} \toprule DGM & \multicolumn{1}{l}{Single} & \multicolumn{1}{l}{Any} & \multicolumn{1}{l}{All} & \multicolumn{1}{l}{C-E} & \multicolumn{1}{l}{C-UU}& \multicolumn{1}{l}{C-UC}\\ \midrule 1.1 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 \\ 1.2 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 \\ 1.3 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 \\ \multicolumn{7}{c}{ }\\ 2.1 & \textbf{0.051} & \textbf{0.048} & 0.000 & 0.049 & 0.052 & 0.051 \\ 2.2 & 0.046 & 0.045 & 0.003 & 0.056 & 0.048 & 0.054 \\ 2.3 & 0.051 & 0.045 & 0.008 & \textbf{0.049} & \textbf{0.049} & \textbf{0.049} \\ \multicolumn{7}{c}{ }\\ 3.1 & 0.810 & 0.796 & 0.801 & 0.807 & 0.804 & 0.790 \\ 3.2 & 0.799 & 0.801 & 0.804 & 0.806 & 0.788 & 0.791 \\ 3.3 & 0.799 & 0.807 & 0.809 & 0.800 & 0.797 & 0.803 \\ \multicolumn{7}{c}{ }\\ 4.1 & 0.794 & 0.784 & 0.806 & 0.811 & 0.789 & 0.784 \\ 4.2 & 0.808 & 0.802 & 0.814 & 0.813 & 0.804 & 0.803 \\ 4.3 & 0.804 & 0.801 & 0.816 & 0.804 & 0.796 & 0.800 \\ \multicolumn{7}{c}{ }\\ 5.1 & 0.807 & 0.806 & 0.830 & 0.881 & 0.817 & 0.857 \\ 5.2 & 0.807 & 0.814 & 0.838 & 0.831 & 0.813 & 0.813 \\ 5.3 & 0.809 & 0.847 & 0.822 & 0.809 & 0.798 & 0.802 \\ \multicolumn{7}{c}{ }\\ 6.1 & 0.811 & 0.779 & 0.053 & 0.824 & 0.798 & 0.819 \\ 6.2 & 0.813 & 0.777 & 0.045 & 0.805 & 0.808 & 0.820 \\ 6.3 & 0.803 & 0.758 & \textbf{0.051} & 0.801 & 0.788 & 0.803 \\ \multicolumn{7}{c}{ }\\ 7.1 & 0.799 & 0.789 & 0.000 & 0.000 & 0.863 & 0.002 \\ 7.2 & 0.804 & 0.792 & 0.000 & 0.000 & 0.857 & 0.003 \\ 7.3 & 0.807 & 0.794 & 0.000 & 0.000 & 0.867 & 0.005 \\ \multicolumn{7}{c}{ }\\ 8.1 & 0.787 & 0.782 & 0.789 & 0.808 & 0.804 & 0.805 \\ 8.2 & 0.777 & 0.797 & 0.807 & 0.804 & 0.799 & 0.804 \\ 8.3 & 0.785 & 0.811 & 0.807 & 0.805 & 0.805 & 0.806 \\ \bottomrule \end{tabular} \end{table} \begin{table}[htbp] \small\sf\centering \caption{Average sample size to correctly conclude superiority for 
different data generating mechanisms (DGM) and decision rules. Bold-faced values indicate the lowest sample size per data generating mechanism. Conditions with a hyphen should not result in treatment superiority.} \label{tab:CompareRules_nStop} \begin{tabular}{lrrrrrr} \toprule DGM & \multicolumn{1}{l}{Single} & \multicolumn{1}{l}{Any} & \multicolumn{1}{l}{All} & \multicolumn{1}{l}{C-E} & \multicolumn{1}{l}{C-UU}& \multicolumn{1}{l}{C-UC}\\ \midrule 1.1 & - & - & - & - & - & - \\ 1.2 & - & - & - & - & - & - \\ 1.3 & - & - & - & - & - & - \\ \multicolumn{7}{l}{ }\\ 2.1 & - & - & - & - & - & - \\ 2.2 & - & - & - & - & - & - \\ 2.3 & - & - & - & - & - & - \\ \multicolumn{7}{l}{ }\\ 3.1 & 307 & 191 & 424 & \textbf{108} & 157 & 119 \\ 3.2 & 307 & 217 & 418 & \textbf{154} & 192 & 162 \\ 3.3 & 307 & 247 & 406 & \textbf{199} & 226 & 206 \\ \multicolumn{7}{l}{ }\\ 4.1 & 75 & 47 & 105 & \textbf{26} & 39 & 29 \\ 4.2 & 75 & 53 & 103 & \textbf{38} & 47 & 40 \\ 4.3 & 75 & 60 & 101 & \textbf{49} & 55 & 50 \\ \multicolumn{7}{l}{ }\\ 5.1 & 17 & 11 & 25 & \textbf{6} & 9 & 7 \\ 5.2 & 17 & 12 & 25 & \textbf{9} & 11 & \textbf{9} \\ 5.3 & 17 & 14 & 24 & \textbf{11} & 12 & \textbf{11} \\ \multicolumn{7}{l}{ }\\ 6.1 & 17 & 21 & - & 25 & \textbf{15} & 17 \\ 6.2 & \textbf{17} & 21 & - & 36 & 19 & 24 \\ 6.3 & \textbf{17} & 21 & - & 47 & 22 & 30 \\ \multicolumn{7}{l}{ }\\ 7.1 & \textbf{75} & 95 & - & - & 608 & - \\ 7.2 & \textbf{75} & 95 & - & - & 733 & - \\ 7.3 & \textbf{75} & 95 & - & - & 858 & - \\ \multicolumn{7}{l}{ }\\ 8.1 & 51 & 56 & 482 & 41 & 38 & \textbf{36} \\ 8.2 & 51 & 60 & 482 & 59 & \textbf{46} & 49 \\ 8.3 & \textbf{51} & 63 & 482 & 76 & 55 & 62 \\ \bottomrule \end{tabular} \end{table} The proportion of samples that concluded superiority and the required sample size are presented in Tables \ref{tab:CompareRules_pSup} and \ref{tab:CompareRules_nStop} respectively. 
Type I error rates were properly controlled around $\alpha=.05$ for each decision rule under its least favorable data generating mechanism. The power was around $.80$ in all scenarios with true superiority. Moreover, average treatment differences were estimated without bias (absolute bias smaller than $0.01$ in all conditions). Given these satisfactory error rates, a comparison of sample sizes provides insight into the efficiency of the approach. We remark here that a comparison of sample sizes is only relevant when the decision rules under consideration have a meaningful definition of superiority. Further, in this discussion of results we primarily focus on the newly introduced Compensatory rule in comparison to the other decision rules. The results demonstrate that the Compensatory rule consistently requires fewer observations than the All rule, and often fewer than the Any and Single rules, in particular when treatment differences are equal (i.e. treatment differences $3-5$). Similarly, the Any rule consistently requires fewer observations than the All rule and could be considered an attractive option in terms of sample sizes. Note however that the more lenient Any rule may not result in a meaningful decision for all trials, since the rule would also conclude superiority when the treatment has a small positive effect on one outcome and a large negative effect on another (i.e. treatment difference $7$); a scenario that may not be clinically relevant. The influence of the relation between outcomes is also apparent: Negative correlations require fewer observations than positive correlations. The variation due to the correlation is considerable: The average sample size almost doubles in scenarios with equal treatment differences (e.g. data generating mechanisms $3.1$ vs. $3.3$ and $4.1$ vs. $4.3$).
Comparison of the three different Compensatory rules further highlights the influence of the weights $\bm{w}$ and illustrates that a Compensatory rule is most efficient when the weights have been optimized with respect to the treatment differences and the correlation between them. The Compensatory rule with equal weights (C-E) is most efficient when treatment differences on both outcomes are equally large (treatment differences $3$--$5$), while the Compensatory rule with unequal weights for uncorrelated outcomes (C-UU) is most efficient under data generating mechanism $8.2$. The Compensatory rule with unequal weights, optimized for negatively correlated outcomes (C-UC), is most efficient in data generating mechanism $8.1$. The Compensatory rule is less efficient than the Single rule in the scenario with an effect on one outcome only (treatment difference $6$). Effectively, in this situation the Single rule is the Compensatory rule with the optimal weights for this specific scenario, $\bm{w}=(1,0)$. Utilizing the flexibility of the Compensatory rule to tailor weights to anticipated treatment differences and their correlations thus pays off in terms of efficiency. Note that in practice it may be difficult to accurately estimate treatment differences and correlations in advance. This uncertainty may result in inaccurate sample size estimates, as demonstrated in Appendix \textit{\nameref{app:compare_designs}}. The simulations in this appendix also show that the approach can be implemented in designs with interim analyses as well, which is particularly useful under uncertainty about anticipated treatment differences. Specifically, we demonstrate that 1) both Type I and Type II error rates increase, while efficiency decreases, in a fixed design when the anticipated treatment difference does not correspond to the true treatment difference; and 2) designs with interim analyses can compensate for this uncertainty in terms of error rates and efficiency, albeit at the expense of upward bias.
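As an illustration of how the decision rules operate, the sketch below approximates posterior superiority probabilities from Monte Carlo draws. It is a simplified version with hypothetical counts, assuming independent Beta(1, 1) priors per outcome; the multivariate Bernoulli model used in the paper additionally models the correlation between the outcomes, which this simplification ignores.

```python
import numpy as np

rng = np.random.default_rng(1)

def superiority_probs(s_t, n_t, s_c, n_c, w, n_draw=100_000):
    """Posterior probabilities of treatment superiority under three rules.

    s_t, s_c: per-outcome success counts for treatment and control;
    n_t, n_c: group sizes; w: Compensatory weights. Independent Beta(1, 1)
    priors per outcome are a simplifying assumption made for this sketch only.
    """
    s_t, s_c = np.asarray(s_t), np.asarray(s_c)
    theta_t = rng.beta(1 + s_t, 1 + n_t - s_t, size=(n_draw, 2))
    theta_c = rng.beta(1 + s_c, 1 + n_c - s_c, size=(n_draw, 2))
    delta = theta_t - theta_c                        # posterior treatment differences
    return {
        "Any": np.mean((delta > 0).any(axis=1)),             # superior on any outcome
        "All": np.mean((delta > 0).all(axis=1)),             # superior on all outcomes
        "Compensatory": np.mean(delta @ np.asarray(w) > 0),  # weighted sum positive
    }

# Hypothetical trial: treatment better on both outcomes, equal weights
probs = superiority_probs(s_t=[60, 55], n_t=100, s_c=[50, 45], n_c=100, w=[0.5, 0.5])
```

A rule would conclude superiority once its posterior probability exceeds a pre-specified decision threshold; by construction the All probability never exceeds the Any probability, which mirrors the sample size ordering of the two rules in the tables.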
Further, Appendix \textit{\nameref{app:compare_priors}} shows how prior information influences the properties of decision-making. Informative priors support efficient decision-making when the prior treatment difference corresponds to the treatment difference in the data. In contrast, evidence is influenced by dissimilarity between prior hyperparameters and data, and may either increase or decrease 1) the required sample size; and 2) the average posterior treatment effect, depending on the nature of the non-correspondence. \section{Discussion}\label{sec:discussion} The current paper presented a Bayesian framework to efficiently combine multiple binary outcomes into a clinically relevant superiority decision. We highlight two characteristics of the approach. First, the multivariate Bernoulli model has been shown to capture relations between outcomes properly and to support multivariate decision-making. The influence of the correlation between outcomes on the amount of evidence in favor of a specific treatment highlights the urgency to carefully consider these relations in trial design and analysis in practice. Second, multivariate analysis facilitates comprehensive decision rules such as the Compensatory rule. More specific criteria for superiority can be defined to ensure clinical relevance, while relaxing conditions that are not strictly needed for clinical relevance lowers the sample size required for error control, a fact that researchers may take advantage of in practice, where sample size limitations are common \cite{VandeSchoot2020}. Several other modeling procedures have been proposed for the multivariate analysis of multiple binary outcomes. The majority of these alternatives assume a (latent) normally distributed continuous variable. When these models rely on large-sample approximations for decision-making (such as the methods presented by Whitehead et al. \cite{Whitehead2010}, Sozu et al. \cite{Sozu2010,Sozu2016}, and Su et al. \cite{Su2012}; see Murray et al. \cite{Murray2016} for an exception), their applicability is limited, since z-tests may be inaccurate in small samples. A second class of alternatives uses copula models, a flexible approach to model dependencies between multiple univariate marginal distributions. The use of copula structures for discrete data can be challenging, however \cite{Panagiotelis2012}. Future research might provide insight into the applicability of copula models for multivariate decision-making in clinical trials. Two additional remarks concerning the number of outcomes should be made. First, the modeling procedure becomes more complex when the number of outcomes increases, since the number of response combinations (cells) increases exponentially. Second, the proposed Compensatory rule has a linear compensatory mechanism. With two outcomes, the outcomes compensate each other directly, and the size of an admissible negative effect is limited by the size of the positive effect. A decision based on more than two outcomes might have the potentially undesirable consequence of compensating a single large negative effect by two or more positive effects. Researchers are encouraged to carefully think about a suitable superiority definition and might consider additional restrictions to the Compensatory rule, such as a maximum size of individual negative effects. \newpage \bibliographystyle{SageV}
\section{Introduction} Eclipsing binaries have historically contributed a wealth of information to stellar astrophysics. They have been used to determine distances, compute fundamental stellar parameters, and test stellar evolution models. The \emph{Kepler} mission \citep{Borucki,Batalha}, with its unprecedentedly precise photometry of $\sim 160,000$ stars, has allowed us to create a catalog of \numberCatalogEBs{} eclipsing binaries (hereafter EBs) in the \emph{Kepler} field \citep{KepEB4,KepEB2,KepEB1}. This catalog is rich in interesting objects for individual study and also presents a large sample of EBs for statistical analysis. In studying this sample, we can attempt to determine the occurrence rate of EBs, circumbinary planets, and multiple star systems. Some theories for short-period binary star formation call for the presence of a third body. In these scenarios, the close binary was not created \emph{in situ}, but rather at a larger separation as a part of a multiple star system \citep{Bonnell}. Tidal friction and Kozai cycles between the inner binary and a companion can cause the inner orbit to shrink over time \citep{FabTrem} and result in a hierarchical multiple system \citep{Reipurth}. The spectroscopy and imaging studies by \citet{Tok97} and \citet{Tok06} found that 40\% of binaries with periods less than 10 days, and 96\% of those with periods less than 3 days, have a wide tertiary companion. The general interpretation of these findings is that the tightest binaries likely became hardened over time through interactions with the tertiary companion, and the system evolves toward an increasingly hierarchical configuration.
Indeed, the SLoWPoKES study of ultra-wide binaries in the Sloan Digital Sky Survey \citep{slowpokes} found that the widest visual pairs, with physical separations of 0.01--1 pc, in fact contain a tight binary $\sim$80\% of the time \citep{law}, again confirming the general picture that tight binaries are nearly always accompanied by wide tertiaries and that the tightest binaries are accompanied by the widest tertiaries. Discovery and study of these multiple systems give new insight into the physics of EBs. Statistically, we can compare observed rates of multiple systems to theoretical models for short-period binary formation. We can also model each system individually to study the disruptive dynamical effects seen in some cases. \citet{KepEB4} determines ephemerides for the entire \emph{Kepler} Eclipsing Binary Catalog. If there are no external effects, a linear ephemeris will correctly predict all eclipse times of an EB. By measuring the exact time of each eclipse for a particular binary and comparing it to the time calculated from the linear ephemeris, we can create an ETV curve (`eclipse timing variations'; sometimes also referred to as an O-C diagram). Any trend in these timing residuals may be the result of one or more physical effects occurring in the system. Using transit timings and eclipse timings to find exoplanets is a well-known method \citep{Schwarz}. \citet{Fab}, \citet{Ford}, and \citet{Steffen} used transit timings to detect and study multiple planetary systems, while \emph{Kepler} 16 \citep{Kep16}, 34, and 35 \citep{Kep34} were validated, in part, through their eclipse timing variations. The processes that can induce ETVs, which are the focus of this paper, include the following: \begin{itemize} \item Light Time Travel Effect (LTTE): a third body perturbing the center of mass of the binary system creates a light-time delay along the line of sight, which can cause eclipses to appear earlier or later than expected.
\item Non-hierarchical third-body: the presence of a third-body actually changes the period of the binary over time. \item Mass transfer: mass transfer between the components in the binary changes the period. \item Gravitational Quadrupole Coupling (Applegate effect): spin-orbit transfer of angular momentum in a close binary due to one of the stars being active produces period changes up to $10^{-5}$ times the binary period \citep{Applegate}. \item Apsidal Motion: the rotation of the line of apsides causes a change in the time between primary and secondary eclipses even though the period remains unchanged (requires an eccentric orbit). \item Spurious Signals: due to spots and other effects that distort the EB light curve. \end{itemize} \citet{Rapp} previously published a list of 39 candidate third-body \emph{Kepler} systems using eclipse times and \citet{Gies} published a preliminary study on timing variations in 41 \emph{Kepler} Eclipsing Binaries. \citet{KepEBetv1} will provide eclipse times for detached binaries, and this paper provides eclipse times for close binaries. Together, these two papers will comprehensively cover all \numberCatalogEBs{} binaries in the catalog. \emph{Kepler}'s essentially uninterrupted observing over a long time baseline presents the opportunity to precisely time the eclipses and detect any underlying signals due to third bodies, apsidal motion, dynamical interaction, etc. Due to the large number of EBs in the entire catalog, it is necessary to create an automated method for timing eclipses across the catalog. Short period and overcontact systems present a particular challenge due to spot activity and data convolution, due to a relatively long integration time. In this paper we discuss our method for automating eclipse timings for close \emph{Kepler} EBs in Section 2. Eclipse timings are reported for \numberShortEBs{} binaries in the catalog in Section 3. 
In Section 4, light time travel effect models for the \numberTM{} systems that are flagged as potential third-body candidates are also provided. We discuss our findings in Section 5 in the context of binary formation and evolution theory, and summarize our conclusions, as well as information for accessing the products of our comprehensive eclipse timing measurements, in Section 6. \section{Data and Methods} \subsection{Sample of Eclipsing Binaries} \citet{KepEB4} will update the \emph{Kepler} Eclipsing Binary Catalog, raising the count of EBs from 2165 to \numberCatalogEBs{}. The database is kept up-to-date with future data and revisions at \texttt{http://keplerEBs.villanova.edu}. As changes and updates are made to the catalog, ETVs are recomputed and updated automatically and made available in real-time through the online catalog. \citet{KepEBetv1} will provide eclipse times for binaries with flat out-of-eclipse regions, covering most of the detached binaries with periods greater than 1 day. There we locally detrend each eclipse and use a piecewise Hermite spline template to determine the time of mid-eclipse. This technique performs well on the set of detached systems but is not optimal for overcontact systems, systems with strong reflection effects or tidal distortion, or short-period binaries with only a few points in each eclipse due to \emph{Kepler}'s 30 minute cadence. For this reason, we divide the catalog based on the morphology parameter as described in \citet{KepEB3}. This parameter is a value between 0 and 1 that describes the ``detachedness'' of an eclipsing binary, with 0 being completely detached and 1 being overcontact or ellipsoidal. \citet{KepEBetv1} report timings for binaries with a morphology parameter less than 0.5. Our method addresses the remainder of the \emph{Kepler} Eclipsing Binary Catalog and determines eclipse times for it.
The distribution of the catalog between these two methods is shown in Fig.~\ref{etvcat}, with \numberShortEBs{} binaries in the sample for this paper. \begin{figure}[h] \plotone{fig1.png} \caption{Period vs morphology parameter for the binaries in the \emph{Kepler} Eclipsing Binary catalog. Objects included in this paper have a morphology parameter greater than 0.5.} \label{etvcat} \end{figure} \subsection{Light Curve Preparation} We detrend and phase ``SAP'' (simple aperture photometry) \emph{Kepler} data through Q16 as described in \citet{KepEB1}. The upper envelope of the raw data is fit with a chain of Legendre polynomials using a sigma-clipping technique and manually setting the breaks between sections and orders of the polynomials. The data are then divided by this fit, resulting in a flat baseline. These detrended data are then phased on the linear ephemeris as reported in \citet{KepEB4}, and used as input into the ETV code, described below. \subsection{Measuring Eclipse Times} We fit a polynomial chain to the phased light curve data as described in \citet{Prsa08}. This analytic function is a chain of four polynomials that is continuous, but not necessarily differentiable, at knots which were optimized to find the best overall solution. This function does not represent a physical model, but rather analytically describes the mean phased shape of the binary light curve, an example of which can be seen in Fig.~\ref{ecl_bounds}. \begin{figure}[h] \plotone{fig2.png} \caption{Typical polyfit and eclipse bounds for a semi-detached binary. The polyfit knots are indicated with the squares and solid vertical lines, with the polyfit drawn in white over the data. Data considered as part of the primary eclipse are shown in black while those belonging to the secondary eclipse are shown in gray. 
The eclipse bounds are set at the arithmetic bisector of the adjacent knots and are shown with dashed vertical lines.} \label{ecl_bounds} \end{figure} We then take this analytical representation and, using a combination of heuristic and bisection approaches, determine the horizontal shift required to minimize the $\chi^2$ (cost function) for each individual eclipse, as shown in Fig.~\ref{bisection}. In order to minimize the effect due to spots or imperfect detrending, a vertical shift is first determined using linear least squares for each eclipse and is applied before computing cost functions for horizontal shifts. The cost function is initially sampled at 20 evenly-spaced phase shifts between -0.05 and 0.05 phase. The minimum of this sampling is then used as the center of the bisection algorithm to quickly find the local minimum of the cost function. The resulting $\chi^2$ values are unusually large because the errors on the \emph{Kepler} data are only formal and do not include any absolute calibration effect \citep{Jenkins}. Therefore, for each eclipse, we normalize the entire cost function such that the minimum cost is set to $N-p-1$, where $N$ is the number of data points used for that eclipse and $p$ is the number of degrees of freedom, which we take to be 1. This reduced cost function is then used to compute 1-sigma errors on each timing, corresponding to the $\Delta \chi^2 = 1$ contour. \begin{figure}[h] \plotone{fig3.png} \caption{Reduced cost function ($\chi^2$) values, shown as x's, are computed heuristically (top-left) for 20 evenly-spaced phase-shifts within 0.05 phase, shown by the dotted lines in all panels. The best fit of these is shown with the dashed line in the top-left panel. A bisection approach (top-right) is then applied in the area surrounding this estimate, as shown by the dot-dashed lines. This results in a final minimum at the phase shift denoted by the solid line.
The bottom plot shows the data for a single eclipse along with the polyfits for the respective shifts noted above.} \label{bisection} \end{figure} For the shortest-period binaries in the catalog, however, the long-cadence data result in significant phase-smearing and leave our method with only a minimal number of points per cycle from which to determine a fit. If a third body were present, its signal would likely be buried in the noise induced by these factors. For this reason, we include as many data points as possible in each eclipse timing. Each data point is considered to belong to an eclipse if its phase, as determined by the initial linear ephemeris, is within the eclipse bounds. We initially set these bounds to be the mid-point between polyfit knots in the out-of-eclipse region, as shown in Fig.~\ref{ecl_bounds}. To improve results for particular objects being studied individually, changing these bounds to use the knots (instead of the mid-points) can sometimes lower the systematics in the signal. For any given eclipse, if the region between these bounds is not fully sampled or does not have at least 3 data points, then timings are not computed for that eclipse. Eclipse timings are then compared to the values expected from the linear ephemeris as reported by \citet{KepEB4} to compute the residuals and test for the presence of an ETV signal. \subsection{Dealing with Sources of Spurious ETV Signals} Due to a typically small number of points per eclipse, our timings are sensitive to various imperfections in the data processing, which affect the measured eclipse times and can introduce noise and/or fictitious signals into the ETV signal. Instrumental or astrophysical pulsations on top of the binary signal can change the shape of a single eclipse, which can mimic a timing variation. The detrending process attempts to remove these additional signals, but it is not perfect, struggles to remove signals that occur during eclipse, and can also introduce spurious signals.
Also, all polyfits in the current version of the catalog use chains of four second-order polynomials, which do not always yield the ideal fit and can leave slight phase-dependent residuals. For the purpose of pipeline processing, we limit ourselves to second-order polynomials, but note that, for special cases and in-depth studies, higher precision timings can be obtained by increasing the order of the fit. Until all polyfits are updated to a higher order in the future, we will use the second-order fits and manually recompute ETVs with a higher order for any individual signals that warrant further study. In cases where a binary has a period that is near-commensurate with \emph{Kepler}'s 29.44 minute cadence, the period and cadence may beat, which results in a separate spurious signal. Any combination of these effects can cause issues in determining true and precise eclipse times when dealing with only a few data points. Fig.~\ref{anti-phase} demonstrates how the cost function for the phase shift is affected by the vertical discrepancy in the out-of-eclipse region, creating a fictitious signal in which the ETVs of the primary and secondary eclipse are in anti-phase. The left of Fig.~\ref{anti-phase} plots four different eclipses, showing that over time the data in the out-of-eclipse region can be higher on either the right or the left. When measuring timings for the primary and secondary separately, the cost function will artificially be minimized by ``pulling'' the analytic function towards the region with lower flux. Since this affects the primary and secondary in opposite directions, we can mitigate this effect by also running the fit over the entire phase. This effectively averages out the anti-phase effect in the primary and secondary eclipses, revealing the real ETV signal of the entire system.
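The cancellation can be illustrated with synthetic primary and secondary ETV series that share a common signal plus an equal and opposite spot-induced term; a simple average of the two series (a stand-in for the full-phase fit, which weighs all points jointly) recovers the common signal. All amplitudes and periods below are hypothetical.

```python
import numpy as np

t = np.arange(0.0, 1000.0, 10.0)                  # eclipse times in days (illustrative)
real = 0.002 * np.sin(2 * np.pi * t / 600.0)      # underlying LTTE-like ETV signal
spot = 0.001 * np.sin(2 * np.pi * t / 90.0)       # spot-induced anti-phase term

etv_primary = real + spot                         # primary timings pulled one way
etv_secondary = real - spot                       # secondary timings pulled the other way

# Combining both eclipses cancels the anti-phase term and leaves the real signal:
etv_full = 0.5 * (etv_primary + etv_secondary)
```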
Fig.~\ref{anti-phase-beforeafter} shows two cases where the anti-phase signal was removed, clearly revealing whether any underlying ETV signal is present. Signals that show such a ``random walk'' nature are discussed by \citet{Tran}. Unfortunately, since there is no rigorous way to discriminate between true and fictitious anti-phase signals, this process would also hide a physical ETV signal such as apsidal motion. Since we are dealing with short-period binaries, most of these orbits will be nearly circular, so we do not expect to be able to detect any systems with apsidal motion anyway. \begin{figure}[h] \plotone{fig4.png} \caption{Determining eclipse timings using both eclipses will cancel the anti-phase effect and reveal any underlying signal. The plots on the left show a phased light curve with primary eclipse data in black and secondary in gray, with the analytic `polyfit' in white. These four plots highlight the data during individual cycles (shown in white on the left) at four different times noted in the ETV plots with the dashed line, showing the presence of spots. The plots on the right show the ETV as measured for primary and secondary eclipses separately (middle) and over the full phase (right). } \label{anti-phase} \end{figure} \begin{figure}[h] \plotone{fig5.png} \caption{ETVs for KIC 6880727 (left) and 4451148 (right) determined for primary and secondary eclipses separately (top) and together (bottom). KIC 6880727 (left) shows an example with no underlying signal under the antiphase ``noise'', while KIC 4451148 (right) shows a possible underlying third-body signal. Typical errors for ETV measurements are shown to the left of the data.} \label{anti-phase-beforeafter} \end{figure} \subsection{Short-Cadence Data} If short-cadence data are available for a binary, they are usually limited to a short time baseline. Since we are generally looking for long-period trends in the ETV signal, we measure timings from the full long-cadence dataset.
We also run timings on any available short-cadence data in case some signal can be detected. A few short-period binaries in the catalog that appear to be overcontact or ellipsoidal variables could actually be detached systems whose light curves are convolved by \emph{Kepler}'s 30 minute long-cadence exposure. Since these systems are less prone to timing noise due to spots or mass transfer than true overcontact systems, the phase smearing and limited number of data points per eclipse are the main issues preventing us from recovering any ETV signal. For this reason, short-cadence data were requested and obtained via Director's Discretionary Time for 31 short-period detached EBs in the catalog without previous short-cadence observations, in the hope of detecting third-body ETV signatures that were not visible in the long-cadence data. Unfortunately, none seem to exhibit any significant ETVs. Eclipse times are computed for all available short-cadence data, but due to the longer baseline, long-cadence timings are used for the detection and fitting of potential third-body orbits. \section{Results} \subsection{Precise Eclipse Times} Eclipse timing variations on the individual eclipses and the entire phase have been computed for all objects with a morphology parameter greater (less detached) than 0.5 in the latest \emph{Kepler} Eclipsing Binary Catalog. Our method requires at least 3 data points per timing, which allows us to obtain primary and secondary eclipse timings individually for long-cadence data of binaries with periods as short as 3 hours, and full-phase timings for binaries with periods as short as 1.5 hours. We are able to determine timings for any binary with short-cadence data, but since short-cadence data availability is sparse and generally does not span the whole length of the mission, short-cadence ETVs are determined separately.
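Each reported timing enters the catalog as an O-C residual against the linear ephemeris. A minimal sketch of that computation follows; the period, epoch, and injected sinusoidal signal (the kind of modulation a light-time effect would imprint) are illustrative values, not catalog numbers.

```python
import numpy as np

def etv_residuals(eclipse_times, bjd0, period):
    """O-C residuals of measured eclipse times against a linear ephemeris.

    The linear ephemeris predicts eclipses at bjd0 + E * period for integer
    cycle numbers E; any remaining trend in O-C may indicate a physical
    effect (LTTE, mass transfer, ...).
    """
    t = np.asarray(eclipse_times)
    cycle = np.round((t - bjd0) / period)   # nearest integer cycle number
    calculated = bjd0 + cycle * period      # time predicted by the ephemeris
    return t - calculated                   # observed minus calculated

# Hypothetical binary with a small sinusoidal timing variation:
p, t0 = 0.75, 100.0                         # period (days), reference epoch
cycles = np.arange(0, 400, 5)
observed = t0 + cycles * p + 0.001 * np.sin(2 * np.pi * cycles * p / 200.0)
oc = etv_residuals(observed, t0, p)         # recovers the injected sinusoid
```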
Plots and data for detrended light curves and eclipse times for the entire sample are available as a part of the \emph{Kepler} Eclipsing Binary online catalog at \texttt{http://keplerEBs.villanova.edu}. An excerpt of the eclipse times is shown in Table \ref{tableetvs}. With the third version of the catalog being released shortly, the database will be updated as new data become available and ephemerides are further refined. This ETV code is incorporated into the pipeline: as objects are updated or added, their ETVs are recomputed and updated in real-time. As ETVs are computed, the ephemerides in the catalog are refined by fitting a line through the entire-phase timings and adjusting the values as necessary to obtain a ``flat'' trend. For any ETV with a long-term sinusoidal trend, this could introduce systematics depending on the part of the sine curve observed and used to fit the linear trend. In particular, for very long ETV signals (of the order of 1000 days and more), the measured orbital period of the binary will be anomalous because the variation cannot be accounted for with the available data. \subsection{Causes of an ETV Signal} All ETV measurements were examined by eye for the presence of any interesting signal, discarding any that seem to be spurious based on their individual primary and secondary eclipse timings. We do not expect to see evidence of apsidal motion in many of our targets due to their short periods and, consequently, circular orbits. We also do not expect to be able to detect any signals due to gravitational quadrupole coupling. This mechanism is able to create period changes with amplitudes on the order of $10^{-5}$ times the period of the binary, meaning a maximum of 3.5 seconds for a binary with a period of 4 days, falling well within our noise limits. ETV signals that are sinusoidal in nature or show any sign of curvature are flagged and fit for both a third-body signal and a parabolic mass transfer model.
For the cases where we only see a sign of curvature and not a full cycle, we could either be seeing a section of a long-period third-body signal or mass transfer. To determine whether we consider the signal as a candidate third body or mass transfer, we compare the two models using the Bayesian Information Criterion \citep{BIC}: \begin{equation} BIC = n \ln \left( \frac{1}{n} \sum \left( x_i - \hat{x_i} \right)^2\right) + k \ln n \end{equation} where $x_i$ are the data, $\hat{x_i}$ the model, $n$ the number of data points, and $k$ is the number of parameters used in the fit. In the case of the eccentric LTTE model $k=6$, for the circular LTTE model $k=4$, and for the mass transfer model $k=3$. The model with the lower BIC value then determines whether we consider the signal as a candidate third body or mass transfer. \subsection{ETVs with Parabolic Signals} \numberTT{} ETV signals were better fit by a parabola than an LTTE orbit, and are possibly caused by mass transfer or the Applegate effect instead of the presence of a third body \citep{Hilditch}. A selection of these signals is shown in Fig.~\ref{etv_MT}, and all KICs are listed in Table \ref{tableMT}; all are available on the online catalog. \subsection{ETVs with Third-Body Signals} \numberTM{} binaries (\rateTriple{} of the sample) were flagged as candidate third bodies. The results of the model fits are reported in Table \ref{tablerapp}, with a selection plotted in Fig.~\ref{etv_LTTE}. Based on the fitted period, we then divided these third-body candidates into three groups. The first group contains third-body signals with periods less than 700 days, such that there are at least two full cycles of the signal present in data through Q16. These systems have the highest confidence and are most likely due to the presence of a tertiary component. The second group contains signals with periods between 700 and 1400 days, such that there is at least one full cycle present.
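The BIC comparison above can be sketched directly. The synthetic quadratic ETV curve and the flat ``LTTE stand-in'' below are hypothetical, used only to show that the model with the lower BIC (here the $k=3$ parabola) sets the classification; a real comparison would use an actual LTTE fit with $k=4$ or $k=6$.

```python
import numpy as np

def bic(residuals, k):
    """BIC as in the equation above: n * ln(mean squared residual) + k * ln(n)."""
    r = np.asarray(residuals, dtype=float)
    n = r.size
    return n * np.log(np.mean(r ** 2)) + k * np.log(n)

# Hypothetical ETV curve dominated by a quadratic (mass-transfer-like) trend:
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1400.0, 200)                      # days
etv = 1e-4 * (t / 1400.0) ** 2 + rng.normal(0.0, 1e-5, t.size)

parabola = np.polyval(np.polyfit(t, etv, 2), t)        # mass transfer model, k = 3
flat = np.zeros_like(t)                                # stand-in "LTTE" model, k = 6

label = "mass transfer" if bic(etv - parabola, 3) < bic(etv - flat, 6) else "third body"
```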
The last group contains signals with periods longer than 1400 days. Often these detections merely show some sign of curvature in the ETV signal, and so a full sinusoidal signal cannot yet be confirmed. For this reason the fits generally have large errors, and many of these may not be true triple systems, particularly the signals in the closest binaries, which are more likely to be due to mass transfer. \citet{Gies} presented an initial study of eclipse timings in 41 \emph{Kepler} binaries. Of their entire sample of 41 binaries, 40 are still in the \emph{Kepler} EB Catalog (KIC 4678873 has since been removed from the catalog as a false positive), 32 fall under the scope of this paper (have a morphology parameter greater than 0.5), and 9 appear in our list of third-body signals. They identified 14 out of their original 41 as candidate third-body systems, with others being identified as likely caused by starspots, pulsations, and apsidal motion. Of their 14 candidate third-body systems, all 14 are still in the \emph{Kepler} EB Catalog, 12 fall under the scope of this paper, and 9 appear in our list of third-body signals. Those that appear in both lists are noted in Table \ref{tableresults}. Binaries that they list as candidate third-body systems, but we do not, either show significant noise or would require very long-period LTTE orbits. \citet{Rapp} recently reported 39 triple-star \emph{Kepler} binaries based on ETV signatures. Of these 39, 21 fall under the scope of this paper, 19 of which also appear in our list of third-body signals; the other two are determined to be unlikely to be caused by a third body due to their very short periods and notable spot activity. The detections that overlap both of these studies are also noted in Table \ref{tablerapp}. In most cases, the model fits from both studies are consistent.
In general, due to our treatment of the full \emph{Kepler} dataset now available, tertiary parameters should be more precise and longer-period third-body signals more apparent. Any disagreement is likely due to a slightly different inner-binary ephemeris or the addition of the physical delay in their models (discussed further below). \section{Analysis of Third Body Signals} \subsection{Light Time Travel Effect Analysis} \citet{Bork} presents analytic functions for the light time travel effect (LTTE) component of the ETV residual signal. Using the same form as \citet{Rapp}, the timings can be expressed by \begin{equation} ETV_\text{LTTE} = A_{LTTE} \left[ \left( 1 - e_3^2 \right)^{1/2} \sin E_3(t) \cos \omega_3 + \left( \cos E_3(t) -e_3 \right) \sin \omega_3 \right] \end{equation} where \begin{align} E_3(t) &= M_3(t) + e_3 \sin E_3(t) \\ M_3(t) &= \left( t - t_0 \right) \frac{2 \pi}{P_3} \\ A_{LTTE} &= \frac{G^{1/3}}{c (2 \pi )^{2/3}} \left[ \frac{m_3}{m_{123}^{2/3}} \sin i_3 \right] P_3^{2/3} \end{align} and $t_0$ is a time offset, $m_3$ is the mass of the third body, $m_{123}$ is the mass of the entire system, and $P_3$, $i_3$, $e_3$, $\omega_3$, $E_3(t)$, and $M_3(t)$ are the period, inclination, eccentricity, argument of periastron, eccentric anomaly, and mean anomaly of the third-body orbit, respectively. This expression was then used to fit all ETVs flagged as potential third-body signals. The period was first estimated using a Lomb--Scargle periodogram and used as input into a series of Levenberg--Marquardt fits, each using a different starting guess for the eccentricity. The fit with the lowest $\chi^2$ was then kept and the errors estimated from the covariance matrix. If the final fit had an eccentricity consistent with 0, then $e_3$ was set to 0, $\omega_3$ to $\pi/2$, and the fitting was redone with circular constraints to get appropriate error estimates on the remaining parameters.
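The LTTE model above can be evaluated numerically by solving Kepler's equation for $E_3(t)$ and substituting into the delay expression. The sketch below is an illustration, not the paper's fitting code; the iteration scheme, SI constants, and the example system (a hypothetical 1 $M_\odot$ tertiary around a 1 $M_\odot$ binary on a one-year edge-on outer orbit) are assumptions.

```python
import numpy as np

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
MSUN = 1.989e30    # solar mass, kg
DAY = 86400.0      # seconds per day

def solve_kepler(M, e, n_iter=100):
    """Solve Kepler's equation E = M + e*sin(E) by fixed-point iteration."""
    E = np.asarray(M, dtype=float).copy()
    for _ in range(n_iter):
        E = M + e * np.sin(E)
    return E

def etv_ltte(t, a_ltte, p3, t0, e3, omega3):
    """LTTE delay (Eq. for ETV_LTTE above); t, p3, t0 in the same time units."""
    M = 2.0 * np.pi * (t - t0) / p3           # mean anomaly M_3(t)
    E = solve_kepler(M, e3)                    # eccentric anomaly E_3(t)
    return a_ltte * (np.sqrt(1.0 - e3 ** 2) * np.sin(E) * np.cos(omega3)
                     + (np.cos(E) - e3) * np.sin(omega3))

def amp_ltte(m3, m123, sin_i3, p3_sec):
    """A_LTTE in seconds, with masses in kg and P_3 in seconds."""
    return (G ** (1.0 / 3.0) / (C * (2.0 * np.pi) ** (2.0 / 3.0))
            * m3 * sin_i3 / m123 ** (2.0 / 3.0) * p3_sec ** (2.0 / 3.0))

# Hypothetical system: the resulting amplitude is roughly 300 seconds.
A = amp_ltte(1.0 * MSUN, 2.0 * MSUN, 1.0, 365.25 * DAY)
```

For a circular orbit ($e_3 = 0$, $\omega_3 = \pi/2$) the delay reduces to a pure cosine of the mean anomaly, which is why near-sinusoidal ETV curves are the signature sought in the candidate lists.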
This gives values and estimated errors for $P_3$, $e_3$, and $A_{LTTE}$ (Table \ref{tableresults}). A sample of these ETV signals and their respective fits can be seen in Fig.~\ref{etv_LTTE}, with fits for all candidate third-body signals available in the online version of the \emph{Kepler} EB Catalog. We can only provide estimated periods for the sample of ETV signals with less than one full cycle in the data. Even these periods can be significantly biased depending on the section of the cycle that falls within the observed baseline, and they should be treated with caution. These very long-period cases are provided separately at the end of Table \ref{tableresults}. \subsection{Physical Delay} \citet{Rapp} included physical delays in their models of 39 \emph{Kepler} binaries with possible third-body ETVs, sometimes contributing substantially to the overall model. This dynamical effect occurs when the presence of a third body changes the period of the inner binary. Fig.~\ref{phys_hist} shows, for their targets and for the targets overlapping with ours, the distribution of the ratio of the amplitude of the physical delay to the total amplitude of the ETV signal. 21 of their targets overlap with ours, but due to the short-period inner binaries, the physical delay rarely contributes significantly to these models. From their results, it seems that the LTTE dominates over the physical delay for binaries with periods less than 3 days, which covers the vast majority of our targets. \subsection{Objects with Tertiary Eclipses} For some of the binaries with LTTE signals, tertiary eclipses have also been found that confirm the presence and period of the third body, and significantly constrain the inclination of the third-body orbit. Any binary which was identified to have a possible third body due to its ETV signal and also has a detected tertiary eclipse is noted as such in Table \ref{tableresults}.
KIC 2856960, for instance, has an inner-binary period of 0.259 days with an ETV signal resulting in a LTTE fit with a period of $205.5 \pm 0.1$ days. This period is consistent with the previously determined period for the tertiary events of 204.25 days (Fig.~\ref{2856960}). This is also consistent with the LTTE period of $205 \pm 2$ days reported by \citet{Lee}, and the tertiary event period of $204.2$ days by \citet{Armstrong}. In the case of KIC 2835289 (Fig.~\ref{2835289}), we have only observed one potential tertiary event in Q9. Without at least three consecutive events, we cannot rigorously confirm that the eclipse is due to a third body as opposed to a blended eclipsing binary. However, the event appears to show eclipses of both stars in the inner binary, and the ETV signal shows a possible long-term third body orbit suggesting a period of approximately 800 days. If this proves to be a true third body, then \emph{Kepler} just missed an event before the beginning of the mission and may have observed another event in Q17, which has yet to be processed. KIC 6543674 also shows a single tertiary eclipse in Q2. A second tertiary eclipse was missed during a break in the \emph{Kepler} data, but we were able to observe an additional tertiary event from the ground, giving a third body period of $\sim 1100$ days \citep{Thackeray-Lacko}. In this case, we do not have a full orbit of the ETV signal and the LTTE model period is quite uncertain. \subsection{Objects with Depth Variations} \numberDV{} binaries that show third-body ETV signals (KIC \DVKICs{}) also show continuous changes in their eclipse depths (Fig.~\ref{DV}), which could be caused either by a change in inclination or by apsidal motion, perhaps induced by the third body. We plan to follow these up later with full photodynamical models. \subsection{Potential Fourth Body Signals} It is also possible that some of these ETVs could be composed of multiple signals.
KIC 5310387, 6144827, 8145477, 11612091, and 11825204, for example, may have both an LTTE and a quadratic component, or two LTTE signals, as is shown in the residuals in Fig.~\ref{quad}. In general, the stronger signal is fitted and noted. \section{Discussion} In this study we find a third body rate of \rateTriple{} in our sample of close binaries, nearly all of which have inner binary periods shorter than 3 days (Fig.~\ref{ptrip}). This is much lower than the third body rate of 96\% found by the previous studies mentioned. However, our identification of tertiary companions is certainly a lower limit for several reasons. First, our ability to detect a third body is very sensitive to both the inclination and the mass of the third body, such that low-mass tertiaries and/or tertiaries whose orbital planes are highly inclined relative to the inner binary orbital plane do not present detectable LTTE effects. Of our total sample of \numberShortEBs{} binaries, \numberTertiaryEclipse{} (\rateTertiaryEclipse{}) show an LTTE orbit and visible tertiary eclipses. \numberTMsectionAB{} (\rateTMsectionAB{}) have LTTE orbits with periods shorter than the span of our photometric data but do not show tertiary eclipses, suggesting that the eclipses fell in a gap in the data or the orbits are not well enough aligned to show eclipses. Thus there is evidence from these examples that in a few percent of cases we are indeed missing true third bodies because of inclination misalignment. \numberTMsectionC{} (\rateTMsectionC{}) have LTTE orbits with periods longer than the photometric baseline. In these cases we do not have well constrained periods and our chances of detecting a tertiary eclipse are slim. A second reason that our determination of the third-body occurrence is likely a lower limit is that the very close binaries that comprise our sample generally present more noise in the ETV signal, which could easily bury a weak LTTE signal.
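The sensitivity to tertiary mass and inclination can be made concrete by evaluating the $A_{LTTE}$ expression directly; the masses and period below are illustrative values, not catalog measurements:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m s^-1
MSUN = 1.989e30      # solar mass, kg
DAY = 86400.0        # seconds per day

def a_ltte_seconds(m3_msun, m123_msun, sin_i3, P3_days):
    # A_LTTE = G^(1/3) / (c (2 pi)^(2/3)) * m3 sin(i3) / m123^(2/3) * P3^(2/3)
    P3 = P3_days * DAY
    return (G ** (1.0 / 3.0) / (C * (2.0 * math.pi) ** (2.0 / 3.0))
            * m3_msun * MSUN * sin_i3
            / (m123_msun * MSUN) ** (2.0 / 3.0)
            * P3 ** (2.0 / 3.0))

# A 0.5 Msun tertiary on an edge-on 1000-day orbit around a 2 Msun binary
amp_edge_on = a_ltte_seconds(0.5, 2.5, 1.0, 1000.0)   # a few hundred seconds
amp_inclined = a_ltte_seconds(0.5, 2.5, 0.1, 1000.0)  # ten times smaller
```

Since $A_{LTTE}$ scales linearly with $m_3 \sin i_3$, a nearly face-on or low-mass tertiary produces an amplitude easily lost in the timing noise of a close binary.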
We have employed a method that minimizes false positives due to spurious ETV signals, and thus have necessarily eliminated some potentially true LTTE signals. Third, and perhaps most important, the limited timespan of the currently available \emph{Kepler} data ($\sim$1400 days) significantly restricts our ability to detect third bodies to those with orbital periods comparable to or shorter than 1400 days. Relative to the full span of tertiary separations found in previous works \citep{Tok97,Tok06,slowpokes,law}, with separations as large as $\sim$1 pc, we are at present sampling only the closest tertiary companions. Indeed, \citet{Tok06} found among tight binaries that the rate of third bodies with orbital periods less than $\sim$3 years (comparable to our limit based on the duration of the available \emph{Kepler} data) is 15\% $\pm$ 3\%. Thus our finding of a third-body occurrence rate with a period less than 1400 days of \rateTMsectionAB{} is compatible with the expected rate, though we are likely still missing a fraction of systems for the reasons already mentioned. The distribution of periods of potential third body orbits is also shown in Fig.~\ref{ptrip}. We can clearly see a falloff in detection past the current length of the \emph{Kepler} mission of $\sim$1400 days, as expected. However, for third-body periods shorter than $\sim$1400 days, for which our detectability is relatively good, the occurrence rate does appear to increase toward longer third-body periods, consistent with the period distribution of third bodies among tight binaries found by \citet{Tok06}. Furthermore, we find that the triples on the widest orbits are found around the shortest period binaries, which is consistent with models that tighten the inner binary orbit through the presence, and gradual widening, of a companion.
\section{Summary and Conclusions} We have presented our technique for computing precise eclipse timings for \numberShortEBs{} close eclipsing binaries in the \emph{Kepler} Eclipsing Binary Catalog. These precise eclipse timings are complemented by the eclipse timings to be reported by \citet{KepEBetv1} for longer period, detached EBs. For the EBs whose timings are reported here, our method has been developed specifically to deal with the challenge of constantly changing light levels arising from spots and other phenomena that distort the light curves and could cause spurious eclipse timing variation (ETV) signals. EBs with ETV signals suggesting the possible presence of a third body have been identified and have been fit with a LTTE orbit model in order to determine the likely parameters of the third bodies. In the current sample of \numberShortEBs{} close EBs, we have identified \numberTM{} that likely have tertiary companions. The parameters of these fits are also available online and are updated as new data become available. Our measured occurrence rate of \rateTMsectionAB{} for close binaries with tertiary companions with periods up to $\sim 1400$ days (limited by the current timespan of the \emph{Kepler} data) appears to be broadly consistent with the expectation that $15 \pm 3$\% of close binaries will have tertiaries with such periods \citep{Tok06}. Indeed, we already find in our data that the periods of third bodies rise among the tightest binaries, consistent with previous work that has found a very high rate of third bodies in very wide orbits around the tightest binaries, presumably the result of dynamical tightening of inner binaries through widening of the tertiary. Eclipse timings for all EBs are updated in real-time and are freely available as a community resource at \texttt{http://keplerEBs.villanova.edu}.
\section*{Acknowledgements} We thank Nathan De Lee and Phil Cargile for helpful discussions and Darin Ragozzine, Eric Ford, and Joshua Pepper for their feedback. This project is supported through the \emph{Kepler} Participating Scientist Award NSR303065. KEC and KGS acknowledge support from NASA ADAP grant NNX12AE22G. JAO and WFW gratefully acknowledge support from the NSF via grant AST-1109928, and also NASA via the \emph{Kepler} PSP grant NNX12AD23G. \emph{Kepler} was selected as the 10th mission of the Discovery Program. Funding for this mission is provided by NASA's Science Mission Directorate. \clearpage
\section{Introduction}\label{sec:intro} In difference-in-differences, identification is ensured by the assumption that the subpopulation that will be exposed to the treatment (treatment group) and the subpopulation that will not be exposed to the treatment (control group) would have developed equally in the absence of treatment.\footnote{For textbook treatments see for example \textcite{Athey_Imbens_2017}, \textcite{Lechner_2010} or \textcite{Imbens_Wooldridge_2009}.} Identification often relies on the assumption that the common trend holds conditional on covariates. Any underlying factor that shifts the potential outcomes under non-treatment differently for the treatment group and the control group needs to be controlled for. However, even if the researcher can credibly identify the factors that may lead to common trend confounding, it is still unclear in what form covariates should ultimately enter the statistical model, for several reasons.\\ Crucially, the statistical model depends on assumptions about the relation between the treatment group identifier, time and observed covariates. With cross-sectional data, covariates might be needed to account for imbalances between treatment and control group \textit{and} across time. With some notable exceptions (\cite{Lechner_2010}, \cite{Hong_2013}, \cite{Stuart_Huskamp_Duckworth_Simmons_Song_Chernew_Barry_2014}, \cite{Lu_Nie_Wager_2019}) most studies in semiparametric difference-in-differences exclude time-varying treatment group compositions and covariates (e.g., \cite{Heckman_Ichimura_Todd_1997}, \cite{Abadie_2005}, \cite{SantAnna_Zhao_2020}, \cite{Chang_2020}). This paper investigates semiparametric difference-in-differences models under various assumptions on how covariates, time and treatment group composition are related. Efficient influence functions are derived under more or less restrictive assumptions and for different sampling schemes.
We present various identification and efficiency results for low-dimensional semiparametric difference-in-differences models. Our results are sensitive to assumptions about how covariates enter the model. In particular, our results hint at a trade-off between the strength of the assumptions the researcher is willing to impose on the model and the efficiency bound that can be achieved under such assumptions. Further, our results suggest that there are cases where we might want to include covariates even if they are not needed for identification, as they could increase the precision of some of the estimators. A comparison of the efficiency bounds for cross-section and panel data allows us to draw interesting conclusions about the efficiency loss when panel data is not available. We therefore contribute to the literature on semiparametric efficiency in causal inference settings (e.g., \cite{Hahn_1998}, \cite{Firpo_2007}, \cite{Froelich_2007}, \cite{Chen_Hong_Tarozzi_2008}, \cite{Cattaneo_2010}, \cite{Graham_Pinto_Egel_2016}, \cite{Lee_2018}). Such an analysis is typically based on the approach developed by \textcite{Newey_1990,Newey_1994} and \textcite{Bickel_Klaassen_Ritov_Wellner_1993}. \textcite{Chamberlain_1987,Chamberlain_1992} contributes an alternative approach based on moment conditions. \textcite{Graham_2011} establishes an equivalence result between the moment condition based approach and the approach of \textcite{Bickel_Klaassen_Ritov_Wellner_1993} for the general missing data problem. In parallel work \textcite{SantAnna_Zhao_2020} also consider efficiency theory for semiparametric difference-in-differences problems to derive efficient score functions. Their results crucially rely on a relatively strong stationarity assumption and are included in this paper as a special case.
We also note that a previous version of the present paper was the first to propose efficiency bounds for the semiparametric difference-in-differences problem using \posscite{Graham_2011} equivalence result. It turns out that for the panel case the moment conditions exhaust all the information in the identifying assumptions while for the cross-sectional case they do not.\footnote{See \textcite{Zimmert_2018} on this. We do not follow \posscite{Graham_2011} approach in this version. However, we consider the insufficiency of the first and second stage moment conditions to exhaust all information necessary to derive the efficiency bound for the cross-sectional difference-in-differences case an interesting topic for further research.}\\ The efficient influence functions derived imply plug-in estimators that allow us to combine semiparametric difference-in-differences models with very flexible first stage estimators. This is important because there might be many different covariates that are supposed to measure the same economic channel for common trend confounding and it might be unclear in what functional form the covariates should be included in the model. These issues might be especially prevalent in difference-in-differences models. Often covariates like geographic or industry classifications are available at different levels of aggregation -- making covariate selection an even more tedious task. For standard parametric models usually used for difference-in-differences estimation (e.g., \cite{Card_1990}, \cite{Card_Krueger_1994}, \cite{Eissa_Liebman_1996}) or semiparametric models with parametric or nonparametric first stages (e.g., \cite{Abadie_2005}, \cite{SantAnna_Zhao_2020}) a high-dimensional covariate space will cause the estimator to break down. Advances in the supervised machine-learning literature\footnote{For an overview see e.g.
\textcite{Hastie_Tibshirani_Friedman_2009}.} showed an immense potential to approach this problem by choosing a data-driven trade-off between the covariate dimension and the sample size at hand and were successfully integrated into causal inference settings (e.g., \cite{Belloni_Chen_Chernozhukov_Hansen_2012}, \cite{Zhang_Zhang_2014}, \cite{vandeGeer_Buehlmann_Ritov_Dezeure_2014}, \cite{Belloni_Chernozhukov_Hansen_2014}, \textcite{Athey_Imbens_Wager_2018}). This paper builds on the generic 'double machine-learning' framework developed in \textcite{Chernozhukov_Chetverikov_Demirer_Duflo_Hansen_Newey_2017}. A major insight from this work is that 'single'-robust estimators based on the treatment mechanism (e.g., \cite{Horvitz_Thompson_1952}, \cite{Hirano_Imbens_Ridder_2003}, \cite{Hahn_Ridder_2013}) or outcome based models (e.g., \cite{Hahn_1998}) are inappropriate with machine-learning generated first stages while 'double'-robust estimators (\cite{Robins_Rotnitzky_Zhao_1994}, \cite{Scharfstein_Rotnitzky_Robins_1999}) maintain good statistical properties. \textcite{Chernozhukov_Chetverikov_Demirer_Duflo_Hansen_Newey_2017} show that the (rate) double robustness properties can be used such that first stage nuisance parameter estimation has no effect under relatively weak convergence conditions on the first stage parameters. We modify and extend the double machine-learning framework in this paper. In contrast to \textcite{Chernozhukov_Chetverikov_Demirer_Duflo_Hansen_Newey_2017}, this paper does not rely on high-level conditions based on Gateaux differentiation to verify the rate double robustness properties. Instead, we provide easy-to-check conditions that cover a broad range of scores typically used in the causal inference literature. Focusing on a specific class of widely used score functions allows us to derive generalizable convergence condition requirements.
Crucially, this substantially reduces the computational burden when deriving the asymptotic properties of an estimator and requires fewer regularity conditions such as the existence of the derivative or the interchangeability of the derivative and the expectation operator. Further, some of the derived efficient influence functions imply a new class of plug-in estimators whose convergence conditions depend on the existence of higher-order moments of the outcome. These results do not trivially follow from existing theory (e.g., \cite[Section 5]{Chernozhukov_Chetverikov_Demirer_Duflo_Hansen_Newey_2017}). Our theoretical results on double machine-learning are therefore also useful beyond the scope of difference-in-differences estimation. Additionally, they allow us to derive first stage convergence conditions for different semiparametric difference-in-differences estimators. This enables us to incorporate sophisticated supervised machine-learning algorithms that can cope with settings where the dimension of the covariate space is high. Plug-in estimators that follow from the derived efficient influence functions are shown to achieve the low-dimensional variance lower bound. Our results also indicate that for some cases there is a trade-off between estimation robustness and efficiency. While this paper evolved, other related but independent work on semiparametric difference-in-differences estimation with machine-learning appeared. \textcite{Chang_2020} also considers difference-in-differences estimation under the strong stationarity assumption by directly applying the double machine-learning results of \textcite{Chernozhukov_Chetverikov_Demirer_Duflo_Hansen_Newey_2017}. His estimator generally does not attain the semiparametric efficiency bound.
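The cross-fitting logic at the heart of double machine-learning can be sketched generically. The target below (a cross-fitted residual variance with an OLS first stage) is deliberately simple and purely illustrative; the point is the fold mechanics, into which more elaborate scores plug in the same way. All names are our own:

```python
import numpy as np

def cross_fit(data, fit_nuisance, score, K=5, seed=0):
    # Generic K-fold cross-fitting: nuisance parameters are estimated on
    # each fold's complement and the score is evaluated out-of-fold
    n = len(next(iter(data.values())))
    folds = np.array_split(np.random.default_rng(seed).permutation(n), K)
    vals = []
    for k in range(K):
        train = np.concatenate([f for j, f in enumerate(folds) if j != k])
        eta = fit_nuisance({v: a[train] for v, a in data.items()})
        vals.append(score({v: a[folds[k]] for v, a in data.items()}, eta))
    return float(np.mean(np.concatenate(vals)))

def fit_ols(d):
    # First stage: OLS of Y on (1, X)
    A = np.column_stack([np.ones(len(d["X"])), d["X"]])
    return np.linalg.lstsq(A, d["Y"], rcond=None)[0]

def score_resid_var(d, beta):
    # Per-observation score whose mean is the residual variance
    return (d["Y"] - beta[0] - beta[1] * d["X"]) ** 2

rng = np.random.default_rng(1)
X = rng.normal(size=20000)
Y = 1.0 + 2.0 * X + rng.normal(scale=0.5, size=20000)
sigma2_hat = cross_fit({"X": X, "Y": Y}, fit_ols, score_resid_var)
```

Because the score is always evaluated on observations not used to fit the nuisance, overfitting in the first stage does not bias the averaged moment, which is the key mechanism exploited throughout.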
\textcite{Lu_Nie_Wager_2019} propose estimators that are robust against the distortion of the stationarity assumption but focus on another parameter, consider alternative estimation methods and do not derive efficiency results.\\ Finally, we apply the methods to a well-known application and investigate whether the efficiency-robustness trade-offs matter in practice and the value added when using machine-learning methods instead of standard parametric estimation methods.\\ The following section introduces the setting and required notation. The development of semiparametric theory for the difference-in-differences problem will be the starting point of our analysis in Section \ref{sec:ideff}. Section \ref{sec:estinf} presents our results on estimation and inference. To assess the usability of the proposed methods, they are applied to real world data in Section \ref{sec:app}. The last section concludes. Most technical proofs are relegated to the Appendix. \section{Setting and notation}\label{sec:setnot} Random variables like $A$ are denoted by capital letters. They have realizations $A=a$ in the support $\mathcal{A}$ of the random variable. $A=a$ has density $f_A(a)$. If $A$ is discrete we write $Pr(A=a)=f_A(a)$ as a shorthand. The cumulative distribution function is given by $F_A(a)$. Let $B$ be another random variable. Then independence between two random variables $A$ and $B$ is denoted by $A\perp B$. The expectation operator is defined by $\mathbb{E}$ and $\text{Var}$ is used as a shorthand for the variance. For a generic function $g=g(B)=g_{A}(B)$ we use $A$ in the subscript to remind us of the mapping $g: b\mapsto a$. The $L_p$ norm is denoted by $\left\lVert g(B)\right\rVert_p$. As a special case we use $\sup_{b\in\mathcal{B}}\lvert g(b)\rvert$ and $\lVert g(B)\rVert_{\infty}$ interchangeably to denote the supremum of the function. The infimum is $\inf_{b\in\mathcal{B}}\lvert g(b)\rvert$.
Let $\beta$ be some parameter then we denote $\dot{g}(B,\beta)=\frac{\partial g(B,\beta)}{\partial\beta}$. Generically, $C>0$ denotes a constant.\\ Let $D$, $T$ and $G_{\tau}$ be binary indicator variables such that $d,t,g_{\tau}\in\{0,1\}$ where $\tau\in\mathcal{T}$ and either $\mathcal{T}=\{(d,t)\}$ or $\mathcal{T}=\{d\}$. In particular, $D=1$ for observations that belong to the treatment group, $T=1$ for observations that are observed in period 1 and $G_{d,t}=1$ if $D=d$ and $T=t$ and 0 otherwise and $G_d=1$ if $D=d$ and 0 otherwise.\footnote{Obviously for the latter case $G_d=D$. We introduce this notation to formulate results as general as possible throughout our exposition.} Denote the outcome variable by $Y$ and some further observed variables by $X$. We follow the established literature (e.g., \cite{Roy_1951}, \cite{Rubin_1974}) and let $Y^d(t)$ be the potential outcome variable that contains the potentially unobserved realizations of $Y$ for the state $D=d$ and $T=t$. The exposition additionally relies on the definition of some conditional expectations. In particular, we have $m_Y(d,t,x)=\mathbb{E}\left[Y|D=d,T=t,X=x\right]$ with $m_Y(x)=\sum_{d=0}^1\sum_{t=0}^1(-1)^{d+t}m_Y(d,t,x)$. Similarly, for $\Delta Y=Y(1)-Y(0)$ we have $m_{\Delta Y}(d,x)=\mathbb{E}\left[Y(1)-Y(0)|D=d,X=x\right]$ with $m_{\Delta Y}(x)=m_{\Delta Y}(1,x)-m_{\Delta Y}(0,x)$. Additionally, suppose that $A$ is a binary variable. Then we generically define the probabilities $p_{A=a}(b)=Pr(A=a|B=b)$, $p_{A}(b)=Pr(A=1|B=b)$, $p_{A=a}=Pr(A=a)$ and $p_{A}=Pr(A=1)$. For example we have $p_{D=d,T=t}(x)=Pr(D=d,T=t|X=x)$ and $p_{DT}=Pr(D=1,T=1)$. Note that the definition of $G_{\tau}$ allows us to flexibly write for example $m_Y(d,t,x)=m_Y(G_{d,t}=1,x)$ and similarly for the other parameters.\\ In difference-in-differences settings the researcher is generally interested in identifying the parameter $\theta=\mathbb{E}\left[Y^1(1)-Y^0(1)|D=1,T=1\right]$. 
It can be described as an average treatment effect on the treated (ATET) because the parameter is defined for those who actually receive the treatment ($D=1$, $T=1$). An average population effect cannot be identified because this would require a subpopulation for which the treatment vanishes between period $T=0$ and $T=1$ (for a discussion on this see \cite{Lechner_2010}). We also note that under a strong stationarity condition or when panel data is available \textcite{Abadie_2005} shows that $\mathbb{E}\left[Y^1(1)-Y^0(1)|D=1\right]$ is identified. Notice that, without further assumptions, this parameter is not an ATET but an average treatment effect for the treatment group. Intuitively, panel data or a stationarity assumption ensures that the composition of the treatment group does not depend on $T$ and so $\mathbb{E}\left[Y^1(1)-Y^0(1)|D=1,T=0\right]=\mathbb{E}\left[Y^1(1)-Y^0(1)|D=1,T=1\right]$. As the ATET is identified under all assumptions made in this paper, we focus on this parameter but indicate whenever the treatment group effect equals the ATET. \section{Identification and efficiency bounds}\label{sec:ideff} \subsection{Repeated cross-sections} \begin{assumption}[Data-generating process CS]\label{ass:dgpcs} Let $W(t)=(Y(t),D(t),T=t,X(t))$. (i) The i.i.d. sample of two repeated cross-sections with $W=\{W(0),W(1)\}=(Y,D,T,X)$ with observations $i=1,...,N$ is observed; (ii) The joint distribution $F_{W(0),W(1)}(w(0),w(1))=F_W(w)$ exists. \end{assumption} Assumption \ref{ass:dgpcs} describes the data-generating process (DGP) for the repeated cross-sections. It guarantees that we can use the pseudo-sample $W_i$ with observations $i=1,...,N$ and emphasizes that we have to cope with a merged sample problem where the sample sizes of $W(0)$ and $W(1)$, $N(0)$ and $N(1)$, obey $\frac{N(0)}{N(1)}\rightarrow C$ (\cite{Abadie_Imbens_2006}, \cite{Graham_Pinto_Egel_2016}).
Also notice that the existence of the joint distribution implies that the dimension of $X$ is fixed. We will relax this condition in Section \ref{sec:estinf}.\\ In what follows we distinguish five settings (CS-1) to (CS-5) that describe different assumption sets on the relation between $D$, $T$ and $X$. In (CS-1) we do not make any further assumptions. It is the fully robust setting. Settings (CS-2)-(CS-5) are comprised in Assumption \ref{ass:reldtxcs}. \begin{assumption}[Relation of $D$, $T$ and $X$]\label{ass:reldtxcs} The variables $D$, $T$ and $X$ are assumed to be related in the following ways. (CS-2) Conditional independence of $D$ and $T$, $D\perp T|X=x$ for every $x\in\mathcal{X}$; (CS-3) Independence of $X$ and $T$, $X\perp T$; (CS-4) Joint independence of $D$ and $X$ from $T$, $(D,X)\perp T$; (CS-5) Mutual independence of $D$, $T$ and $X$, $(D\perp T\perp X)$. \end{assumption} (CS-2) allows for time varying $D$ and $X$ but requires that all time variation in $D$ is fully captured by $X$. (CS-3) does not allow for time-varying $X$ but $D$ may still follow a time trend. (CS-4) implies the strong stationarity assumption used by \textcite{Abadie_2005}, \textcite{SantAnna_Zhao_2020} and \textcite{Chang_2020}. It excludes time variation in $D$ and $X$. (CS-5) is the `experimental' setting. Even though the outcome might depend on $X$, there are no imbalances, either between treatment and control group or across time.\\ Since Assumption \ref{ass:reldtxcs} only contains conditions on the relation of observed random variables, it is in principle testable. In particular, a significant correlation between $D$ and $T$ rules out settings (CS-4) and (CS-5). Also notice that (CS-2) and (CS-3) are mutually exclusive and that the restrictiveness of the assumptions can be ordered as (CS-5), (CS-4), (CS-3)/(CS-2), (CS-1).
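The testability remark can be made concrete: a Pearson chi-square statistic on the $2\times 2$ contingency table of $D$ and $T$ checks their unconditional independence, and a rejection rules out (CS-4) and (CS-5). A self-contained sketch (our own minimal implementation):

```python
def chi2_2x2(n11, n10, n01, n00):
    # Pearson chi-square statistic for independence of two binary
    # variables from their 2x2 contingency table; compare against the
    # chi-square distribution with 1 degree of freedom
    n = n11 + n10 + n01 + n00
    row = (n11 + n10, n01 + n00)
    col = (n11 + n01, n10 + n00)
    cells = ((n11, 0, 0), (n10, 0, 1), (n01, 1, 0), (n00, 1, 1))
    return sum((obs - row[r] * col[c] / n) ** 2 / (row[r] * col[c] / n)
               for obs, r, c in cells)
```

A perfectly balanced table yields a statistic of zero, while strong association between $D$ and $T$ (e.g. a treatment group that grows over time) produces a large statistic, which speaks against the stationarity settings.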
\subsubsection{Identification} \begin{assumption}[Identification CS]\label{ass:idcs} For any $d,t\in\{0,1\}$ and $x\in\mathcal{X}$, \begin{itemize} \item[(i)] (Observational Rule) For each observation $i$, the outcome $Y_i=\sum_{d}\sum_{t}G_{{d,t}_i}Y_i^d(t)$ is observed; \item[(ii)] (Common Support) The propensity score $p_{D=d,T=t}(x)$ is bounded away from zero; \item[(iii)] (No Anticipation) $\mathbb{E}\left[Y^1(0)-Y^0(0)|D=1,T=0,X=x\right]=0$; \item[(iv)] (Conditional Common Trends) \begin{align*} &\mathbb{E}\left[Y^0(1)|D=0,T=1,X=x\right]-\mathbb{E}\left[Y^0(0)|D=0,T=0,X=x\right]\\ &=\mathbb{E}\left[Y^0(1)|D=1,T=1,X=x\right]-\mathbb{E}\left[Y^0(0)|D=1,T=0,X=x\right]. \end{align*} \end{itemize} \end{assumption} Assumption \ref{ass:idcs} yields an identification result for $\theta$ with cross-sectional data. The Observational Rule underscores that we only observe $Y$ and not $Y(0)$ and $Y(1)$ for every observation. It rules out that observations in the treatment group can be part of the control group or that observations in period $T=0$ are again observed in $T=1$. Notice that since we consider a pseudo-sample, this does not rule out that individuals from the actual population are re-sampled in the second cross-section. Common Support is necessary to guarantee the existence of conditional expectations. Since we are only interested in the ATET, the propensity score only needs to be bounded away from zero for identification. The No Anticipation condition rules out an effect of the treatment in period $T=0$ for the treatment group. The Conditional Common Trends condition requires that conditional on the covariates the treatment and the control group would have developed equally in the absence of the treatment. To allow for a more compact representation of the results, some further notation is introduced.
Let $q_{CS;D=d,T=t}(X)$ denote the conditional probability function $p_{D=d,T=t}(X)$ under some of the specific assumptions on the relation of $D$, $T$ and $X$ in (CS-1) to (CS-5). For example we have $q_{CS-2;D=1,T=1}(X)=p_D(X)p_T(X)$. Equivalently, denote by $q_{CS;DT}$ the unconditional probability $p_{DT}$ under some specific assumption (CS-1) to (CS-5). \begin{lemma}\label{lm:idcs} Under Assumptions \ref{ass:dgpcs} and \ref{ass:idcs} the parameter $\theta=\mathbb{E}\left[Y^1(1)-Y^0(1)|D=1,T=1\right]$ is identified as $\mathbb{E}\left[m_Y(X)\frac{q_{CS;D=1,T=1}(X)}{q_{CS;DT}}\right]$. \end{lemma} \textit{Proof:} Notice that \begin{align*} \mathbb{E}\left[Y^1(1)-Y^0(1)|D=1,T=1,X=x\right]&=m_Y(1,1,x)-\mathbb{E}\left[Y^0(0)|D=1,T=0,X=x\right]\\ &-\mathbb{E}\left[Y^0(1)|D=0,T=1,X=x\right]+\mathbb{E}\left[Y^0(0)|D=0,T=0,X=x\right]\\ &=m_Y(1,1,x)-\mathbb{E}\left[Y^1(0)|D=1,T=0,X=x\right]\\ &-m_Y(0,1,x)+m_Y(0,0,x)\\ &=m_Y(1,1,x)-m_Y(1,0,x)-m_Y(0,1,x)+m_Y(0,0,x)=m_Y(x), \end{align*} where the equalities use the Conditional Common Trends condition, the No Anticipation condition and the Observational Rule of Assumption \ref{ass:idcs}. Further, for the conditional density function $f_{X|D=d,T=t}(x|d,t)$ \begin{align*} \theta=\int m_Y(x)f_{X|D=1,T=1}(x|1,1)dx=\int m_Y(x)\frac{p_{D=1,T=1}(x)}{p_{DT}}f_X(x)dx=\int m_Y(x)\frac{q_{CS;D=1,T=1}(x)}{q_{CS;DT}}f_X(x)dx. \end{align*} For cases (CS-4) and (CS-5) the treatment group effect is identified. \subsubsection{Semiparametric efficiency bounds} \begin{theorem}[Semiparametric efficiency bounds CS]\label{thm:effcs} Suppose that Assumptions \ref{ass:dgpcs} and \ref{ass:idcs} hold.
Then under each of the settings in (CS-1)-(CS-5) the efficient influence function is given by \begin{align*} \psi^{*}_{CS}(W;\theta)=\frac{q_{CS;D=1,T=1}(X)}{q_{CS;DT}}\psi^{*a}_{CS}(W)+\psi^{*b}_{CS}(W)\left(m_Y(X)-\theta\right) \end{align*} where \begin{align*} \psi^{*a}_{CS}(W)&=\sum_{d=0}^1\sum_{t=0}^1(-1)^{(d+t)}\frac{G_{d,t}}{q_{CS;D=d,T=t}(X)}\left(Y-m_Y(d,t,X)\right) \end{align*} and $\psi^{*b}_{CS-1}(W)=\frac{DT}{p_{DT}}$, $\psi^{*b}_{CS-2}(W)=\frac{Dp_T(X)+p_D(X)T-p_D(X)p_T(X)}{p_{DT}}$, $\psi^{*b}_{CS-3}(W)=\frac{T(D-p_D(1,X))+p_D(1,X)p_T}{p_{D}(1)p_T}$, $\psi^{*b}_{CS-4}(W)=\frac{D}{p_D}$ and $\psi^{*b}_{CS-5}(W)=1$. The semiparametric efficiency bound for settings (CS-1) to (CS-5) is $\mathbb{E}\left[\psi^{*}_{CS}(W;\theta)^2\right]$. \end{theorem} \textit{Proof:} see Appendix \ref{app:effcs}.\\ Theorem \ref{thm:effcs} is our first main result. Under each of the settings (CS-1)-(CS-5) an influence function with a different adjustment term $\psi^{*b}_{CS}(W)$ is derived implying different efficiency bounds for the different settings. An important implication is summarized in Corollary \ref{cor:releffcs1}. \begin{corollary}\label{cor:releffcs1} In terms of the asymptotic variance bound, the value of knowing that one of the assumed relations of $D$, $T$ and $X$ in (CS-2) to (CS-5) holds relative to (CS-1) is given by \begin{align*} \Delta_{CS-1,CS-2}&=\mathbb{E}\left[\frac{p_D(X)^2p_T(X)^2}{p_{DT}^2}\left(m_Y(X)-\theta\right)^2\left(\frac{1}{p_D(X)p_T(X)}-\frac{1}{p_D(X)}-\frac{1}{p_T(X)}+1\right)\right]\\ \Delta_{CS-1,CS-3}&=\mathbb{E}\left[\frac{p_D(1,X)^2}{p_{D}(1)^2}\left(m_Y(X)-\theta\right)^2\left(\frac{1}{p_T}-1\right)\right]\\ \Delta_{CS-1,CS-4}&=\mathbb{E}\left[\frac{p_D(X)^2}{p_{D}^2}\left(m_Y(X)-\theta\right)^2\frac{1}{p_D(X)}\left(\frac{1}{p_T}-1\right)\right]\\ \Delta_{CS-1,CS-5}&=\mathbb{E}\left[\left(m_Y(X)-\theta\right)^2\left(\frac{1}{p_Dp_T}-1\right)\right]. 
\end{align*} \end{corollary} \textit{Proof:} see Appendix \ref{app:releffcs1}.\\ Unambiguously, $\Delta_{CS-1,CS-2},\Delta_{CS-1,CS-3},\Delta_{CS-1,CS-4},\Delta_{CS-1,CS-5}>0$. Hence, knowing that one of the assumptions (CS-2)-(CS-5) is true results in lower efficiency bounds compared to not making any assumptions (CS-1) about the relation between $D$, $T$ and $X$. Since (CS-2) and (CS-3) contain (CS-4) and (CS-4) contains (CS-5) similar results can be shown for these cases. To conclude, more restrictive assumptions about the relation between $D$, $T$ and $X$ imply lower efficiency bounds. This hints at a trade-off between the robustness with respect to the model assumptions imposed and the variance lower bound that is asymptotically achievable. \subsection{Panel data} \begin{assumption}[Data-generating process PA]\label{ass:dgppa} (i) The i.i.d. sample of a two-period panel with $W=(Y(0),Y(1),D,X)$ and observations $i=1,...,N$ is observed; (ii) The distribution $F_{W}(w)$ exists. \end{assumption} Assumption \ref{ass:dgppa} describes the DGP when panel data is available. In contrast to Assumption \ref{ass:dgpcs}, the sample is not merged and contains $i=1,...,N$ unique individuals. Also, both outcomes $Y(0)$ and $Y(1)$ are directly observed for each individual and thus do not need to be inferred.\\ For panel data we distinguish two settings (PA-1) and (PA-2) that describe different assumption sets on the relation between $D$ and $X$. Since we do not need a time indicator to describe the sample, the analysis is limited to the relation between $D$ and $X$. In (PA-1) we do not make any further assumptions. It is the fully robust setting. \begin{assumption}[Relation of $D$ and $X$]\label{ass:reldxpa} The variables $D$ and $X$ are assumed to be independent, $D\perp X$ (PA-2). \end{assumption} (PA-2) is more restrictive in the sense that imbalances between the treatment and the control group are ruled out.
\subsubsection{Identification} \begin{assumption}[Identification PA]\label{ass:idpa} For any $d,t\in\{0,1\}$ and $x\in\mathcal{X}$, \begin{itemize} \item[(i)] (Observational Rule) For each observation $i$, the outcomes\\ $Y_i(t)=D_iY_i^1(t)+(1-D_i)Y_i^0(t)$ are observed; \item[(ii)] (Common Support) The propensity score $p_{D}(x)$ is bounded away from zero; \item[(iii)] (No Anticipation) $\mathbb{E}\left[Y^1(0)-Y^0(0)|D=1,X=x\right]=0$; \item[(iv)] (Conditional Common Trends) \begin{align*} \mathbb{E}\left[Y^0(1)-Y^0(0)|D=0,X=x\right]=\mathbb{E}\left[Y^0(1)-Y^0(0)|D=1,X=x\right]. \end{align*} \end{itemize} \end{assumption} Similarly to Assumption \ref{ass:idcs}, Assumption \ref{ass:idpa} ensures the identification of $\theta$ when panel data is available. The Observational Rule guarantees that for both outcomes $Y(0)$ and $Y(1)$ an observation cannot be part of the treatment and the control group at the same time. The other assumptions are adapted in a straightforward manner from the cross-sectional setting.\\ Again we let $q_{PA;D=d}(X)$ denote the conditional probability function $p_{D=d}(X)$ under either (PA-1) or (PA-2). \begin{lemma}\label{lm:idpa} Under Assumptions \ref{ass:dgppa} and \ref{ass:idpa} the parameter $\theta=\mathbb{E}\left[Y^1(1)-Y^0(1)|D=1\right]$ is identified as $\mathbb{E}\left[m_{\Delta Y}(X)\frac{q_{PA;D}(X)}{p_{D}}\right]$. \end{lemma} \textit{Proof:} The proof follows similarly to Lemma \ref{lm:idcs}.\\ In the panel case, the treatment group effect is identified. \subsubsection{Efficiency bounds} \begin{theorem}[Semiparametric efficiency bounds PA]\label{thm:effpa} Suppose that Assumptions \ref{ass:dgppa} and \ref{ass:idpa} hold.
Then under each of the settings (PA-1) and (PA-2) the efficient influence function is given by \begin{align*} \psi^{*}_{PA}(W;\theta)=\frac{q_{PA;D=1}(X)}{p_D}\psi^{*a}_{PA}(W)+\psi^{*b}_{PA}(W)\left(m_{\Delta Y}(X)-\theta\right) \end{align*} where \begin{align*} \psi^{*a}_{PA}(W)&=\frac{D}{q_{PA;D=1}(X)}\left(Y(1)-Y(0)-m_{\Delta Y}(1,X)\right)-\frac{1-D}{q_{PA;D=0}(X)}\left(Y(1)-Y(0)-m_{\Delta Y}(0,X)\right) \end{align*} and $\psi^{*b}_{PA-1}(W)=\frac{D}{p_{D}}$ and $\psi^{*b}_{PA-2}(W)=1$. The semiparametric efficiency bound for settings (PA-1) and (PA-2) is $\mathbb{E}\left[\psi^{*}_{PA}(W;\theta)^2\right]$. \end{theorem} \textit{Proof:} see Appendix \ref{app:effpa}.\\ Theorem \ref{thm:effpa} is our second main result. Under (PA-1) and (PA-2) an influence function with a different adjustment term $\psi^{*b}_{PA}(W)$ is derived implying different efficiency bounds for (PA-1) and (PA-2). An important implication is summarized in Corollary \ref{cor:releffpa1}. \begin{corollary}\label{cor:releffpa1} In terms of the asymptotic variance bound, the value of knowing that $D\perp X$ is given by \begin{align*} \Delta_{PA-1,PA-2}=\mathbb{E}\left[\left(m_{\Delta Y}(X)-\theta\right)^2\left(\frac{1}{p_D}-1\right)\right] \end{align*} \end{corollary} \textit{Proof:} The proof follows similarly to Corollary \ref{cor:releffcs1}.\\ Again, since $\Delta_{PA-1,PA-2}>0$, the more restrictive assumption (PA-2) is associated with a lower efficiency bound. Hence, the robustness-efficiency trade-off also materializes for panel data.\\ Having derived results for both cross-sectional and panel data, it might be of interest to compare the variance lower bounds under both sampling schemes. Corollary \ref{cor:releffcspa} summarizes the implications of Theorems \ref{thm:effcs} and \ref{thm:effpa} for the relative efficiency between panel and cross-sectional data. 
\begin{corollary}\label{cor:releffcspa} In terms of the asymptotic variance bound, the value of knowing the panel structure under no further assumptions is given by \begin{align*} \Delta_{CS-1,PA-1}=\mathbb{E}\left[\frac{p_D(X)^2}{p_D^2}\left(\sum_{d=0}^1\frac{\text{Var}(Y(1)+Y(0)|D=d,X)}{p_{D=d}(X)}+\frac{(m_{\Delta Y}(X)-\theta)^2}{p_D(X)}\right)\right]. \end{align*} The minimum value of knowing the panel structure is given by \begin{align*} \Delta_{CS-5,PA-1}=\mathbb{E}\left[\sum_{d=0}^1\frac{\text{Var}(Y(1)+Y(0)|D=d,X)}{p_{D=d}}+(m_{\Delta Y}(X)-\theta)^2\left(1-\frac{1}{p_D}\right)\right]. \end{align*} \end{corollary} \textit{Proof:} see Appendix \ref{app:releffcspa}.\\ Since $\Delta_{CS-1,PA-1}>0$, the first result of the corollary shows that, under no further condition on the relation between $D$, $T$ and $X$, knowing the panel structure generally reduces the variance lower bound that is asymptotically achievable under comparable assumptions. This is an intuitive result because the information in a panel is unambiguously richer. $\Delta_{CS-1,PA-1}$ can therefore also be seen as the gain from the potentially more costly panel sampling scheme. Since the difference $\Delta Y$ is observed with panel data, the variance lower bound only contains propensity score reweighted conditional variances. For cross-sectional data the conditional variances are additionally reweighted by $p_{T=t}$, resulting in an efficiency loss. Further, notice that the gain from observing the panel $\Delta_{CS-1,PA-1}$ is higher (lower) when the correlation between $Y(0)$ and $Y(1)$ is positive (negative). This can be explained by the fact that the variance of $\Delta Y$ is lowest when $Y(0)$ and $Y(1)$ are positively correlated, since $\text{Var}(\Delta Y)=\text{Var}(Y(1))+\text{Var}(Y(0))-2\,\text{Cov}(Y(0),Y(1))$. Hence, the panel becomes more valuable when the observed difference $\Delta Y$ is less volatile. The second result of the corollary assesses the value of making assumptions relative to having access to panel versus cross-sectional data.
It is hypothesized that the researcher knows that (CS-5) is correct but, with panel data available, uses the more robust than necessary setting (PA-1). From Theorem \ref{thm:effcs} and Corollary \ref{cor:releffcs1} we know that among all settings (CS-1)-(CS-5) the minimum variance lower bound with cross-sectional data is achieved when making assumption (CS-5). From Theorem \ref{thm:effpa} and Corollary \ref{cor:releffpa1} we know that among (PA-1) and (PA-2) the maximum variance lower bound with panel data is achieved when making assumption (PA-1). The difference between these settings when panel data becomes available, $\Delta_{CS-5,PA-1}$, thus measures the value of making assumptions relative to having access to better data. Notice that $\Delta_{CS-5,PA-1}\lesseqgtr 0$. Hence, having access to panel data does not necessarily lead to lower efficiency bounds. Rather, the corollary shows the importance of making adequate assumptions on the relation between $D$, ($T$) and $X$. \section{Estimation and Inference}\label{sec:estinf} \subsection{High-dimensional data and important building blocks} Denote by $\eta$ a set of nuisance parameters that are generally unknown and have to be estimated in a first stage. From Section \ref{sec:ideff}, $\eta$ consists of functions of $X$. By Assumptions \ref{ass:dgpcs} and \ref{ass:dgppa} the results of Section \ref{sec:ideff} only hold when the dimension of the covariates $\lambda_X$ is fixed. For estimation and inference we relax this condition and assume that $X\in\mathbb{R}^{\lambda_X}$ and potentially $\lambda_X\rightarrow\infty$ when $N\rightarrow\infty$. Conditions in the next subsection will describe the concrete growth rates of $\lambda_X$ in relation to $N$.\\ Let $\psi$ be a function of the observed variables $W$ and the nuisance parameters. $W$ contains the generic outcome variable $\tilde{Y}$ and $\eta$ contains some projection on $\tilde{Y}$ denoted by $m_{\tilde{Y}}(\cdot)$.
In contrast to Section \ref{sec:ideff}, we now explicitly indicate that the function $\psi$ depends on $\eta$ and consider $\psi$ to be of the general form \begin{align*} \psi(W,\eta;\theta)=\psi(W,\eta)-\psi^b(W,\eta)\theta=\frac{q_{1}(X)}{q_1}\psi^a(W,\eta)+\psi^b(W,\eta)(m_{\tilde{Y}}(X)-\theta). \end{align*} In particular, the function $\psi(W,\eta)$ is a sum over the index $\tau$ of terms of the form \begin{align*} \frac{q_1(X)}{q_1}\frac{G_{\tau}}{q_{G_\tau}(X)}\left(\tilde{Y}-m_{\tilde{Y}}(G_{\tau}=1,X)\right)+\psi^b(W,\eta)m_{\tilde{Y}}\left(G_{\tau}=1,X\right) \end{align*} where $q_{G_{\tau}}(x)=Pr(G_{\tau}=1|X=x)$ under some specific assumption on the relation between $G_{\tau}$ and $X$. $q_1(X)$ and $q_1$ are shorthand symbols for $q_{G_{\tau}}(X)$ and $q_{G_{\tau}}$ with $\tau=(1,1)$ in the cross-sectional and $\tau=1$ in the panel case. We assume that $\mathbb{E}\left[\psi(W,\eta;\theta)\right]=0$ such that $\theta$ is identified as \begin{align*} \theta=\frac{\mathbb{E}\left[\frac{q_1(X)}{q_1}\psi^a(W,\eta)+\psi^b(W,\eta)m_{\tilde{Y}}(X)\right]}{\mathbb{E}\left[\psi^b(W,\eta)\right]}. \end{align*} A plug-in estimator uses the ratio of sample averages with estimated nuisance parameters. Following the suggestions of \textcite{Chernozhukov_Chetverikov_Demirer_Duflo_Hansen_Newey_2017} we apply a cross-fitting algorithm for the nuisance parameter estimation step in order to guarantee that the nuisance estimators are independent of the observations used to evaluate $\psi(W,\eta)$ and $\psi^b(W,\eta)$. The details of the estimation strategy are outlined in the algorithm below. \begin{figure}[h] \centering \onehalfspacing \fbox{\begin{minipage}{0.95\textwidth} \onehalfspacing \textbf{Cross-fitting algorithm:}\\~\\ Suppose that the set of random variables $W$ can be indexed by $i$ such that the sample is described by $W_{i}$ for $i=1,...,N$. Randomly split the sample into $K$ equal subsamples of size $n=\frac{N}{K}$.
For each of the subsamples with index $k=1,...,K$ define the set of sample indices in subsample $k$ by $\mathcal{I}^k$ and the set of sample indices not in $k$ by $\mathcal{I}^{-k}$. Then a cross-fitted estimator $\hat{\theta}$ is obtained by the following procedure.\\ \textbf{for} $k=1$ \textbf{to} $K$\textbf{:} \begin{enumerate} \item Estimate all nuisance parameters $\eta$ using $W_{i\in\mathcal{I}^{-k}}$ and define these estimators as $\hat{\eta}_{-k}$. \item Use $W_{i\in\mathcal{I}^k}$ to obtain $\frac{1}{n}\sum_{i\in\mathcal{I}^k}^n\psi(W_i,\hat{\eta}_{-k})$ and $\frac{1}{n}\sum_{i\in\mathcal{I}^k}^n\psi^b(W_i,\hat{\eta}_{-k})$. \end{enumerate} \textbf{endfor.}\\ Finally, construct the estimator $\hat{\theta}=\frac{\frac{1}{N}\sum_{k=1}^K\sum_{i\in\mathcal{I}^k}^n\psi(W_i,\hat{\eta}_{-k})}{\frac{1}{N}\sum_{k=1}^K\sum_{i\in\mathcal{I}^k}^n\psi^b(W_i,\hat{\eta}_{-k})}$. \end{minipage}} \end{figure} Our conditions on the first stage nuisance parameter convergence rates should cover a wide range of estimators. We therefore rely on $L_2$ convergence rates. To reduce the notational burden, generically write $L_2$-rates for cross-fitted nuisance parameters as $\epsilon_{\eta}=\left\lVert\hat{\eta}_{-k}-\eta\right\rVert_2$. \subsection{Asymptotic results} In order to derive asymptotic results for the cross-fitting estimator described in the previous subsection, we have to make several assumptions. \begin{assumption}[Existence of higher-order moments]\label{ass:estinfy} (i) For $r\in\mathbb{N}$ the first $r\geq 2$ moments of $\tilde{Y}$ exist; (ii) For all $x\in\mathcal{X}$, $\tau\in\mathcal{T}$, the conditional variance $\text{Var}\left(\tilde{Y}|G_{\tau}=1,X=x\right)$ exists. \end{assumption} Notice that Assumption \ref{ass:estinfy} at least requires the existence of the second moment of the outcome. 
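In code, the cross-fitting algorithm above can be sketched as follows. This is a minimal illustration only: the names \texttt{cross\_fit}, \texttt{fit\_nuisance}, \texttt{psi} and \texttt{psi\_b} are hypothetical placeholders, and the concrete nuisance estimators have to be supplied by the user.

```python
import numpy as np

def cross_fit(W, fit_nuisance, psi, psi_b, K=5, seed=0):
    """K-fold cross-fitted estimator theta_hat = mean(psi) / mean(psi_b).

    W            : array of shape (N, ...) holding the observations W_i.
    fit_nuisance : callable, W_train -> eta_hat (the nuisance estimators).
    psi, psi_b   : callables, (W_eval, eta_hat) -> per-observation values.
    """
    N = W.shape[0]
    rng = np.random.default_rng(seed)
    folds = rng.permutation(N) % K              # random split into K (near-)equal subsamples
    num = den = 0.0
    for k in range(K):
        eval_idx = folds == k                   # I^k: evaluation subsample
        eta_hat = fit_nuisance(W[~eval_idx])    # step 1: fit eta on I^{-k}
        num += psi(W[eval_idx], eta_hat).sum()  # step 2: evaluate on I^k
        den += psi_b(W[eval_idx], eta_hat).sum()
    return num / den                            # ratio of sample averages
```

As a sanity check, with $\psi^b\equiv 1$ and $\psi(W,\eta)=\tilde{Y}$ the estimator reduces to the sample mean of the outcome.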
We will later show that for some difference-in-differences estimators the existence of higher-order moments interacts with the requirements of first stage convergence conditions. \begin{assumption}[Boundedness of propensity scores]\label{ass:estinfp} For all $\tau\in\mathcal{T}$, conditional probabilities and their estimators obey $0<\inf_{x\in\mathcal{X}}\lvert q_{G_{\tau}}(x)\rvert<\sup_{x\in\mathcal{X}}\lvert q_{G_{\tau}}(x)\rvert<1$ and $0<\inf_{x\in\mathcal{X}}\lvert \hat{q}_{G_{\tau}}(x)_{-k}\rvert<\sup_{x\in\mathcal{X}}\lvert \hat{q}_{G_{\tau}}(x)_{-k}\rvert<1$. \end{assumption} Assumption \ref{ass:estinfp} is stronger than the usual Common Support conditions made in Assumptions \ref{ass:idcs} and \ref{ass:idpa} to obtain the identification results in Section \ref{sec:ideff}. All conditional probabilities and their estimators are strictly bounded away from zero and one. The assumption precludes using, for example, the Linear Probability Model or the linear Lasso for estimating the propensity scores. \begin{assumption}[Behaviour of adjustment term]\label{ass:estinfb} The term $\psi^b(W,\eta)$ obeys, \begin{itemize} \item[(i)] $\mathbb{E}\left[\psi^b(W,\eta)|X\right]=\frac{q_1(X)}{q_1}$; \item[(ii)] $0<\inf\lvert\frac{1}{N}\sum_{k=1}^K\sum_{i\in\mathcal{I}^k}^n\psi^b(W_i,\eta)\rvert\leq C$, $0<\inf\lvert \frac{1}{N}\sum_{k=1}^K\sum_{i\in\mathcal{I}^k}^n\psi^b(W_i,\hat{\eta}_{-k})\rvert\leq C$; \item[(iii)] for all $\tau\in\mathcal{T}$, $\left\lVert\frac{G_{\tau}}{q_1}\frac{q_1(X)}{q_{G_{\tau}}(X)}-\psi^b(W,\eta)\right\rVert_{\infty}\leq C$. \end{itemize} \end{assumption} Assumption \ref{ass:estinfb} contains some further regularity conditions on the behaviour of the term $\psi^b(W,\eta)$ that are easily satisfied for the difference-in-differences estimators to be considered. Condition (i) is in principle redundant because if it did not hold, then $\mathbb{E}\left[\psi(W,\eta;\theta)\right]=0$ would not be satisfied.
We keep it, however, to remind us that the term cannot be of arbitrary form. Since the cross-fitted estimator $\hat{\theta}$ involves dividing by the sample plug-in average of $\psi^b(W,\eta)$, conditions (ii) are needed to guarantee that $\hat{\theta}$ is well behaved. The last condition is satisfied whenever the infimum of $\psi^b(W,\eta)$ is zero. \begin{assumption}[First stage convergence conditions]\label{ass:estinfjoint} The convergence conditions \begin{itemize} \item[(i)] $\left\lVert \psi^b(W,\hat{\eta}_{-k})-\psi^b(W,\eta)\right\rVert_{2}=o_p(1)$ and $\left\lVert \psi^b(W,\hat{\eta}_{-k})-\psi^b(W,\eta)\right\rVert_{\infty}=O_p(1)$; \item[(ii)] $\mathbb{E}\left[\psi^b(W,\hat{\eta}_{-k})-\psi^b(W,\eta)\right]=o_p\left(N^{-\frac{1}{2}}\right)$; \item[(iii)] $\mathbb{E}\left[\left(\psi^b(W,\hat{\eta}_{-k})-\psi^b(W,\eta)\right)\hat{m}_{\tilde{Y}}(G_{\tau}=1,X)\right]=o_p\left(N^{-\frac{1}{2}}\right)$ and\\ $\left\lVert\left(\psi^b(W,\hat{\eta}_{-k})-\psi^b(W,\eta)\right)m_{\tilde{Y}}(G_{\tau}=1,X)\right\rVert_2=o_p(1)$; \item[(iv)] $\left\lVert\hat{m}_{\tilde{Y}}(G_{\tau}=1,X)_{-k}-m_{\tilde{Y}}(G_{\tau}=1,X)\right\rVert_2=o_p(1)$ and $\left\lVert\frac{\hat{q}_1(X)_{-k}}{\hat{q}_{G_{\tau}}(X)_{-k}}-\frac{q_1(X)}{q_{G_{\tau}}(X)}\right\rVert_2=o_p(1)$; \item[(v)] $\left\lVert\frac{\hat{q}_1(X)_{-k}}{\hat{q}_{G_{\tau}}(X)_{-k}}-\frac{q_1(X)}{q_{G_{\tau}}(X)}\right\rVert_2\times\left\lVert\hat{m}_{\tilde{Y}}(G_{\tau}=1,X)_{-k}-m_{\tilde{Y}}(G_{\tau}=1,X)\right\rVert_2=o_p\left(N^{-\frac{1}{2}}\right)$ \end{itemize} are satisfied for all $\tau\in\mathcal{T}$. \end{assumption} Assumption \ref{ass:estinfjoint} comprises the required coupled convergence conditions that are at the centre of our theoretical argument. The assumption ensures that the first stage nuisance parameter estimation has no effect on the asymptotic behaviour of the cross-fitted estimator. Conditions (i)-(iii) contain the convergence requirements for the term $\psi^b(W,\eta)$.
Notice that they are trivially satisfied whenever $\psi^b=\psi^b(W)$. Hence, when the term does not depend on nuisance parameters but just on observed variables, conditions (i)-(iii) always hold. However, some of the efficient influence functions considered for the cross-sectional case in Section \ref{sec:ideff} suggest that one may also use functions where $\psi^b=\psi^b(W,\eta)$. This represents the first extension to the work of \textcite{Chernozhukov_Chetverikov_Demirer_Duflo_Hansen_Newey_2017} who just consider problems of the former type when deriving first stage convergence rate conditions. Condition (iv) requires that for all $\tau\in\mathcal{T}$ the nuisance parameters converge in $L_2$. Condition (v) requires that for all $\tau\in\mathcal{T}$ the conditional probability and the outcome nuisances jointly achieve $\sqrt{N}$-convergence. Conditions (iv) and (v) are easy-to-check conditions because they can be applied to all sorts of reweighting schemes. In particular, the specific convergence conditions for all $\tau\in\mathcal{T}$ can be directly retrieved from the conditions provided. This represents the second extension to the work of \textcite{Chernozhukov_Chetverikov_Demirer_Duflo_Hansen_Newey_2017} who rely on Gateaux differentiation and implied regularity conditions to derive the required convergence rates for specific settings. Our theory covers all functions $\psi(W,\eta;\theta)$ where $\psi(W,\eta)$ can be written as a sum over some index $\tau$. This includes the usual parameters of interest and settings in causal econometrics such as selection-on-observables with binary or multiple treatments, difference-in-differences and instrumental variables.\\ The $L_2$ rate conditions in Assumption \ref{ass:estinfjoint} (iv) and (v) can be shown to be satisfied for many supervised machine-learning algorithms under sparsity conditions.
For example \textcite{Belloni_Chernozhukov_2013} show that the predictive error of the Lasso is of order $O_p\left(\sqrt{\frac{s\log\max(\lambda_X,N)}{N}}\right)$ where $s$ is the unknown number of true coefficients in the oracle model. Assumption $\ref{ass:estinfjoint}$ (iv) and (v) then require that $\frac{s^2\log^2\max(\lambda_X,N)}{N}\rightarrow 0$ (if $s$ is the same in the propensity score and the outcome nuisance models): the product of two such rates is $O_p\left(\frac{s\log\max(\lambda_X,N)}{N}\right)$, which is $o_p\left(N^{-\frac{1}{2}}\right)$ precisely under this growth condition. It follows that $\lambda_X$ may tend to infinity with $N$ under the sparsity condition. Thus, the dimension of the covariates can be high in the sense that it can (slowly) grow with the sample size whenever the true model is sparse. Similar $L_2$ rate conditions can also be shown for non-linear models like Random Forests (\cite{Wager_Walther_2015}), Honest Random Forests (\cite{Wager_Athey_2018}) or forms of Deep Neural Nets (\cite{Farrell_Liang_Misra_2018}). \begin{theorem}[Estimation and inference]\label{thm:estinf} Suppose that $\psi(W,\eta)$ is a sum over the index $\tau$ of terms of the form \begin{align*} \frac{q_1(X)}{q_1}\frac{G_{\tau}}{q_{G_\tau}(X)}\left(\tilde{Y}-m_{\tilde{Y}}(G_{\tau}=1,X)\right)+\psi^b(W,\eta)m_{\tilde{Y}}\left(G_{\tau}=1,X\right), \end{align*} $\mathbb{E}\left[\psi(W,\eta;\theta)\right]=0$ and Assumptions \ref{ass:estinfy}-\ref{ass:estinfjoint} hold. Then the cross-fitted estimator $\hat{\theta}$ obeys \begin{align*} \sqrt{N}\left(\hat{\theta}-\theta\right)=\frac{1}{\sqrt{N}}\sum_{i=1}^N\psi(W_i,\eta;\theta)+o_p(1)\overset{d}{\longrightarrow}\mathcal{N}(0,\sigma^2) \end{align*} where $\sigma^2=\mathbb{E}\left[\psi(W,\eta;\theta)^2\right]$. \end{theorem} \textit{Proof:} see Appendix \ref{app:estinf}.\\ Theorem \ref{thm:estinf} is our third main result. The cross-fitted estimator is asymptotically normal with influence function $\psi(W,\eta;\theta)$.
Notice that the result implies that whenever we use the sample plug-in estimator of the efficient influence function the cross-fitted estimator asymptotically attains the low-dimensional variance lower bound if Assumptions \ref{ass:estinfy}-\ref{ass:estinfjoint} are satisfied. The following subsections contain several corollaries of Theorem \ref{thm:estinf} that summarize the convergence conditions and asymptotic behaviour of different cross-fitted semiparametric difference-in-differences estimators. \subsection{Plug-in estimators} Corollary \ref{cor:estcs} summarizes the implications of Theorem \ref{thm:estinf} for cross-sectional difference-in-differences estimators in settings (CS-1)-(CS-5) that use the efficient influence functions derived in Theorem \ref{thm:effcs}. \begin{corollary}\label{cor:estcs} Suppose that Assumptions \ref{ass:dgpcs}, \ref{ass:idcs}, \ref{ass:estinfy} and \ref{ass:estinfp} hold then \begin{itemize} \item[(a)] under setting (CS-1), $r=2$ and assuming that for all $d,t\in\{0,1\}$ $\epsilon_{p_{D=d,T=t}(X)}=o_p(1)$ and for $(d,t)\in\{(0,1),(1,0),(0,0)\}$ $\epsilon_{m_Y(d,t,X)}=o_p(1)$ and \begin{align*} \left(\epsilon_{p_{D=1,T=1}(X)}+\epsilon_{p_{D=d,T=t}(X)}\right)\times\epsilon_{m_Y(d,t,X)}=o_p\left(N^{-\frac{1}{2}}\right) \end{align*} \item[(b)] under the condition (CS-2) in Assumption \ref{ass:reldtxcs}, $r>2$ and assuming that $\epsilon_{p_D(X)}=o_p(1)$, $\epsilon_{p_T(X)}=o_p(1)$ and $\epsilon_{m_Y(d,t,X)}=o_p(1)$, $\left\lVert\hat{m}_Y(d,t,X)_{-k}-m_Y(d,t,X)\right\rVert_r=O_p(1)$ for all $d,t\in\{0,1\}$ and \begin{align*} &\epsilon_{p_D(X)}\times\epsilon_{p_T(X)}=o_p\left(N^{-\frac{1}{2}\frac{r}{r-1}}\right),\quad \epsilon_{p_D(X)}\times\epsilon_{m_Y(0,1,X)}=o_p\left(N^{-\frac{1}{2}}\right),\\ &\epsilon_{p_T(X)}\times\epsilon_{m_Y(1,0,X)}=o_p\left(N^{-\frac{1}{2}}\right),\quad \left(\epsilon_{p_D(X)}+\epsilon_{p_T(X)}\right)\times\epsilon_{m_Y(0,0,X)}=o_p\left(N^{-\frac{1}{2}}\right) \end{align*} \item[(c)] under the condition 
(CS-3) in Assumption \ref{ass:reldtxcs}, $r>2$ and assuming that $\epsilon_{p_D(1,X)}=o_p(1)$, $\epsilon_{p_D(0,X)}=o_p(1)$ and $\epsilon_{m_Y(d,t,X)}=o_p(1)$ for all $d,t\in\{0,1\}$ and \begin{align*} &\epsilon_{p_D(1,X)}\times\epsilon_{m_Y(0,1,X)}=o_p\left(N^{-\frac{1}{2}}\right),\quad \left(\epsilon_{p_D(1,X)}+\epsilon_{p_D(0,X)}\right)\times\left(\epsilon_{m_Y(1,0,X)}+\epsilon_{m_Y(0,0,X)}\right)=o_p\left(N^{-\frac{1}{2}}\right) \end{align*} \item[(d)] under the condition (CS-4) in Assumption \ref{ass:reldtxcs}, $r=2$ and assuming that $\epsilon_{p_D(X)}=o_p(1)$ and $\epsilon_{m_Y(d,t,X)}=o_p(1)$ for all $d,t\in\{0,1\}$ and \begin{align*} \epsilon_{p_D(X)}\times\left(\epsilon_{m_Y(0,1,X)}+\epsilon_{m_Y(0,0,X)}\right)=o_p\left(N^{-\frac{1}{2}}\right) \end{align*} \item[(e)] under the condition (CS-5) in Assumption \ref{ass:reldtxcs}, $r=2$ and assuming that $\epsilon_{m_Y(d,t,X)}=o_p(1)$ for all $d,t\in\{0,1\}$ \end{itemize} the sample plug-in estimators implied by $\mathbb{E}\left[\psi^*_{CS}(W;\theta)\right]=0$ are efficient estimators in the sense that they obey \begin{align*} \sqrt{N}\left(\hat{\theta}-\theta\right)=\frac{1}{\sqrt{N}}\sum_{i=1}^N\psi^*_{CS}(W_i;\theta)+o_p(1)\overset{d}{\longrightarrow}\mathcal{N}\left(0,\mathbb{E}\left[\psi^*_{CS}(W;\theta)^2\right]\right). \end{align*} \end{corollary} \textit{Proof:} see Appendix \ref{app:estcs}.\\ The joint convergence conditions are more sophisticated for settings that are less restrictive with respect to the model assumptions on the relation between $D$, $T$ and $X$ imposed. For (CS-1) we need six, for (CS-2) and (CS-3) we need five, for (CS-4) we need two and for (CS-5) we need zero joint convergence rates to be satisfied. This hints at a trade-off between the robustness of the estimators with respect to the assumption on $D$, $T$ and $X$ and the robustness of the estimators with respect to first stage nuisance parameter convergence requirements. 
Further, we notice that for settings (CS-2) and (CS-3) higher-order moments $r>2$ have to exist. For setting (CS-3) this is a mere regularity condition. For setting (CS-2) it has some implications on the joint convergence condition of the two propensity scores. The more moments of the outcome exist, the closer the joint convergence condition is to the usual $\sqrt{N}$ condition. This implies that for a bounded outcome the condition becomes $\epsilon_{p_D(X)}\times\epsilon_{p_T(X)}=o_p\left(N^{-\frac{1}{2}}\right)$.\footnote{Most outcomes in labour market applications are bounded. The condition might, however, become relevant for distributions with `fat tails' typically present in financial econometrics.} If the researcher is only willing to assume that a second moment of $Y$ exists, this implies that parametric convergence rates are needed for both propensity scores. We additionally notice that for the experimental setting (CS-5) the efficient estimator is not the simple difference in means estimator but a residualized version of it.\\ Corollary \ref{cor:estpa} summarizes the implications of Theorem \ref{thm:estinf} for panel difference-in-differences estimators in settings (PA-1) and (PA-2) that use the efficient influence functions derived in Theorem \ref{thm:effpa}.
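To illustrate the remark on setting (CS-5), the residualized difference-in-means estimator can be sketched as follows. This is a minimal illustration without cross-fitting; the OLS outcome nuisances and the function names are our own hypothetical choices, and any $L_2$-consistent learner combined with the cross-fitting algorithm above could be substituted.

```python
import numpy as np

def _ols_predict(X_train, y_train, X_eval):
    """Fit OLS with intercept on (X_train, y_train) and predict at X_eval."""
    A = np.column_stack([np.ones(len(X_train)), X_train])
    beta, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    return np.column_stack([np.ones(len(X_eval)), X_eval]) @ beta

def residualized_did_cs5(Y, D, T, X):
    """Plug-in estimator for setting (CS-5): under D, T, X mutually independent
    the score reduces to the sum of reweighted group residuals plus the
    regression adjustment m_Y(X); group shares estimate the propensities."""
    pD, pT = D.mean(), T.mean()
    terms = np.zeros(len(Y))
    for d in (0, 1):
        for t in (0, 1):
            G = ((D == d) & (T == t)).astype(float)
            m_hat = _ols_predict(X[G == 1], Y[G == 1], X)  # m_Y(d, t, X)
            sign = (-1) ** (d + t)
            pG = (pD if d else 1 - pD) * (pT if t else 1 - pT)
            # reweighted residual part + regression-adjustment part
            terms += sign * (G / pG * (Y - m_hat) + m_hat)
    return terms.mean()
```

In a simulated fully randomized design with a covariate-dependent outcome, this estimator recovers the interaction effect while using strictly less variance than the raw difference in means.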
\begin{corollary}\label{cor:estpa} Suppose that Assumptions \ref{ass:dgppa}, \ref{ass:idpa}, \ref{ass:estinfy} and \ref{ass:estinfp} hold then \begin{itemize} \item[(a)] under the condition (PA-1), $r=2$ and assuming that $\epsilon_{p_{D}(X)}=o_p(1)$, $\epsilon_{m_{\Delta Y}(0,X)}=o_p(1)$ and \begin{align*} \epsilon_{p_{D}(X)}\times\epsilon_{m_{\Delta Y}(0,X)}=o_p\left(N^{-\frac{1}{2}}\right) \end{align*} \item[(b)] under the condition (PA-2) in Assumption \ref{ass:reldxpa}, $r=2$ and assuming that $\epsilon_{m_{\Delta Y}(d,X)}=o_p(1)$ for $d\in\{0,1\}$ \end{itemize} the sample plug-in estimators implied by $\mathbb{E}\left[\psi^*_{PA}(W;\theta)\right]=0$ are efficient estimators in the sense that they obey \begin{align*} \sqrt{N}\left(\hat{\theta}-\theta\right)=\frac{1}{\sqrt{N}}\sum_{i=1}^N\psi^*_{PA}(W_i;\theta)+o_p(1)\overset{d}{\longrightarrow}\mathcal{N}\left(0,\mathbb{E}\left[\psi^*_{PA}(W;\theta)^2\right]\right). \end{align*} \end{corollary} \textit{Proof:} see Appendix \ref{app:estpa}.\\ Compared to the cross-sectional case, the joint convergence conditions are generally weaker for (PA-1) for two reasons. Firstly, since $\Delta Y$ is observed, only one projection on the difference of the outcomes is required instead of the difference between two projections on each single outcome. Secondly, the efficient influence function in Theorem \ref{thm:effpa} implies that $m_{\Delta Y}(1,X)$ is redundant. This allows us to obtain efficient difference-in-differences estimators for the panel under relatively weak conditions. \subsection{Redundancy of nuisance parameters} For the cross-sectional difference-in-differences estimators considered so far some nuisance parameters are redundant. For example in setting (CS-1) for all $x\in\mathcal{X}$ we have $\sum_{d=0}^1\sum_{t=0}^1p_{D=d,T=t}(x)=1$. In principle, we could therefore infer one of the four propensity scores from the other three.
A well-known problem is that if one estimates the three propensity scores without requiring that the implied four propensity scores sum to one, Assumption \ref{ass:estinfp} might be violated because the fourth, implied propensity score is not guaranteed to be strictly greater than zero. A solution to this could be to explicitly require that the four propensity scores estimators sum to one for all $x\in\mathcal{X}$. Multinomial versions of the standard Logit regression or the Logit Lasso are available. To the knowledge of the author, solutions do not exist for more sophisticated machine-learning algorithms like Random Forests or Neural Nets. However, notice that $p_{D=d,T=t}(X)=p_{D=d}(t,X)p_{T=t}(X)=p_{T=t}(d,X)p_{D=d}(X)$ and that the propensity score estimators obey $\sum_{d=0}^1\sum_{t=0}^1\hat{p}_{D=d}(t,x)_{-k}\hat{p}_{T=t}(x)_{-k}=\sum_{d=0}^1\sum_{t=0}^1\hat{p}_{T=t}(d,x)_{-k}\hat{p}_{D=d}(x)_{-k}=1$ for all $x\in\mathcal{X}$ by construction. Corollary \ref{cor:redcs1} summarizes the properties of the two implied estimators. 
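The factorization $p_{D=d,T=t}(X)=p_{D=d}(t,X)p_{T=t}(X)$ just described can be sketched with two binary logistic regressions as follows. This is an illustrative sketch only: the function names and the Newton-method logistic fit are our own choices, and any conditional-probability learner satisfying Assumption \ref{ass:estinfp} could be substituted.

```python
import numpy as np

def _logit_fit_predict(X_train, y_train, X_eval, iters=25):
    """Binary logistic regression via Newton's method; returns P(y=1|x) at X_eval."""
    A = np.column_stack([np.ones(len(X_train)), X_train])
    beta = np.zeros(A.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-A @ beta))
        H = A.T * (p * (1 - p)) @ A + 1e-8 * np.eye(A.shape[1])  # stabilized Hessian
        beta += np.linalg.solve(H, A.T @ (y_train - p))           # Newton step
    E = np.column_stack([np.ones(len(X_eval)), X_eval])
    return 1.0 / (1.0 + np.exp(-E @ beta))

def factorized_propensities(D, T, X):
    """Estimate the four joint scores via p_{D=d,T=t}(x) = p_{D=d}(t,x) * p_{T=t}(x),
    so that the four estimates sum to one for every x by construction."""
    pT1 = _logit_fit_predict(X, T, X)                                       # p_{T=1}(x)
    pD1 = {t: _logit_fit_predict(X[T == t], D[T == t], X) for t in (0, 1)}  # p_{D=1}(t,x)
    scores = {}
    for d in (0, 1):
        for t in (0, 1):
            pD = pD1[t] if d == 1 else 1 - pD1[t]
            scores[(d, t)] = pD * (pT1 if t == 1 else 1 - pT1)
    return scores
```

By construction the four estimated scores sum to one pointwise and stay strictly inside $(0,1)$, without requiring a multinomial model.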
\begin{corollary}\label{cor:redcs1} Suppose that Assumptions \ref{ass:dgpcs}, \ref{ass:idcs}, \ref{ass:estinfy} and \ref{ass:estinfp} hold then under the condition (CS-1) and $r=2$, the estimators implied by the moment condition of the score functions \begin{itemize} \item[(a)] \begin{align*} \psi^{**}_{CS-1}(W,\eta;\theta)&=\frac{p_D(1,X)p_T(X)}{p_{DT}}\sum^1_{d=0}\sum^1_{t=0}(-1)^{(d+t)}\frac{G_{d,t}}{p_{D=d}(t,X)p_{T=t}(X)}\left(Y-m_Y(d,t,X)\right)\\ &+\frac{DT}{p_{DT}}\left(m_Y(X)-\theta\right) \end{align*} under the further conditions that $\epsilon_{p_D(1,X)}=o_p(1)$, $\epsilon_{p_D(0,X)}=o_p(1)$, $\epsilon_{p_T(X)}=o_p(1)$ and for all $(d,t)\in\{(0,1),(1,0),(0,0)\}$ $\epsilon_{m_Y(d,t,X)}=o_p(1)$ and \begin{align*} &\epsilon_{p_D(1,X)}\times\epsilon_{m_Y(0,1,X)}=o_p\left(N^{-\frac{1}{2}}\right)\quad\text{and}\\ &\left(\epsilon_{p_D(1,X)}+\epsilon_{p_D(0,X)}+\epsilon_{p_T(X)}\right)\times\left(\epsilon_{m_Y(1,0,X)}+\epsilon_{m_Y(0,0,X)}\right)=o_p\left(N^{-\frac{1}{2}}\right) \end{align*} \item[(b)] \begin{align*} \psi^{***}_{CS-1}(W,\eta;\theta)&=\frac{p_T(1,X)p_D(X)}{p_{DT}}\sum^1_{d=0}\sum^1_{t=0}(-1)^{(d+t)}\frac{G_{d,t}}{p_{T=t}(d,X)p_{D=d}(X)}\left(Y-m_Y(d,t,X)\right)\\ &+\frac{DT}{p_{DT}}\left(m_Y(X)-\theta\right) \end{align*} under the further conditions that $\epsilon_{p_T(1,X)}=o_p(1)$, $\epsilon_{p_T(0,X)}=o_p(1)$, $\epsilon_{p_D(X)}=o_p(1)$ and for all $(d,t)\in\{(0,1),(1,0),(0,0)\}$ $\epsilon_{m_Y(d,t,X)}=o_p(1)$ and \begin{align*} &\epsilon_{p_T(1,X)}\times\epsilon_{m_Y(1,0,X)}=o_p\left(N^{-\frac{1}{2}}\right)\quad\text{and}\\ &\left(\epsilon_{p_T(1,X)}+\epsilon_{p_T(0,X)}+\epsilon_{p_D(X)}\right)\times\left(\epsilon_{m_Y(0,1,X)}+\epsilon_{m_Y(0,0,X)}\right)=o_p\left(N^{-\frac{1}{2}}\right) \end{align*} \end{itemize} obey \begin{align*} \sqrt{N}\left(\hat{\theta}-\theta\right)=\frac{1}{\sqrt{N}}\sum_{i=1}^N\psi^*_{CS-1}(W_i;\theta)+o_p(1)\overset{d}{\longrightarrow}\mathcal{N}\left(0,\mathbb{E}\left[\psi^*_{CS-1}(W;\theta)^2\right]\right). 
\end{align*} \end{corollary} \textit{Proof:} The proof follows similarly to Corollary \ref{cor:estcs} (a).\\ Compared to Corollary \ref{cor:estcs} (a), the joint convergence conditions are increased from six to seven and the efficiency result is maintained. The scores rely on either $p_{D=d}(t,X)$ or $p_{T=t}(d,X)$. We therefore recommend using $\psi^{**}_{CS-1}(W,\eta;\theta)$ when $\lvert p_T-0.5\rvert<\lvert p_D-0.5\rvert$ and $\psi^{***}_{CS-1}(W,\eta;\theta)$ otherwise.\\ The results of Corollary \ref{cor:estcs} (b)-(d) indicate that some of the outcome nuisances in settings (CS-2), (CS-3) and (CS-4) are redundant in the sense that they do not contribute to the joint convergence condition in Assumption \ref{ass:estinfjoint} (v). However, the outcome nuisances still need to converge in $L_2$ in order to satisfy Assumption \ref{ass:estinfjoint} (iv). The same applies for (CS-5) since the simple difference-in-means estimator represents an alternative estimator that does not rely on any first stage convergence conditions. We do not provide a separate result for case (CS-5). However, it is easy to see that the implied estimator without any outcome nuisances is just the difference-in-means estimator. Notice that as long as $Y$ is not independent of $X$ the difference-in-means estimator is \textit{not} an efficient estimator in a semiparametric sense. This has some practical relevance as in difference-in-differences the credibility of the design is often assessed by using placebo tests in some pre-periods without relying on covariates. Not rejecting the null might, however, just be due to a higher than necessary standard error when using the difference-in-means estimator. The same applies for setting (PA-2). Since extensions for (CS-3) are also trivial, we focus on settings (CS-2) and (CS-4). Corollaries \ref{cor:redcs2} and \ref{cor:redcs4} summarize the implications when scores are used that do not contain some of the outcome nuisances.
Generally, the implied cross-fitted sample plug-in estimators do not attain the variance lower bound but have otherwise desirable asymptotic properties under weaker conditions. This hints at another trade-off between the robustness of the estimator towards first stage convergence requirements and semiparametric efficiency. \begin{corollary}\label{cor:redcs2} Suppose that Assumptions \ref{ass:dgpcs}, \ref{ass:idcs}, \ref{ass:estinfy} and \ref{ass:estinfp} hold then under the condition (CS-2) in Assumption \ref{ass:reldtxcs}, $r=2$ and assuming that $\epsilon_{p_D(X)}=o_p(1)$, $\epsilon_{p_T(X)}=o_p(1)$ and $\epsilon_{m_Y(d,t,X)}=o_p(1)$ for all $(d,t)\in\{(0,1),(1,0),(0,0)\}$ and \begin{align*} &\epsilon_{p_D(X)}\times\epsilon_{m_Y(0,1,X)}=o_p\left(N^{-\frac{1}{2}}\right),\quad\epsilon_{p_T(X)}\times\epsilon_{m_Y(1,0,X)}=o_p\left(N^{-\frac{1}{2}}\right),\\ &\left(\epsilon_{p_D(X)}+\epsilon_{p_T(X)}\right)\times\epsilon_{m_Y(0,0,X)}=o_p\left(N^{-\frac{1}{2}}\right) \end{align*} the sample plug-in estimator using the score $\psi'_{CS-2}(W;\theta)=\frac{p_D(X)p_T(X)}{p_{DT}}\psi^{*a}_{CS-2}+\frac{DT}{p_{DT}}\left(m_Y(X)-\theta\right)$ obeys \begin{align*} \sqrt{N}\left(\hat{\theta}-\theta\right)=\frac{1}{\sqrt{N}}\sum_{i=1}^N\psi'_{CS-2}(W_i;\theta)+o_p(1)\overset{d}{\longrightarrow}\mathcal{N}\left(0,\mathbb{E}\left[\psi'_{CS-2}(W;\theta)^2\right]\right) \end{align*} with efficiency loss $\mathbb{E}\left[\psi'_{CS-2}(W;\theta)^2\right]-\mathbb{E}\left[\psi^{*}_{CS-2}(W;\theta)^2\right]=\Delta_{CS-1,CS-2}.$ \end{corollary} \textit{Proof:} see Appendix \ref{app:redcs2}.\\ Corollary \ref{cor:redcs2} shows that for (CS-2) one may in principle get rid of the restrictive higher-order moment existence requirement and the $L_2$ convergence condition for $m_Y(1,1,x)$ in Corollary \ref{cor:estcs} (b). However, this results in an efficiency loss.
Notice that from Corollary \ref{cor:releffcs1} $\Delta_{CS-1,CS-2}$ represents the value of knowing that (CS-2) in Assumption \ref{ass:reldtxcs} is true relative to not making any assumptions. Hence, when using score $\psi'_{CS-2}(W;\theta)$ all efficiency gains from the stronger setting (CS-2) relative to (CS-1) are exchanged for relatively weak first stage convergence conditions. \begin{corollary}\label{cor:redcs4} Suppose that Assumptions \ref{ass:dgpcs}, \ref{ass:idcs}, \ref{ass:estinfy} and \ref{ass:estinfp} hold. Then, under condition (CS-4) in Assumption \ref{ass:reldtxcs}, with $r=2$ and assuming that $\epsilon_{p_D(X)}=o_p(1)$ and $\epsilon_{m_Y(d,t,X)}=o_p(1)$ for all $(d,t)\in\{(0,1),(0,0)\}$ and \begin{align*} \epsilon_{p_D(X)}\times\left(\epsilon_{m_Y(0,1,X)}+\epsilon_{m_Y(0,0,X)}\right)=o_p\left(N^{-\frac{1}{2}}\right) \end{align*} the sample plug-in estimator using the score \begin{align*} \psi'_{CS-4}(W;\theta)=\psi^*_{CS-4}(W;\theta)+\frac{D}{p_D}\left(\frac{T}{p_T}-1\right)m_Y(1,1,X)-\frac{D}{p_D}\left(\frac{1-T}{1-p_T}-1\right)m_Y(1,0,X) \end{align*} obeys \begin{align*} \sqrt{N}\left(\hat{\theta}-\theta\right)=\frac{1}{\sqrt{N}}\sum_{i=1}^N\psi'_{CS-4}(W_i;\theta)+o_p(1)\overset{d}{\longrightarrow}\mathcal{N}\left(0,\mathbb{E}\left[\psi'_{CS-4}(W;\theta)^2\right]\right) \end{align*} with efficiency loss $\mathbb{E}\left[\psi'_{CS-4}(W;\theta)^2\right]-\mathbb{E}\left[\psi^{*}_{CS-4}(W;\theta)^2\right]=\mathbb{E}\left[\frac{p_D(X)^2}{p_D^2}\frac{\left(\sqrt{\frac{1-p_T}{p_T}}m_Y(1,1,X)+\sqrt{\frac{p_T}{1-p_T}}m_Y(1,0,X)\right)^2}{p_D(X)}\right].$ \end{corollary} \textit{Proof:} see Appendix \ref{app:redcs4}.\\ Corollary \ref{cor:redcs4} provides the convergence conditions when we use a score where $m_Y(1,1,x)$ and $m_Y(1,0,x)$ are redundant. This results in convergence conditions similar to those of the score $\psi^{*}_{PA-1}(W;\theta)$. However, the efficiency loss can also be substantial.
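Both corollaries concern sample plug-in estimators for scores that are linear in $\theta$, i.e. of the form $\psi(W;\theta)=\psi^a(W)-\psi^b(W)\theta$, so $\hat{\theta}$ is a ratio of sample means of the cross-fitted score components. A generic sketch, assuming user-supplied callables \texttt{psi\_a} and \texttt{psi\_b} that fit the nuisances on the training fold and evaluate the two components on the held-out fold (the interface and all names are our own illustration):

```python
import numpy as np

def crossfit_plugin(W, psi_a, psi_b, K=2, seed=0):
    """Cross-fitted plug-in estimator for a score linear in theta:
    psi(W; theta) = psi_a(W) - psi_b(W) * theta.
    psi_a / psi_b take (train_idx, test_idx, W) and return arrays of
    score components on the test fold, nuisances fitted on the train fold.
    """
    n = len(W)
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n), K)
    a = np.empty(n)
    b = np.empty(n)
    for k in range(K):
        test = folds[k]
        train = np.setdiff1d(np.arange(n), test)
        a[test] = psi_a(train, test, W)
        b[test] = psi_b(train, test, W)
    theta_hat = a.mean() / b.mean()       # solves (1/n) sum psi(W_i; theta) = 0
    psi = a - b * theta_hat               # estimated influence function
    se = np.sqrt(np.mean(psi**2) / b.mean()**2 / n)
    return theta_hat, se
```

With $\psi^b(W)=DT/p_{DT}$, as in $\psi'_{CS-2}$, the denominator is simply the rescaled share of treated post-period observations.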
\section{Application}\label{sec:app} To illustrate the practical relevance of the proposed method, we revisit \textcite{Angrist_Acemoglu_2001}. The paper is concerned with the theoretically ambiguous effect of increased employment protection for disabled workers on weeks worked (see the original paper for details). An empirical evaluation of the Americans with Disabilities Act reform introduced in 1991 is used to test the theory using data from the Current Population Survey (CPS), a repeated cross-section.\\ As in the paper, we define $D$ as being disabled. $Y$ is the number of weeks worked in the respective year. The outcome is therefore bounded between 0 and 52. In the original paper several post-reform years are considered. Since the credibility of the common trend assumption might be questionable for years well after the reform, we focus on 1992 as the post-reform period ($T=1$). As in the original paper we use the years 1988-1990 for $T=0$. All CPS data is retrieved from Joshua Angrist's data archive\footnote{\href{https://economics.mit.edu/faculty/angrist/data1/data/aceang01}{https://economics.mit.edu/faculty/angrist/data1/data/aceang01}}. We consider three different sets of covariates. An overview of the different variable sets is provided in Table \ref{tab:spec}. \begin{table}[h!]
\centering \caption{Covariate specifications} \label{tab:spec} \begin{threeparttable} \begin{tabular}{lp{10cm}l} \toprule specification & covariates used & \# of covariates\\ \midrule original & sex, age, race group, education group, region & 14 \\ baseline & sex, age, race group, education group, marital status, class of worker, major industry, major occupation, state, central city MSA status & 108\\ extended & sex, age, race group, education group, marital status, class of worker, major industry, major occupation, state, central city MSA status, longest job class of worker, longest job major occupation, longest job major industry, number of employers, unemployment compensation benefit value, supplemental security income amount received, public assistance or welfare value received, social security payments received, veteran status, veterans payment income, survivor's income received, value of other income, value of workers' compensation for job related illness or injury, retirement income, health insurance group, medicare coverage, medicaid coverage, coverage by military health care & 161\\ \bottomrule \bottomrule \end{tabular} \end{threeparttable} \end{table} The variable set labelled `original' is constructed using the covariates that are also included in the specifications of \textcite{Angrist_Acemoglu_2001}. The variable sets `baseline' and `extended' use some further covariates available from the CPS. The `baseline' specification disaggregates the region variable used in the original dataset to control for geographically different common trends. Instead of regions, we include state dummies and dummies indicating whether the individual lives in a metropolitan area. Additional controls on marital status, class of worker, industry and occupation should help to control for non-parallel trends between disabled and non-disabled individuals.
For example, if people who report a disability self-select more often into a certain sector or industry, then any underlying structural change in this sector needs to be controlled for in order for the common trend assumption to hold. This might be a point that is of more general interest in empirical economics. In labour market applications the common trend is often valid only after conditioning on variables that are available at different aggregation levels, e.g. geographic or sector dummies. The `extended' specification includes further covariates on employment history, social welfare payments and health insurance status. \begin{table}[b!] \centering \caption{Subsample sizes} \label{tab:sub} \begin{threeparttable} \begin{tabular}{llllllllll} \toprule $N(0)$ & $N(1)$ & $p_D$ & $p_T$ & $p_D(1)$ & $p_D(0)$ & $p_{D=1,T=1}$ & $p_{D=0,T=1}$ & $p_{D=1,T=0}$ & $p_{D=0,T=0}$\\ \midrule 206058 & 70069 & 6.19 \% & 25.38 \% & 6.49 \% & 6.08 \% & 1.65 \% & 23.73 \% & 4.54 \% & 70.08 \% \\ \bottomrule \bottomrule \end{tabular} \end{threeparttable} \end{table} Table \ref{tab:sub} shows that the merged sample consists of 276127 observations (for all specifications) and is highly imbalanced. We observe that the share of $D$ varies between $T=0$ and $T=1$. Given the large sample size, this indicates that assumptions (CS-4) and (CS-5) are likely to be violated. \subsection{Estimators considered} We consider different estimators using the different score functions outlined in Section \ref{sec:estinf} with different first stage estimators.\\ Our preferred estimator is an Ensemble Learner that weights Lasso and Random Forest predictions using out-of-sample MSE-optimal weights. For the Lasso we allow for polynomials up to order four and all two-way interactions. The Random Forest is an ensemble of regression trees and therefore implicitly contains higher-order terms. For both estimators we use the default settings in the \textit{glmnet} and \textit{ranger} R packages.
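The MSE-optimal weighting of the two predictors admits a closed-form least-squares solution on held-out data. The following is a minimal sketch of the idea, not the exact implementation used here:

```python
import numpy as np

def ensemble_weight(y_holdout, pred_lasso, pred_forest):
    """Weight w in [0, 1] for the combination w * lasso + (1 - w) * forest
    that minimises MSE on a held-out fold.  Closed-form least-squares
    solution, clipped to the unit interval; illustrative only.
    """
    d = pred_lasso - pred_forest          # direction between the predictors
    r = y_holdout - pred_forest           # residual relative to the forest
    denom = np.dot(d, d)
    w = np.dot(r, d) / denom if denom > 0 else 0.5
    return float(np.clip(w, 0.0, 1.0))

def ensemble_predict(w, pred_lasso, pred_forest):
    return w * pred_lasso + (1.0 - w) * pred_forest
```

Clipping to $[0,1]$ keeps the combination a convex one, which also limits the scope for extreme propensity score predictions.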
Using an ensemble of machine-learning estimators has several important advantages. Firstly, Lasso and Random Forests are designed for different DGPs. Whereas the Lasso allows for some form of smoothing, we expect a tree-based estimator to work well with strong non-linearities. Secondly, the Ensemble gives more weight to the single predictor that works best and therefore should be less dependent on the particular tuning parameter choices of the Lasso and the Random Forest. Thirdly, the strong imbalances in our sample indicate that using single propensity score estimators might result in extreme weights. By combining two predictors, the likelihood of generating extreme weights is reduced.\\ We compare the Ensemble Learner to the single predictors Lasso and Random Forest. Due to computational constraints, we restrict this analysis to the `baseline' set of covariates. Further, we compare the Ensemble to parametric models for the propensity scores and the outcome nuisances. For the propensity scores we use Logit and for the outcome nuisances standard linear regression. We include covariates in levels. As a benchmark, we also consider \posscite{Abadie_2005} Inverse Probability Weighting (IPW) difference-in-differences estimator with Logit regression for the propensity score. \subsection{Placebo tests} As usual in the difference-in-differences literature, we run some placebo experiments for the period 1988/89. \begin{table}[h!]
\centering \caption{Placebo tests 1988/89 under (CS-5)} \label{tab:plcebocs5} \begin{threeparttable} \begin{tabular}{lllll} & $\eta$ estimator & original & baseline & extended\\ \cmidrule(lr){2-2} \cmidrule(lr){3-5} $\psi^*_{CS-5}(W,\eta;\theta)$ & Ensemble & -1.410 & -0.956 & -0.600 \\ & & (0.464) & (0.344) & (0.218)\\ $\psi^{*}_{CS-5}(W,\eta;\theta)$ & Linear & -1.141 & -1.077 & -1.257\\ & & (0.551) & (0.506) & (0.690) \\ Mean differences & & -0.801 & -0.801 & -0.801 \\ & & (0.652) & (0.652) & (0.652) \\ \bottomrule \bottomrule \end{tabular} \begin{tablenotes} \item Results for estimators with the Ensemble were obtained using the cross-fitting procedure in Section \ref{sec:estinf} with $K=2$. The Ensemble Learner comprises Lasso and Random Forest. For Lasso the penalty term was chosen such that the cross-validation criterion was minimized. The Ensemble weights were chosen by minimizing out-of-sample MSE. Results for estimators with Linear regression were obtained by using the sample plug-in estimator without cross-fitting. The sample for the placebo tests contains $N=135174$ observations. Standard errors are in parentheses. \end{tablenotes} \end{threeparttable} \end{table} Table \ref{tab:plcebocs5} shows the results under (CS-5). Notice that the simple difference-in-means estimator, which does not account for any imbalances, does not hint at a violation. This is not because the effect is economically small but due to the comparatively high standard error. If the efficient score $\psi^*_{CS-5}(W,\eta;\theta)$ is used, the point estimators are in the same range but -- in line with the theory in Section \ref{sec:ideff} -- the standard errors are substantially decreased, leading to significant results.
This shows that using the efficient score function rather than the simple mean differences estimator may lead to opposite conclusions regarding the credibility of the design.\footnote{We notice that the asymptotic standard error might, however, be a more accurate approximation of the finite sample standard error in case of the difference-in-means estimator.} The result also highlights the importance of distinguishing the two roles covariates can have in semiparametric difference-in-differences estimation. First, they may be included to improve the credibility of the conditional common trend assumption (see Assumptions \ref{ass:idcs} and \ref{ass:idpa}). Second, under some assumptions, they should be included to improve the efficiency of the derived estimator. Further, we notice that the standard error for the Ensemble Learner decreases when more covariates are added to the model, whereas the standard error when using the linear model increases for the `extended' specification. The standard errors for the linear model are also generally larger. This indicates that the Ensemble Learner is more effective in predicting the conditional outcomes, leading to a stronger variance reduction of the residualized estimator.\\ From these results we conclude that incorporating covariates seems to be necessary in our example. Notice that due to the strong imbalances in our application, the propensity scores for $D=1$, $T=1$ are especially hard to predict. Results of placebo tests for our estimators are reported in Table \ref{tab:placebocs}. For estimators that rely on the Ensemble Learner, none of the different specifications used hints at a violation of the conditional common trend assumption. Moreover, the standard errors are reduced when a higher number of covariates is included, indicating that the Ensemble is effective in extracting the additional information.
When covariates in `baseline' and `extended' are used, estimators under (CS-2) and (CS-4) show substantially decreased standard errors compared to estimators that rely on (CS-1). This is in line with the theoretical argument in Corollary \ref{cor:releffcs1}. When using the same scores with the same set of variables, most estimators that rely on parametric models hint at a potential violation of the conditional common trend assumption. With the exception of IPW, estimators implied by scores that only rely on the propensity scores $p_D(X)$ and $p_T(X)$ ((CS-2), (CS-4)) generally do not indicate a violation of the conditional common trend assumption. The same can be observed for the single nuisance predictors. We conclude that scores that rely on sophisticated convergence conditions are less robust to potential violations of these conditions in practice. However, the efficiency-estimation trade-off seems to be less of a concern when high-quality predictors like the Ensemble are used. Moreover, even when using a parametric model and a large number of covariates the doubly robust scores still produce results, whereas IPW explodes for the `extended' specification.
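The variance reduction from covariate adjustment seen in the placebo tables can be reproduced qualitatively in a stylised simulation (entirely synthetic; for simplicity the residualization uses the true conditional mean rather than a fitted one):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
X = rng.normal(size=n)
T = rng.binomial(1, 0.5, size=n)   # assignment independent of X, as under (CS-5)
Y = 2.0 * X + rng.normal(size=n)   # outcome depends on X, true effect is zero

# difference-in-means estimator and its standard error
dm = Y[T == 1].mean() - Y[T == 0].mean()
se_dm = np.sqrt(Y[T == 1].var() / (T == 1).sum() + Y[T == 0].var() / (T == 0).sum())

# residualise on E[Y | X] (known here to be 2 * X) before differencing
R = Y - 2.0 * X
adj = R[T == 1].mean() - R[T == 0].mean()
se_adj = np.sqrt(R[T == 1].var() / (T == 1).sum() + R[T == 0].var() / (T == 0).sum())
```

In this design the adjusted estimator's standard error is smaller by roughly the factor $\sqrt{\operatorname{Var}(Y)/\operatorname{Var}(Y-\mathbb{E}[Y|X])}$, mirroring why the efficient scores can reject a placebo effect that the difference-in-means estimator misses.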
\begin{table}[p] \centering \caption{Placebo tests 1988/89 under (CS-1)-(CS-4)} \label{tab:placebocs} \begin{threeparttable} \begin{tabular}{lllllllll} & \multicolumn{3}{c}{Ensemble} & \multicolumn{3}{c}{Linear/Logit regression} & Forest & Lasso \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-8} \cmidrule(lr){9-9} & original & baseline & extended & original & baseline & extended & baseline & baseline \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-8} \cmidrule(lr){9-9} $\psi^*_{CS-1}(W,\eta;\theta)$ & -0.368 & 0.223 & -0.167 & -1.338 & -3.875 & -4.832 & 7.564 & -10.850 \\ & (0.487) & (0.496) & (0.457) & (0.546) & (0.435) & (0.598) & (0.857) & (0.410) \\ $\psi^{**}_{CS-1}(W,\eta;\theta)$ & -0.359 & 0.255 & -0.527 & -1.425 & -3.598 & -8.640 & 24.452 & -15.053\\ & (0.487) & (0.470) & (0.367) & (0.544) & (0.443) & (0.516) & (1.363) & (0.365) \\ $\psi^{***}_{CS-1}(W,\eta;\theta)$ & -0.390 & 0.084 & -0.352 & -1.296 & -4.033 & -6.802 & 4.205 & -0.753 \\ & (0.481) & (0.385) & (0.287) & (0.548) & (0.431) & (0.554) & (0.765) & (0.618)\\ $\psi^{*}_{CS-2}(W,\eta;\theta)$ & -0.470 & -0.022 & -0.391 & -0.977 & -0.994 & 0.849 & 9.365 & -0.772\\ & (0.477) & (0.379) & (0.294) & (0.559) & (0.510) & (0.733) & (0.875) & (0.605)\\ $\psi^{'}_{CS-2}(W,\eta;\theta)$ & -0.429 & 0.008 & -0.406 & -0.992 & -1.070 & 0.870 & 11.121 & -0.819\\ & (0.477) & (0.381) & (0.296) & (0.556) & (0.524) & (0.730) & (0.961) & (0.616)\\ $\psi^{*}_{CS-3}(W,\eta;\theta)$ & -0.317 & 0.271 & -0.484 & -1.847 & -3.817 & -10.331 & 7.275 & -14.973\\ & (0.487) & (0.474) & (0.371) & (0.528) & (0.433) & (0.481) & (0.796) & (0.364)\\ $\psi^{*}_{CS-4}(W,\eta;\theta)$ & -0.685 & -0.378 & -0.312 & -1.138 & -1.271 & -0.749 & 0.363 & -0.762\\ & (0.477) & (0.385) & (0.277) & (0.542) & (0.494) & (0.680) & (0.594) & (0.586)\\ $\psi^{'}_{CS-4}(W,\eta;\theta)$ & -0.645 & -0.416 & -0.450 & -0.832 & -1.027 & -0.328 & 0.719 & -0.413\\ & (0.620) & (0.631) & (0.627) & (0.604) & (0.602) & (0.602) & (0.603) & (0.603)\\ 
IPW & & & & -1.808 & -0.558 & - & & \\ & & & & (0.686) & (0.658) & - & & \\ \bottomrule \bottomrule \end{tabular} \begin{tablenotes} \item Results for estimators with the Ensemble, the Random Forest and the Lasso were obtained using the cross-fitting procedure in Section \ref{sec:estinf} with $K=2$. The Ensemble Learner comprises Lasso and Random Forest. For Lasso the penalty term was chosen such that the cross-validation criterion was minimized. The Ensemble weights were chosen by minimizing out-of-sample MSE. Results for estimators with Linear/Logit regression were obtained by using the sample plug-in estimator without cross-fitting. The Logit was used for the propensity scores and linear regression for the outcome nuisances. The sample for the placebo tests contains $N=135174$ observations. Standard errors are in parentheses. \end{tablenotes} \end{threeparttable} \end{table} \begin{table}[p] \centering \caption{Results for (CS-1)-(CS-4)} \label{tab:resultscs} \begin{threeparttable} \begin{tabular}{lllllllll} & \multicolumn{3}{c}{Ensemble} & \multicolumn{3}{c}{Linear/Logit regression} & Forest & Lasso \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-8} \cmidrule(lr){9-9} & original & baseline & extended & original & baseline & extended & baseline & baseline \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-8} \cmidrule(lr){9-9} $\psi^*_{CS-1}(W,\eta;\theta)$ & -0.904 & -0.855 & 0.031 & 1.049 & 2.118 & 13.959 & 13.123 & 17.558 \\ & (0.376) & (0.351) & (0.273) & (0.458) & (0.465) & (0.782) & (0.691) & (0.795) \\ $\psi^{**}_{CS-1}(W,\eta;\theta)$ & -0.828 & -0.689 & 0.017 & 0.785 & 0.928 & 10.466 & 14.793 & 11.546\\ & (0.377) & (0.351) & (0.209) & (0.453) & (0.442) & (0.720) & (0.720) & (0.680)\\ $\psi^{***}_{CS-1}(W,\eta;\theta)$ & -0.837 & -0.356 & -0.096 & 1.144 & 0.978 & 6.457 & 23.419 & -0.120\\ & (0.375) & (0.318) & (0.184) & (0.459) & (0.443) & (0.650) & (0.887) & (0.478)\\ $\psi^{*}_{CS-2}(W,\eta;\theta)$ & -0.856 & -0.422 & -0.169 & -0.456 &
-0.845 & -1.746 & -11.963 & -0.987\\ & (0.365) & (0.307) & (0.183) & (0.434) & (0.429) & (0.542) & (0.432) & (0.491)\\ $\psi^{'}_{CS-2}(W,\eta;\theta)$ & -0.792 & -0.407 & -0.123 & -0.453 & -0.819 & -1.695 & -7.549 & -0.895\\ & (0.364) & (0.309) & (0.185) & (0.432) & (0.411) & (0.517) & (0.371) & (0.466)\\ $\psi^{*}_{CS-3}(W,\eta;\theta)$ & -0.934 & -0.748 & -0.061 & -0.121 & 0.135 & 8.760 & 34.956 & 10.029\\ & (0.378) & (0.345) & (0.198) & (0.434) & (0.426) & (0.688) & (1.127) & (0.648)\\ $\psi^{*}_{CS-4}(W,\eta;\theta)$ & -0.869 & -0.294 & -0.118 & -1.013 & -1.334 & -2.409 & -1.385 & -1.465\\ & (0.384) & (0.322) & (0.185) & (0.440) & (0.436) & (0.553) & (0.494) & (0.502)\\ $\psi^{'}_{CS-4}(W,\eta;\theta)$ & 0.903 & 0.800 & 0.284 & 0.834 & 0.501 & 0.142 & 0.842 & 0.793\\ & (0.498) & (0.514) & (0.490) & (0.486) & (0.485) & (0.485) & (0.485) & (0.487)\\ IPW & & & & 1.470 & 0.148 & - & & \\ & & & & (0.546) & (0.529) & - & & \\ \bottomrule \bottomrule \end{tabular} \begin{tablenotes} \item Results for estimators with the Ensemble, the Random Forest and the Lasso were obtained using the cross-fitting procedure in Section \ref{sec:estinf} with $K=2$. The Ensemble Learner comprises Lasso and Random Forest. For Lasso the penalty term was chosen such that the cross-validation criterion was minimized. The Ensemble weights were chosen by minimizing out-of-sample MSE. Results for estimators with Linear/Logit regression were obtained by using the sample plug-in estimator without cross-fitting. The Logit was used for the propensity scores and linear regression for the outcome nuisances. The sample contains $N=276127$ observations. Standard errors are in parentheses. \end{tablenotes} \end{threeparttable} \end{table} \subsection{Main results} The placebo tests for the different estimators lend some credibility to those that use the Ensemble Learner for first stage prediction.
Table \ref{tab:resultscs} summarizes the main results for the difference-in-differences estimators considered. For some of the specifications the single predictors and the linear estimators yield extreme results. In general, the same conclusions as for the placebo tests apply. We notice that the particular first stage estimator and the estimation-robustness of the score used can drastically shift the results. Whereas estimators for (CS-1)-(CS-3) that use the Ensemble Learner unambiguously give negative or insignificant effects of reasonable size, again some of the other estimators explode. In addition, the Ensemble Learner based estimators mostly become insignificant when more covariates are included in the model. In contrast to estimators with parametric first stages, this is not due to an increase in standard errors when more covariates are included in the model. Rather, the decreased standard errors indicate that the Ensemble Learner effectively exploits the information in growing covariate sets. Lastly, we notice that for specifications `baseline' and `extended' the efficiency-robustness trade-off from Corollary \ref{cor:releffcs1} becomes relevant. Scores under setting (CS-1) generally have higher standard errors compared to efficient scores in settings (CS-2), (CS-3) and (CS-4). While for (CS-4) using $\psi^{'}_{CS-4}(W,\eta;\theta)$ leads to a substantially higher standard error compared to the efficient score, for (CS-2) there is barely a difference. The point estimators are also similar. This is expected since the score $\psi^{'}_{CS-2}(W,\eta;\theta)$ converges under similar conditions when the outcome is bounded. \section{Conclusion}\label{sec:conc} Semiparametric difference-in-differences estimation is a non-trivial endeavour. In this study we highlight the importance of different assumptions in semiparametric difference-in-differences models.
Our results show that efficiency bounds may strongly depend on the model assumptions imposed and the data that are available. In particular, we show that there is a trade-off between the strength of the assumptions imposed and the variance lower bound that can be achieved. For estimation we provide easy-to-check conditions to derive the required convergence rates for a broad class of estimation problems. Our theoretical results allow us to integrate scores with sophisticated adjustment terms into the double machine-learning framework. Further, we show that the different semiparametric models imply estimators with different properties. Estimators that are more robust against the model assumptions imposed also rely on more sophisticated conditions for first stage prediction. Some of these conditions can be relaxed when we give up on asymptotically attaining the efficiency bound. An empirical example shows that our proposed estimators are useful in practice. However, estimation results might be highly sensitive to the choice of the first stage nuisance parameter predictor. Placebo tests indicate that, in contrast to other choices, our proposed Ensemble Learner performs well.\\ Some interesting problems are beyond the scope of this study and have to be left for further research. The performance of the estimators proposed might depend on the parameter $K$. Some theoretical results on how to choose this parameter optimally would be helpful. Our theoretical results for the different estimators suggest that the finite sample performance of the point estimators and the coverage probabilities of the variance estimators might be relatively diverse and depend on the particular DGP considered. Monte Carlo simulations could shed some more light on this subject. This study is also limited to two time periods and two groups.
Some recent advances in the semiparametric difference-in-differences literature (e.g., \cite{Callaway_SantAnna_2018}, \cite{GoodmanBacon_2018}) comprise extensions to more complicated adoption patterns. Further, in practice panel and cross-sectional data are often combined (e.g. in rotating panels). The results in Section \ref{sec:ideff} suggest that for these kinds of data combination problems efficiency gains are possible if neither the panel structure is neglected nor the cross-sectional data is thrown away. A generalization of the efficiency theory provided in this study to these settings represents yet another avenue for further research. \printbibliography \begin{appendix} \section{Proofs for Section \ref{sec:ideff}} \subsection{Proof of Theorem \ref{thm:effcs}}\label{app:effcs} We observe the data $W=(Y,D,T,X)$. For the joint distribution of the data consider a regular parametric submodel indexed by $\beta$. The density under the submodel can then be written as \begin{align*} f_W(w;\beta)=f_{Y|D,T,X}(y|d,t,x;\beta)f_{D,T|X}(d,t|x;\beta)f_{X}(x;\beta) \end{align*} which equals $f_W(w)$ at $\beta=\beta_0$.\\ The score function is defined as $S(y,d,t,x;\beta_0)=\frac{\partial \log f_W(w;\beta_0)}{\partial \beta}$ and we obtain $S(y,d,t,x;\beta_0)=S_y(y,d,t,x;\beta_0)+S_{d,t}(d,t,x;\beta_0)+S_x(x;\beta_0)$ with \begin{align*} S&_y(y,d,t,x;\beta_0)=\frac{\partial \log f_{Y|D,T,X}(y|d,t,x;\beta_0)}{\partial \beta}\\ S&_{d,t}(d,t,x;\beta_0)=\frac{\partial \log f_{D,T|X}(d,t|x;\beta_0)}{\partial \beta}\\ S&_x(x;\beta_0)=\frac{\partial \log f_{X}(x;\beta_0)}{\partial \beta} \end{align*} where \begin{align*} S_y(y,d,t,x;\beta_0)&=\sum_{d=0}^1\sum_{t=0}^1g_{d,t}S_y(d,t,x;\beta_0) \end{align*} and $S_{d,t}(d,t,x;\beta_0)$ depends on the setting (CS-1) to (CS-5) for the relation between $D$, $T$ and $X$.\\~\\ For all regular parametric submodels the variance lower bound for a model is the second moment of the projection of a function $\psi^{*}_{CS}(W;\theta)$ (with
$\mathbb{E}[\psi^{*}_{CS}(W;\theta)]=0$ and an existing second moment) on the tangent space $\mathcal{T}$ that satisfies \begin{align*} \frac{\partial \theta(\beta_0)}{\partial \beta}=\mathbb{E}\left[\psi^{*}_{CS}(W;\theta)S(Y,D,T,X;\beta_0)\right]. \end{align*} When $\psi^{*}_{CS}(W;\theta)\in\mathcal{T}$, the projection on $\mathcal{T}$ is the function itself and therefore the variance lower bound for the model is given by $\mathbb{E}[\psi^{*}_{CS}(W;\theta)^2]$. \subsubsection*{Proof for CS-1} Under the conditions in (CS-1) \begin{align*} S_{d,t}(d,t,x;\beta_0)=\sum_{d=0}^1\sum_{t=0}^1\frac{g_{d,t}}{p_{D=d,T=t}(x)}\dot{p}_{D=d,T=t}(x;\beta_0). \end{align*} The tangent space of the model is characterized by the set of functions that are mean zero and satisfy the structure of the score function \begin{align*} \mathcal{T}&=\Bigg\{\sum_{d=0}^1\sum_{t=0}^1\left(g_{d,t}S_y(d,t,x)+\frac{g_{d,t}}{p_{D=d,T=t}(x)}\dot{p}_{D=d,T=t}(x)\right)+S_x(x)\Bigg\} \end{align*} for any functions $S_y(d,t,x)$, $\dot{p}_{D=d,T=t}(x)$ and $S_x(x)$ that satisfy \begin{align*} \mathbb{E}&\left[G_{d,t}S_y(D,T,X)\right]=\mathbb{E}\left[p_{D=d,T=t}(X)\mathbb{E}\left[S_y(d,t,X)|D=d,T=t,X\right]\right]=0\\ \mathbb{E}&\left[\sum_{d=0}^1\sum_{t=0}^1\frac{G_{d,t}}{p_{D=d,T=t}(X)}\dot{p}_{D=d,T=t}(X)\right]=\mathbb{E}\left[\sum_{d=0}^1\sum_{t=0}^1\dot{p}_{D=d,T=t}(X)\right]=0\\ \mathbb{E}&\left[S_x(X)\right]=0 \end{align*} where the first and last equality follow by the mean zero property of the score function and the second equality by the fact that $\sum_{d=0}^1\sum_{t=0}^1\dot{p}_{D=d,T=t}(x)=0$ since $\sum_{d=0}^1\sum_{t=0}^1p_{D=d,T=t}(x)=1$.\\ The parameter $\theta$ is pathwise differentiable. 
For the parametric submodel we have \begin{align*} \theta(\beta)=\frac{\sum_{d=0}^1\sum_{t=0}^1(-1)^{(d+t)}\int\int y f_{Y|D,T,X}(y|d,t,x;\beta)p_{D=1,T=1}(x;\beta)f_X(x;\beta)dydx}{\int p_{D=1,T=1}(x;\beta)f_X(x;\beta)dx} \end{align*} and at $\beta=\beta_0$ \begin{align*} \frac{\partial \theta(\beta_0)}{\partial \beta}&=\frac{1}{p_{DT}}\left(\sum_{d=0}^1\sum_{t=0}^1(-1)^{(d+t)}\int\int yS_y(d,t,x;\beta_0)f_{Y|D,T,X}(y|d,t,x)p_{D=1,T=1}(x)f_X(x)dydx\right)\\ &+\frac{1}{p_{DT}}\int(m_Y(X)-\theta)\left(\dot{p}_{D=1,T=1}(x;\beta_0)+p_{D=1,T=1}(x)S_x(x;\beta_0)\right)f_X(x)dx. \end{align*} We consider the function \begin{align*} \psi^{*}_{CS-1}(W;\theta)=\frac{p_{D=1,T=1}(X)}{p_{DT}}\left(\sum_{d=0}^1\sum_{t=0}^1(-1)^{(d+t)}\frac{G_{d,t}}{p_{D=d,T=t}(X)}(Y-m_Y(d,t,X))\right)+\frac{DT}{p_{DT}}(m_Y(X)-\theta). \end{align*} Notice that $\psi^{*}_{CS-1}(W;\theta)\in\mathcal{T}$. Also for any $D=d$, $T=t$ we obtain \begin{align*} \mathbb{E}&\left[\frac{p_{D=1,T=1}(X)}{p_{DT}}\frac{G_{d,t}}{p_{D=d,T=t}(X)}(Y-m_Y(d,t,X))\times S(Y,D,T,X;\beta_0)\right]\\ &=\mathbb{E}\left[\frac{p_{D=1,T=1}(X)}{p_{DT}}\mathbb{E}\left[YS_y(d,t,X;\beta_0)|D=d,T=t,X\right]\right]\\ &=\frac{1}{p_{DT}}\int\int yS_y(d,t,x;\beta_0)f_{Y|D,T,X}(y|d,t,x)p_{D=1,T=1}(x)f_X(x)dydx \end{align*} which follows from the fact that $\mathbb{E}\left[S_y(d,t,X;\beta_0)|D=d,T=t,X\right]=0$. Further, \begin{align*} \mathbb{E}&\left[\frac{DT}{p_{DT}}(m_Y(X)-\theta)\times S(Y,D,T,X;\beta_0)\right]\\ &=\frac{1}{p_{DT}}\mathbb{E}\left[(m_Y(X)-\theta)\left(\dot{p}_{D=1,T=1}(X;\beta_0)+p_{D=1,T=1}(X)S_x(X;\beta_0)\right)\right]\\ &=\frac{1}{p_{DT}}\int (m_Y(X)-\theta)\left(\dot{p}_{D=1,T=1}(x;\beta_0)+p_{D=1,T=1}(x)S_x(x;\beta_0)\right)f_X(x)dx. \end{align*} It follows that $\psi^{*}_{CS-1}(W;\theta)$ is the efficient influence function and the variance lower bound for (CS-1) is $\mathbb{E}[\psi^{*}_{CS-1}(W;\theta)^2]$. 
\subsubsection*{Proof for CS-2} Under the conditions in (CS-2) \begin{align*} S_{d,t}(d,t,x;\beta_0)=\left(\frac{d}{p_D(x)}-\frac{1-d}{1-p_D(x)}\right)\dot{p}_D(x;\beta_0)+\left(\frac{t}{p_T(x)}-\frac{1-t}{1-p_T(x)}\right)\dot{p}_T(x;\beta_0). \end{align*} The tangent space of the model is characterized by the set of functions that are mean zero and satisfy the structure of the score function \begin{align*} \mathcal{T}&=\Bigg\{\sum_{d=0}^1\sum_{t=0}^1(g_{d,t}S_y(d,t,x))+(d-p_D(x))a(x)+(t-p_T(x))b(x)+S_x(x)\Bigg\} \end{align*} for any functions $S_y(d,t,x)$ and $S_x(x)$ that satisfy \begin{align*} \mathbb{E}&\left[S_y(d,t,X)|D=d,T=t,X\right]=0\\ \mathbb{E}&\left[S_x(X)\right]=0 \end{align*} and any square integrable functions $a(x)$ and $b(x)$.\\ The parameter $\theta$ is pathwise differentiable. For the parametric submodel we have \begin{align*} \theta(\beta)=\frac{\sum_{d=0}^1\sum_{t=0}^1(-1)^{(d+t)}\int\int y f_{Y|D,T,X}(y|d,t,x;\beta)p_{D}(x;\beta)p_{T}(x;\beta)f_X(x;\beta)dydx}{\int p_{D}(x;\beta)p_{T}(x;\beta)f_X(x;\beta)dx} \end{align*} and at $\beta=\beta_0$ \begin{align*} \frac{\partial \theta(\beta_0)}{\partial \beta}&=\frac{1}{p_{DT}}\left(\sum_{d=0}^1\sum_{t=0}^1(-1)^{(d+t)}\int\int yS_y(d,t,x;\beta_0)f_{Y|D,T,X}(y|d,t,x)p_{D}(x)p_T(x)f_X(x)dydx\right)\\ &+\frac{1}{p_{DT}}\int(m_Y(X)-\theta)\left(\dot{p}_{D}(x;\beta_0)p_T(x)+p_{D}(x)\dot{p}_T(x;\beta_0)+p_D(x)p_T(x)S_x(x;\beta_0)\right)f_X(x)dx. \end{align*} We consider the function \begin{align*} \psi^{*}_{CS-2}(W;\theta)&=\frac{p_{D}(X)p_T(X)}{p_{DT}}\left(\sum_{d=0}^1\sum_{t=0}^1(-1)^{(d+t)}\frac{G_{d,t}}{p_{D=d}(X)p_{T=t}(X)}(Y-m_Y(d,t,X))\right)\\ &+\left(\frac{p_T(X)(D-p_D(X))}{p_{DT}}+\frac{p_D(X)(T-p_T(X))}{p_{DT}}+\frac{p_D(X)p_T(X)}{p_{DT}}\right)(m_Y(X)-\theta). \end{align*} Notice that $\psi^{*}_{CS-2}(W;\theta)\in\mathcal{T}$. 
Also for any $D=d$, $T=t$ we obtain \begin{align*} \mathbb{E}&\left[\frac{p_{D}(X)p_{T}(X)}{p_{DT}}\frac{G_{d,t}}{p_{D=d}(X)p_{T=t}(X)}(Y-m_Y(d,t,X))\times S(Y,D,T,X;\beta_0)\right]\\ &=\frac{1}{p_{DT}}\int\int yS_y(d,t,x;\beta_0)f_{Y|D,T,X}(y|d,t,x)p_{D}(x)p_T(x)f_X(x)dydx. \end{align*} Further, \begin{align*} &\mathbb{E}\left[\frac{p_T(X)D}{p_{DT}}(m_Y(X)-\theta)\times S(Y,D,T,X;\beta_0)\right] =\frac{1}{p_{DT}}\mathbb{E}\left[(m_Y(X)-\theta)\left(\dot{p}_{D}(X;\beta_0)p_T(X)+p_{D}(X)p_T(X)S_x(X;\beta_0)\right)\right]\\ &\mathbb{E}\left[\frac{p_D(X)T}{p_{DT}}(m_Y(X)-\theta)\times S(Y,D,T,X;\beta_0)\right] =\frac{1}{p_{DT}}\mathbb{E}\left[(m_Y(X)-\theta)\left(p_D(X)\dot{p}_{T}(X;\beta_0)+p_{D}(X)p_T(X)S_x(X;\beta_0)\right)\right]\\ &\mathbb{E}\left[\frac{p_D(X)p_T(X)}{p_{DT}}(m_Y(X)-\theta)\times S(Y,D,T,X;\beta_0)\right] =\frac{1}{p_{DT}}\mathbb{E}\left[(m_Y(X)-\theta)\left(p_{D}(X)p_T(X)S_x(X;\beta_0)\right)\right] \end{align*} such that \begin{align*} \mathbb{E}&\left[\left(\frac{p_T(X)(D-p_D(X))}{p_{DT}}+\frac{p_D(X)(T-p_T(X))}{p_{DT}}+\frac{p_D(X)p_T(X)}{p_{DT}}\right)(m_Y(X)-\theta)\times S(Y,D,T,X;\beta_0)\right]\\ &=\frac{1}{p_{DT}}\int(m_Y(X)-\theta)\left(\dot{p}_{D}(x;\beta_0)p_T(x)+p_{D}(x)\dot{p}_T(x;\beta_0)+p_D(x)p_T(x)S_x(x;\beta_0)\right)f_X(x)dx. \end{align*} It follows that $\psi^{*}_{CS-2}(W;\theta)$ is the efficient influence function and the variance lower bound for (CS-2) is $\mathbb{E}[\psi^{*}_{CS-2}(W;\theta)^2]$. \subsubsection*{Proof for CS-3} Under the conditions in (CS-3) \begin{align*} S_{d,t}(d,t,x;\beta_0)&=t\left(\frac{d}{p_D(1,x)}-\frac{1-d}{1-p_D(1,x)}\right)\dot{p}_D(1,x;\beta_0)+(1-t)\left(\frac{d}{p_D(0,x)}-\frac{1-d}{1-p_D(0,x)}\right)\dot{p}_D(0,x;\beta_0)\\ &+\left(\frac{t}{p_T}-\frac{1-t}{1-p_T}\right)\dot{p}_T(\beta_0). 
\end{align*} The tangent space of the model is characterized by the set of functions that are mean zero and satisfy the structure of the score function \begin{align*} \mathcal{T}&=\Bigg\{\sum_{d=0}^1\sum_{t=0}^1(g_{d,t}S_y(d,t,x))+t(d-p_D(1,x))a(x)+(1-t)(d-p_D(0,x))b(x)+(t-p_T)c+S_x(x)\Bigg\} \end{align*} for any functions $S_y(d,t,x)$ and $S_x(x)$ that satisfy \begin{align*} \mathbb{E}&\left[S_y(d,t,X)|D=d,T=t,X\right]=0\\ \mathbb{E}&\left[S_x(X)\right]=0 \end{align*} and any square integrable functions $a(x)$ and $b(x)$ and some constant $c$.\\ The parameter $\theta$ is pathwise differentiable. For the parametric submodel we have \begin{align*} \theta(\beta)=\frac{\sum_{d=0}^1\sum_{t=0}^1(-1)^{(d+t)}\int\int y f_{Y|D,T,X}(y|d,t,x;\beta)p_{D}(1,x;\beta)f_X(x;\beta)dydx}{\int p_{D}(1,x;\beta)f_X(x;\beta)dx} \end{align*} and at $\beta=\beta_0$ \begin{align*} \frac{\partial \theta(\beta_0)}{\partial \beta}&=\frac{1}{p_{D}(1)}\left(\sum_{d=0}^1\sum_{t=0}^1(-1)^{(d+t)}\int\int yS_y(d,t,x;\beta_0)f_{Y|D,T,X}(y|d,t,x)p_{D}(1,x)f_X(x)dydx\right)\\ &+\frac{1}{p_{D}(1)}\int(m_Y(X)-\theta)\left(\dot{p}_{D}(1,x;\beta_0)+p_D(1,x)S_x(x;\beta_0)\right)f_X(x)dx. \end{align*} We consider the function \begin{align*} \psi^{*}_{CS-3}(W;\theta)&=\frac{p_{D}(1,X)}{p_{D}(1)}\left(\sum_{d=0}^1\sum_{t=0}^1(-1)^{(d+t)}\frac{G_{d,t}}{p_{D=d}(t,X)p_{T=t}}(Y-m_Y(d,t,X))\right)\\ &+\left(\frac{T(D-p_D(1,X))}{p_{D}(1)p_T}+\frac{p_D(1,X)}{p_{D}(1)}\right)(m_Y(X)-\theta). \end{align*} Notice that $\psi^{*}_{CS-3}(W;\theta)\in\mathcal{T}$. Also for any $D=d$, $T=t$ we obtain \begin{align*} \mathbb{E}&\left[\frac{p_{D}(1,X)}{p_{D}(1)}\frac{G_{d,t}}{p_{D=d}(t,X)p_{T=t}}(Y-m_Y(d,t,X))\times S(Y,D,T,X;\beta_0)\right]\\ &=\frac{1}{p_{D}(1)}\int\int yS_y(d,t,x;\beta_0)f_{Y|D,T,X}(y|d,t,x)p_{D}(1,x)f_X(x)dydx. 
\end{align*} Further, \begin{align*} &\mathbb{E}\left[\frac{DT}{p_{D}(1)p_T}(m_Y(X)-\theta)\times S(Y,D,T,X;\beta_0)\right]\\ &=\frac{1}{p_{D}(1)}\mathbb{E}\left[(m_Y(X)-\theta)\left(\dot{p}_{D}(1,X;\beta_0)+p_D(1,X)\frac{\dot{p}_T(\beta_0)}{p_T}+p_{D}(1,X)S_x(X;\beta_0)\right)\right]\\ &\mathbb{E}\left[\frac{p_D(1,X)T}{p_{D}(1)p_T}(m_Y(X)-\theta)\times S(Y,D,T,X;\beta_0)\right] =\frac{1}{p_{D}(1)}\mathbb{E}\left[(m_Y(X)-\theta)\left(p_D(1,X)\frac{\dot{p}_T(\beta_0)}{p_T}+p_{D}(1,X)S_x(X;\beta_0)\right)\right]\\ &\mathbb{E}\left[\frac{p_D(1,X)}{p_{D}(1)}(m_Y(X)-\theta)\times S(Y,D,T,X;\beta_0)\right] =\frac{1}{p_{D}(1)}\mathbb{E}\left[(m_Y(X)-\theta)\left(p_{D}(1,X)S_x(X;\beta_0)\right)\right] \end{align*} such that \begin{align*} \mathbb{E}&\left[\left(\frac{T(D-p_D(1,X))}{p_{D}(1)p_T}+\frac{p_D(1,X)}{p_{D}(1)}\right)(m_Y(X)-\theta)\times S(Y,D,T,X;\beta_0)\right]\\ &=\frac{1}{p_{D}(1)}\int(m_Y(X)-\theta)\left(\dot{p}_{D}(1,x;\beta_0)+p_D(1,x)S_x(x;\beta_0)\right)f_X(x)dx. \end{align*} It follows that $\psi^{*}_{CS-3}(W;\theta)$ is the efficient influence function and the variance lower bound for (CS-3) is $\mathbb{E}[\psi^{*}_{CS-3}(W;\theta)^2]$. \subsubsection*{Proof for CS-4} A similar proof can be found in \textcite{SantAnna_Zhao_2020}. We give it here for completeness.\\ Under the conditions in (CS-4) \begin{align*} S_{d,t}(d,t,x;\beta_0)=\left(\frac{d}{p_D(x)}-\frac{1-d}{1-p_D(x)}\right)\dot{p}_D(x;\beta_0)+\left(\frac{t}{p_T}-\frac{1-t}{1-p_T}\right)\dot{p}_T(\beta_0).
\end{align*} The tangent space of the model is characterized by the set of functions that are mean zero and satisfy the structure of the score function \begin{align*} \mathcal{T}&=\Bigg\{\sum_{d=0}^1\sum_{t=0}^1(g_{d,t}S_y(d,t,x))+(d-p_D(x))a(x)+(t-p_T)b+S_x(x)\Bigg\} \end{align*} for any functions $S_y(d,t,x)$ and $S_x(x)$ that satisfy \begin{align*} \mathbb{E}&\left[S_y(d,t,X)|D=d,T=t,X\right]=0\\ \mathbb{E}&\left[S_x(X)\right]=0 \end{align*} and any square integrable functions $a(x)$ and some constant $b$.\\ The parameter $\theta$ is pathwise differentiable. For the parametric submodel we have \begin{align*} \theta(\beta)=\frac{\sum_{d=0}^1\sum_{t=0}^1(-1)^{(d+t)}\int\int y f_{Y|D,T,X}(y|d,t,x;\beta)p_{D}(x;\beta)f_X(x;\beta)dydx}{\int p_{D}(x;\beta)f_X(x;\beta)dx} \end{align*} and at $\beta=\beta_0$ \begin{align*} \frac{\partial \theta(\beta_0)}{\partial \beta}&=\frac{1}{p_{D}}\left(\sum_{d=0}^1\sum_{t=0}^1(-1)^{(d+t)}\int\int yS_y(d,t,x;\beta_0)f_{Y|D,T,X}(y|d,t,x)p_{D}(x)f_X(x)dydx\right)\\ &+\frac{1}{p_{D}}\int(m_Y(X)-\theta)\left(\dot{p}_{D}(x;\beta_0)+p_D(x)S_x(x;\beta_0)\right)f_X(x)dx. \end{align*} We consider the function \begin{align*} \psi^{*}_{CS-4}(W;\theta)=\frac{p_{D}(X)}{p_{D}}\left(\sum_{d=0}^1\sum_{t=0}^1(-1)^{(d+t)}\frac{G_{d,t}}{p_{D=d}(X)p_{T=t}}(Y-m_Y(d,t,X))\right)+\frac{D}{p_D}(m_Y(X)-\theta). \end{align*} Notice that $\psi^{*}_{CS-4}(W;\theta)\in\mathcal{T}$. Also for any $D=d$, $T=t$ we obtain \begin{align*} \mathbb{E}&\left[\frac{p_{D}(X)}{p_{D}}\frac{G_{d,t}}{p_{D=d}(X)p_{T=t}}(Y-m_Y(d,t,X))\times S(Y,D,T,X;\beta_0)\right]\\ &=\frac{1}{p_{D}}\int\int yS_y(d,t,x;\beta_0)f_{Y|D,T,X}(y|d,t,x)p_{D}(x)f_X(x)dydx. \end{align*} Further, \begin{align*} \mathbb{E}\left[\frac{D}{p_{D}}(m_Y(X)-\theta)\times S(Y,D,T,X;\beta_0)\right]=\frac{1}{p_{D}}\mathbb{E}\left[(m_Y(X)-\theta)\left(\dot{p}_{D}(X;\beta_0)+p_D(X)S_x(X;\beta_0)\right)\right]. 
\end{align*} It follows that $\psi^{*}_{CS-4}(W;\theta)$ is the efficient influence function and the variance lower bound for (CS-4) is $\mathbb{E}[\psi^{*}_{CS-4}(W;\theta)^2]$. \subsubsection*{Proof for CS-5} Under the conditions in (CS-5) \begin{align*} S_{d,t}(d,t,x;\beta_0)=\left(\frac{d}{p_D}-\frac{1-d}{1-p_D}\right)\dot{p}_D(\beta_0)+\left(\frac{t}{p_T}-\frac{1-t}{1-p_T}\right)\dot{p}_T(\beta_0). \end{align*} The tangent space of the model is characterized by the set of functions that are mean zero and satisfy the structure of the score function \begin{align*} \mathcal{T}&=\Bigg\{\sum_{d=0}^1\sum_{t=0}^1(g_{d,t}S_y(d,t,x))+(d-p_D)a+(t-p_T)b+S_x(x)\Bigg\} \end{align*} for any functions $S_y(d,t,x)$ and $S_x(x)$ that satisfy \begin{align*} \mathbb{E}&\left[S_y(d,t,X)|D=d,T=t,X\right]=0\\ \mathbb{E}&\left[S_x(X)\right]=0 \end{align*} and any constants $a$ and $b$.\\ The parameter $\theta$ is pathwise differentiable. For the parametric submodel we have \begin{align*} \theta(\beta)=\sum_{d=0}^1\sum_{t=0}^1(-1)^{(d+t)}\int\int y f_{Y|D,T,X}(y|d,t,x;\beta)f_X(x;\beta)dydx \end{align*} and at $\beta=\beta_0$ \begin{align*} \frac{\partial \theta(\beta_0)}{\partial \beta}&=\sum_{d=0}^1\sum_{t=0}^1(-1)^{(d+t)}\int\int yS_y(d,t,x;\beta_0)f_{Y|D,T,X}(y|d,t,x)f_X(x)dydx\\ &+\sum_{d=0}^1\sum_{t=0}^1(-1)^{(d+t)}\int\int yf_{Y|D,T,X}(y|d,t,x)S_x(x;\beta_0)f_X(x)dydx. \end{align*} We consider the function \begin{align*} \psi^{*}_{CS-5}(W;\theta)=\sum_{d=0}^1\sum_{t=0}^1(-1)^{(d+t)}\frac{G_{d,t}}{p_{D=d}p_{T=t}}(Y-m_Y(d,t,X))+m_Y(X)-\theta. \end{align*} Notice that $\psi^{*}_{CS-5}(W;\theta)\in\mathcal{T}$. Also for any $D=d$, $T=t$ we obtain \begin{align*} \mathbb{E}&\left[\frac{G_{d,t}}{p_{D=d}p_{T=t}}(Y-m_Y(d,t,X))\times S(Y,D,T,X;\beta_0)\right]\\ &=\int\int yS_y(d,t,x;\beta_0)f_{Y|D,T,X}(y|d,t,x)f_X(x)dydx. \end{align*} Further, \begin{align*} \mathbb{E}\left[(m_Y(X)-\theta)\times S(Y,D,T,X;\beta_0)\right]=\mathbb{E}\left[m_Y(X)S_x(X;\beta_0)\right].
\end{align*} It follows that $\psi^{*}_{CS-5}(W;\theta)$ is the efficient influence function and the variance lower bound for (CS-5) is $\mathbb{E}[\psi^{*}_{CS-5}(W;\theta)^2]$. \subsection{Proof of Corollary \ref{cor:releffcs1}}\label{app:releffcs1} From Theorem \ref{thm:effcs} we directly obtain \begin{align*} &\mathbb{E}\left[\psi^{*}_{CS}(W;\theta)^2\right]=\mathbb{E}\Bigg[\frac{q_{CS;D=1,T=1}(X)^2}{q_{CS;DT}^2}\Bigg(\mathbb{E}\left[\psi^{a}_{CS}(W)^2|X\right]+\frac{q_{CS;DT}^2}{q_{CS;D=1,T=1}(X)^2}\mathbb{E}\left[\psi^{b}_{CS}(W)^2|X\right]\left(m_Y(X)-\theta\right)^2\\ &+2\frac{q_{CS;DT}}{q_{CS;D=1,T=1}(X)}\mathbb{E}\left[\psi^{a}_{CS}(W)\psi^{b}_{CS}(W)|X\right]\left(m_Y(X)-\theta\right)\Bigg)\Bigg] \end{align*} where \begin{align*} &\mathbb{E}\left[\psi^{a}_{CS}(W)^2|X\right]=\sum_{d=0}^1\sum_{t=0}^1\frac{\text{Var}(Y|D=d,T=t,X)}{q_{CS;D=d,T=t}(X)}\\ &\mathbb{E}\left[\psi^{a}_{CS}(W)\psi^{b}_{CS}(W)|X\right]=0 \end{align*} for all settings (CS-1) to (CS-5). Further, we obtain \begin{align*} &\mathbb{E}\left[\psi^{b}_{CS-1}(W)^2|X\right]=\frac{p_{D=1,T=1}(X)^2}{p_{DT}^2}\frac{1}{p_{D=1,T=1}(X)}\\ &\mathbb{E}\left[\psi^{b}_{CS-2}(W)^2|X\right]=\frac{p_{D}(X)^2p_T(X)^2}{p_{DT}^2}\left(\frac{1}{p_D(X)}+\frac{1}{p_T(X)}-1\right)\\ &\mathbb{E}\left[\psi^{b}_{CS-3}(W)^2|X\right]=\frac{p_{D}(1,X)^2}{p_{D}(1)^2}\frac{1}{p_T}\left(\frac{1}{p_D(1,X)}+p_T-1\right)\\ &\mathbb{E}\left[\psi^{b}_{CS-4}(W)^2|X\right]=\frac{p_{D}(X)^2}{p_{D}^2}\frac{1}{p_D(X)}\\ &\mathbb{E}\left[\psi^{b}_{CS-5}(W)^2|X\right]=1. \end{align*} Suppose that the assumptions of one of the settings (CS-2)-(CS-5) hold. Then the efficiency bound of (CS-1) exceeds the bound of that setting by \begin{align*} \Delta_{CS-1,CS}=\mathbb{E}\left[\frac{q_{CS;D=1,T=1}(X)^2}{q_{CS;DT}^2}\left(m_Y(X)-\theta\right)^2\left(\frac{1}{q_{CS;D=1,T=1}(X)}-\frac{q_{CS;DT}^2}{q_{CS;D=1,T=1}(X)^2}\mathbb{E}\left[\psi^{b}_{CS}(W)^2|X\right]\right)\right].
\end{align*} Similar arguments can be made for the comparison of the other bounds. \subsection{Proof of Theorem \ref{thm:effpa}}\label{app:effpa} We observe the data $W=(Y(0),Y(1),D,X)$. For the distribution of the data consider a regular parametric submodel indexed by $\beta$. The density under the submodel can then be written as \begin{align*} f_W(w;\beta)=f_{Y(0),Y(1)|D,X}(y(0),y(1)|d,x;\beta)f_{D|X}(d|x;\beta)f_{X}(x;\beta) \end{align*} which equals $f_W(w)$ at $\beta=\beta_0$.\\ The score function is defined as $S(y(0),y(1),d,x;\beta_0)=\frac{\partial \log f_W(w;\beta_0)}{\partial \beta}$ and we obtain $S(y(0),y(1),d,x;\beta_0)=S_{y(0),y(1)}(y(0),y(1),d,x;\beta_0)+S_{d}(d,x;\beta_0)+S_x(x;\beta_0)$ with \begin{align*} S&_{y(0),y(1)}(d,x;\beta_0)=\frac{\partial \log f_{Y(0),Y(1)|D,X}(y(0),y(1)|d,x;\beta_0)}{\partial \beta}\\ S&_{d}(d,x;\beta_0)=\frac{\partial \log f_{D|X}(d|x;\beta_0)}{\partial \beta}\\ S&_x(x;\beta_0)=\frac{\partial \log f_{X}(x;\beta_0)}{\partial \beta} \end{align*} where \begin{align*} S_{y(0),y(1)}(y(0),y(1),d,x;\beta_0)&=dS_{y(0),y(1)}(1,x;\beta_0)+(1-d)S_{y(0),y(1)}(0,x;\beta_0) \end{align*} and $S_{d}(d,x;\beta_0)$ again depends on the assumptions (PA-1) and (PA-2).\\~\\ For all regular parametric submodels the variance lower bound for a model is the second moment of the projection of a function $\psi^{*}_{PA}(W;\theta)$ (with $\mathbb{E}[\psi^{*}_{PA}(W;\theta)]=0$ and an existing second moment) on the tangent space $\mathcal{T}$ that satisfies \begin{align*} \frac{\partial \theta(\beta_0)}{\partial \beta}=\mathbb{E}\left[\psi^{*}_{PA}(W;\theta)S(Y(0),Y(1),D,X;\beta_0)\right]. \end{align*} When $\psi^{*}_{PA}(W;\theta)\in\mathcal{T}$, the projection on $\mathcal{T}$ is the function itself and therefore the variance lower bound for the model is given by $\mathbb{E}[\psi^{*}_{PA}(W;\theta)^2]$. 
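This projection argument can be illustrated numerically in the simplest panel setting, randomized assignment with a known treatment share as in (PA-2) below: the scaled sampling variance of the one-step estimator built from the candidate influence function should approach the second moment of that influence function. The sketch assumes a toy DGP; all names are illustrative.

```python
import numpy as np

# Toy panel DGP (hypothetical): D randomized with known share p, outcome change
# dY = Y(1) - Y(0). The one-step (doubly robust) estimator uses fitted outcome
# regressions; the influence function below uses the true nuisances.
rng = np.random.default_rng(1)
p, theta, n, reps = 0.5, 2.0, 2_000, 400

def draw():
    x = rng.uniform(size=n)
    d = rng.binomial(1, p, size=n)            # assignment independent of X
    dy = theta * d + x + rng.normal(size=n)   # observed Y(1) - Y(0)
    return x, d, dy

est = []
for _ in range(reps):
    x, d, dy = draw()
    m1 = np.polyval(np.polyfit(x[d == 1], dy[d == 1], 1), x)  # fitted m_{dY}(1, x)
    m0 = np.polyval(np.polyfit(x[d == 0], dy[d == 0], 1), x)  # fitted m_{dY}(0, x)
    est.append(np.mean(d / p * (dy - m1) - (1 - d) / (1 - p) * (dy - m0) + m1 - m0))

x, d, dy = draw()
# influence function with true nuisances m_{dY}(1,x) = theta + x, m_{dY}(0,x) = x
psi = d / p * (dy - theta - x) - (1 - d) / (1 - p) * (dy - x)
print(n * np.var(est), np.mean(psi ** 2))     # both near 1/p + 1/(1-p) = 4
```

The second moment of the true-nuisance influence function is $1/p+1/(1-p)$ here, and the scaled Monte Carlo variance of the estimator approaches the same value, illustrating that the bound $\mathbb{E}[\psi^{*}(W;\theta)^2]$ is attained.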
\subsubsection*{Proof for PA-1} Under the conditions in (PA-1) \begin{align*} S_{d}(d,x;\beta_0)=\left(\frac{d}{p_D(x)}-\frac{1-d}{1-p_D(x)}\right)\dot{p}_D(x;\beta_0). \end{align*} The tangent space of the model is characterized by the set of functions that are mean zero and satisfy the structure of the score function \begin{align*} \mathcal{T}&=\Bigg\{dS_{y(0),y(1)}(1,x)+(1-d)S_{y(0),y(1)}(0,x)+(d-p_D(x))a(x)+S_x(x)\Bigg\} \end{align*} for any functions $S_{y(0),y(1)}(d,x)$ and $S_x(x)$ that satisfy \begin{align*} \mathbb{E}&\left[S_{y(0),y(1)}(d,X)|D=d,X\right]=0\\ \mathbb{E}&\left[S_x(X)\right]=0 \end{align*} and any square integrable function $a(x)$.\\ The parameter $\theta$ is pathwise differentiable. For the parametric submodel we have \begin{align*} \theta(\beta)&=\frac{\int\int\int \left(y(1)-y(0)\right) f_{Y(0),Y(1)|D,X}(y(0),y(1)|1,x;\beta)p_{D}(x;\beta)f_X(x;\beta)dy(0)dy(1)dx}{\int p_{D}(x;\beta)f_X(x;\beta)dx}\\ &-\frac{\int\int\int \left(y(1)-y(0)\right) f_{Y(0),Y(1)|D,X}(y(0),y(1)|0,x;\beta)p_{D}(x;\beta)f_X(x;\beta)dy(0)dy(1)dx}{\int p_{D}(x;\beta)f_X(x;\beta)dx} \end{align*} and at $\beta=\beta_0$ \begin{align*} \frac{\partial \theta(\beta_0)}{\partial \beta}&=\frac{1}{p_{D}}\int\int\int \left(y(1)-y(0)\right)S_{y(0),y(1)}(1,x;\beta_0)f_{Y(0),Y(1)|D,X}(y(0),y(1)|1,x)p_{D}(x)f_X(x)dy(0)dy(1)dx\\ &-\frac{1}{p_{D}}\int\int\int \left(y(1)-y(0)\right)S_{y(0),y(1)}(0,x;\beta_0)f_{Y(0),Y(1)|D,X}(y(0),y(1)|0,x)p_{D}(x)f_X(x)dy(0)dy(1)dx\\ &+\frac{1}{p_{D}}\int(m_{\Delta Y}(x)-\theta)\left(\dot{p}_{D}(x;\beta_0)+p_{D}(x)S_x(x;\beta_0)\right)f_X(x)dx. \end{align*} We consider the function \begin{align*} \psi^{*}_{PA-1}(W;\theta)&=\frac{p_{D}(X)}{p_{D}}\left(\frac{D}{p_{D}(X)}(Y(1)-Y(0)-m_{\Delta Y}(1,X))-\frac{1-D}{1-p_{D}(X)}(Y(1)-Y(0)-m_{\Delta Y}(0,X))\right)\\ &+\frac{D}{p_D}\left(m_{\Delta Y}(X)-\theta\right). \end{align*} Notice that $\psi^{*}_{PA-1}(W;\theta)\in\mathcal{T}$.
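A quick Monte Carlo check (hypothetical DGP, true nuisances plugged in; all names are illustrative) confirms that the sample analogue of $\psi^{*}_{PA-1}$ is mean zero at the true $\theta$:

```python
import numpy as np

# Hypothetical panel DGP consistent with (PA-1): selection on X via p_D(x),
# observed change dY = Y(1) - Y(0) with arm-specific conditional means.
rng = np.random.default_rng(2)
n = 200_000
x = rng.uniform(size=n)
p_dx = 0.2 + 0.6 * x                          # true p_D(x)
d = rng.binomial(1, p_dx)
m1, m0 = 1.0 + 2 * x, x                       # true m_{dY}(1, x), m_{dY}(0, x)
dy = np.where(d == 1, m1, m0) + rng.normal(size=n)

p_d = np.mean(p_dx)                           # p_D = E[p_D(X)]
theta = np.mean(p_dx * (m1 - m0)) / p_d       # target parameter in this DGP
psi = ((p_dx / p_d) * (d / p_dx * (dy - m1) - (1 - d) / (1 - p_dx) * (dy - m0))
       + d / p_d * ((m1 - m0) - theta))
print(float(psi.mean()))                      # close to zero at the true theta
```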
Also we obtain \begin{align*} \mathbb{E}&\left[\frac{p_{D}(X)}{p_{D}}\frac{D}{p_{D}(X)}(Y(1)-Y(0)-m_{\Delta Y}(1,X))\times S(Y(0),Y(1),D,X;\beta_0)\right]\\ &=\frac{1}{p_{D}}\int\int\int \left(y(1)-y(0)\right)S_{y(0),y(1)}(1,x;\beta_0)f_{Y(0),Y(1)|D,X}(y(0),y(1)|1,x)p_{D}(x)f_X(x)dy(0)dy(1)dx\\ \mathbb{E}&\left[\frac{p_{D}(X)}{p_{D}}\frac{1-D}{1-p_{D}(X)}(Y(1)-Y(0)-m_{\Delta Y}(0,X))\times S(Y(0),Y(1),D,X;\beta_0)\right]\\ &=\frac{1}{p_{D}}\int\int\int \left(y(1)-y(0)\right)S_{y(0),y(1)}(0,x;\beta_0)f_{Y(0),Y(1)|D,X}(y(0),y(1)|0,x)p_{D}(x)f_X(x)dy(0)dy(1)dx. \end{align*} Further, \begin{align*} &\mathbb{E}\left[\frac{D}{p_{D}}(m_{\Delta Y}(X)-\theta)\times S(Y(0),Y(1),D,X;\beta_0)\right] =\frac{1}{p_{D}}\int(m_{\Delta Y}(x)-\theta)\left(\dot{p}_{D}(x;\beta_0)+p_{D}(x)S_x(x;\beta_0)\right)f_X(x)dx. \end{align*} It follows that $\psi^{*}_{PA-1}(W;\theta)$ is the efficient influence function and the variance lower bound for (PA-1) is $\mathbb{E}[\psi^{*}_{PA-1}(W;\theta)^2]$. \subsubsection*{Proof for PA-2} Under the conditions in (PA-2) \begin{align*} S_{d}(d,x;\beta_0)=\left(\frac{d}{p_D}-\frac{1-d}{1-p_D}\right)\dot{p}_D(\beta_0). \end{align*} The tangent space of the model is characterized by the set of functions that are mean zero and satisfy the structure of the score function \begin{align*} \mathcal{T}&=\Bigg\{dS_{y(0),y(1)}(1,x)+(1-d)S_{y(0),y(1)}(0,x)+(d-p_D)a+S_x(x)\Bigg\} \end{align*} for any functions $S_{y(0),y(1)}(d,x)$ and $S_x(x)$ that satisfy \begin{align*} \mathbb{E}&\left[S_{y(0),y(1)}(d,X)|D=d,X\right]=0\\ \mathbb{E}&\left[S_x(X)\right]=0 \end{align*} and a constant $a$.\\ The parameter $\theta$ is pathwise differentiable.
For the parametric submodel we have \begin{align*} \theta(\beta)&=\int\int\int \left(y(1)-y(0)\right) f_{Y(0),Y(1)|D,X}(y(0),y(1)|1,x;\beta)f_X(x;\beta)dy(0)dy(1)dx\\ &-\int\int\int \left(y(1)-y(0)\right) f_{Y(0),Y(1)|D,X}(y(0),y(1)|0,x;\beta)f_X(x;\beta)dy(0)dy(1)dx \end{align*} and at $\beta=\beta_0$ \begin{align*} \frac{\partial \theta(\beta_0)}{\partial \beta}&=\int\int\int \left(y(1)-y(0)\right)S_{y(0),y(1)}(1,x;\beta_0)f_{Y(0),Y(1)|D,X}(y(0),y(1)|1,x)f_X(x)dy(0)dy(1)dx\\ &-\int\int\int \left(y(1)-y(0)\right)S_{y(0),y(1)}(0,x;\beta_0)f_{Y(0),Y(1)|D,X}(y(0),y(1)|0,x)f_X(x)dy(0)dy(1)dx\\ &+\int m_{\Delta Y}(x)S_x(x;\beta_0)f_X(x)dx. \end{align*} We consider the function \begin{align*} \psi^{*}_{PA-2}(W;\theta)&=\frac{D}{p_{D}}(Y(1)-Y(0)-m_{\Delta Y}(1,X))-\frac{1-D}{1-p_{D}}(Y(1)-Y(0)-m_{\Delta Y}(0,X))+m_{\Delta Y}(X)-\theta. \end{align*} Notice that $\psi^{*}_{PA-2}(W;\theta)\in\mathcal{T}$. Similarly to the proof for (PA-1) it can be shown that \begin{align*} \frac{\partial \theta(\beta_0)}{\partial \beta}=\mathbb{E}\left[\psi^{*}_{PA-2}(W;\theta)\times S(Y(0),Y(1),D,X;\beta_0)\right]. \end{align*} It follows that $\psi^{*}_{PA-2}(W;\theta)$ is the efficient influence function and the variance lower bound for (PA-2) is $\mathbb{E}[\psi^{*}_{PA-2}(W;\theta)^2]$. \subsection{Proof of Corollary \ref{cor:releffcspa}}\label{app:releffcspa} Notice that for cross-sectional data we can write $Y=TY(1)+(1-T)Y(0)$ and we have $m_{Y}(d,t,X)=\mathbb{E}\left[Y(t)|D=d,X\right]$. Further, under the conditions of a panel for case (CS-1) we obtain $p_{D=d,T=t}(X)=\frac{1}{2}p_{D=d}(X)$. Hence, when we observe the panel structure the efficiency bound for (CS-1) is \begin{align*} \mathbb{E}\left[2\frac{p_D(X)^2}{p_D^2}\left(\sum_{d=0}^1\sum_{t=0}^1\frac{\text{Var}(Y(t)|D=d,X)}{p_{D=d}(X)}+\frac{(m_{\Delta Y}(X)-\theta)^2}{p_D(X)}\right)\right].
\end{align*} For \begin{align*} \Delta_{CS-1,PA-1}=\mathbb{E}\left[2\frac{p_D(X)^2}{p_D^2}\left(\sum_{d=0}^1\sum_{t=0}^1\frac{\text{Var}(Y(t)|D=d,X)}{p_{D=d}(X)}+\frac{(m_{\Delta Y}(X)-\theta)^2}{p_D(X)}\right)\right]-\mathbb{E}\left[\psi^{*}_{PA-1}(W;\theta)^2\right] \end{align*} the first claim immediately follows.\\ The second claim follows similarly by observing that when we observe the panel structure the efficiency bound for (CS-5) is \begin{align*} \mathbb{E}\left[2\sum_{d=0}^1\sum_{t=0}^1\frac{\text{Var}(Y(t)|D=d,X)}{p_{D=d}}+(m_{\Delta Y}(X)-\theta)^2\right]. \end{align*} \section{Proofs for Section \ref{sec:estinf}} \subsection{Lemma \ref{lm:firststage}} The general result of the lemma can be similarly found for example in \textcite{Chernozhukov_Chetverikov_Demirer_Duflo_Hansen_Newey_2017}. \begin{lemma}\label{lm:firststage} For a generic function $\psi(W,\cdot)$ let $\psi(W,\eta)$ be the version with true nuisance parameters and $\psi(W,\hat{\eta}_{-k})$ the version with cross-fitted nuisance parameters. Define $\bar{\psi}=\psi(W,\hat{\eta}_{-k})-\psi(W,\eta)$. Then under the conditions that (i) $\mathbb{E}\left[\bar{\psi}\right]=o_p\left(N^{-\frac{1}{2}}\right)$ and (ii) $\left\lVert\bar{\psi}\right\rVert_2=o_p(1)$ the term $\frac{1}{N}\sum_{k=1}^K\sum_{i\in\mathcal{I}^k}^n\bar{\psi}_i$ is bounded such that \begin{align*} \left\lVert\frac{1}{N}\sum_{k=1}^K\sum_{i\in\mathcal{I}^k}^n\bar{\psi}_i\right\rVert_2=o_p\left(N^{-\frac{1}{2}}\right). 
\end{align*} \end{lemma} \textit{Proof:}\\ Write \begin{align*} \left\lVert\frac{1}{N}\sum_{k=1}^K\sum_{i\in\mathcal{I}^k}^n\bar{\psi}_i\right\rVert_2\leq\left\lVert\mathbb{E}\left[\bar{\psi}\right]\right\rVert_2+\left\lVert\frac{1}{N}\sum_{k=1}^K\sum_{i\in\mathcal{I}^k}^n\left(\bar{\psi}_i-\mathbb{E}\left[\bar{\psi}\right]\right)\right\rVert_2 \end{align*} and notice that for every $k\in [1,...,K]$, conditional on the observations $W_{i\notin\mathcal{I}^k}$ used to construct $\hat{\eta}_{-k}$, \begin{align*} \mathbb{E}\left[\left\lVert\frac{1}{\sqrt{n}}\sum_{i\in\mathcal{I}^k}^n\left(\bar{\psi}_i-\mathbb{E}\left[\bar{\psi}\right]\right)\right\rVert_2^2\right]&=\mathbb{E}\left[\left\lVert\frac{1}{\sqrt{n}}\sum_{i\in\mathcal{I}^k}^n\left(\bar{\psi}_i-\mathbb{E}\left[\bar{\psi}\right]\right)\right\rVert_2^2\Bigg|W_{i\notin\mathcal{I}^k}\right]\\ &\leq\mathbb{E}\left[\left(\frac{1}{\sqrt{n}}\sum_{i\in\mathcal{I}^k}^n\left(\bar{\psi}_i-\mathbb{E}\left[\bar{\psi}\right]\right)\right)^2\Bigg|W_{i\notin\mathcal{I}^k}\right]\\ &\leq\sup_{\hat{\eta}}\mathbb{E}\left[\left(\psi(W,\hat{\eta}_{-k})-\psi(W,\eta)\right)^2\right] \end{align*} such that \begin{align*} \left\lVert\frac{1}{N}\sum_{k=1}^K\sum_{i\in\mathcal{I}^k}^n\left(\bar{\psi}_i-\mathbb{E}\left[\bar{\psi}\right]\right)\right\rVert_2\leq\frac{1}{\sqrt{N}}\left\lVert\bar{\psi}\right\rVert_2. \end{align*} Thus, $\sqrt{N}\mathbb{E}\left[\bar{\psi}\right]=o_p(1)$ and $\sqrt{N}\left\lVert\frac{1}{N}\sum_{k=1}^K\sum_{i\in\mathcal{I}^k}^n\left(\bar{\psi}_i-\mathbb{E}\left[\bar{\psi}\right]\right)\right\rVert_2=o_p(1)$ under the conditions (i) and (ii). \subsection{Proof of Theorem \ref{thm:estinf}}\label{app:estinf} We have to show that \begin{align*} \left\lVert\sqrt{N}\left(\hat{\theta}-\theta\right)-\frac{1}{\sqrt{N}}\sum_{i=1}^N\psi(W_i,\eta;\theta)\right\rVert_2=o_p(1). \end{align*} This implies that $\hat{\theta}$ is asymptotically linear with influence function $\psi(W,\eta;\theta)$.
Thus, given the above holds, the Lindeberg-L\'{e}vy Central Limit Theorem implies the second claim of the Theorem.\\ In order to derive the result write \begin{align*} \hat{\theta}-\theta&=\left(\frac{1}{N}\sum_{k=1}^K\sum_{i\in\mathcal{I}^k}^n\psi^b(W_i,\hat{\eta}_{-k})\right)^{-1}\times\frac{1}{N}\sum_{k=1}^K\sum_{i\in\mathcal{I}^k}^n\psi(W_i,\hat{\eta}_{-k};\theta)\\ &=\frac{1}{N}\sum_{k=1}^K\sum_{i\in\mathcal{I}^k}^n\psi(W_i,\hat{\eta}_{-k};\theta)+\left(\frac{1}{\frac{1}{N}\sum_{k=1}^K\sum_{i\in\mathcal{I}^k}^n\psi^b(W_i,\eta)}-1\right)\times\frac{1}{N}\sum_{k=1}^K\sum_{i\in\mathcal{I}^k}^n\psi(W_i,\hat{\eta}_{-k};\theta)\\ &+\left(\frac{1}{\frac{1}{N}\sum_{k=1}^K\sum_{i\in\mathcal{I}^k}^n\psi^b(W_i,\hat{\eta}_{-k})}-\frac{1}{\frac{1}{N}\sum_{k=1}^K\sum_{i\in\mathcal{I}^k}^n\psi^b(W_i,\eta)}\right)\times\frac{1}{N}\sum_{k=1}^K\sum_{i\in\mathcal{I}^k}^n\psi(W_i,\hat{\eta}_{-k};\theta) \end{align*} and define \begin{align*} &i=\frac{1}{N}\sum_{k=1}^K\sum_{i\in\mathcal{I}^k}^n\left(\psi(W_i,\hat{\eta}_{-k};\theta)-\psi(W_i,\eta;\theta)\right)\\ &ii=\left(\frac{1}{\frac{1}{N}\sum_{k=1}^K\sum_{i\in\mathcal{I}^k}^n\psi^b(W_i,\eta)}-1\right)\\ &iii=\left(\frac{1}{\frac{1}{N}\sum_{k=1}^K\sum_{i\in\mathcal{I}^k}^n\psi^b(W_i,\hat{\eta}_{-k})}-\frac{1}{\frac{1}{N}\sum_{k=1}^K\sum_{i\in\mathcal{I}^k}^n\psi^b(W_i,\eta)}\right). \end{align*} Suppose that $\left\lVert i\right\rVert_2=o_p\left(N^{-\frac{1}{2}}\right)$, $\left\lvert ii\right\rvert=O_p\left(N^{-\frac{1}{2}}\right)$, $\left\lvert iii\right\rvert=o_p\left(N^{-\frac{1}{2}}\right)$ and notice that $\left\lVert\frac{1}{\sqrt{N}}\sum_{i=1}^N\psi(W_i,\eta;\theta)\right\rVert_2=O_p(1)$ and \begin{align*} \sqrt{N}\left(\hat{\theta}-\theta\right)-\frac{1}{\sqrt{N}}\sum_{i=1}^N\psi(W_i,\eta;\theta)=\sqrt{N}i+\left(\frac{1}{\sqrt{N}}\sum_{i=1}^N\psi(W_i,\eta;\theta)+\sqrt{N}i\right)\times\left(ii+iii\right). 
\end{align*} We obtain \begin{align*} \left\lVert\sqrt{N}\left(\hat{\theta}-\theta\right)-\frac{1}{\sqrt{N}}\sum_{i=1}^N\psi(W_i,\eta;\theta)\right\rVert_2&\leq o_p(1)+O_p(1)O_p\left(N^{-\frac{1}{2}}\right)+O_p(1)o_p\left(N^{-\frac{1}{2}}\right)+o_p(1)O_p\left(N^{-\frac{1}{2}}\right)\\ &+o_p(1)o_p\left(N^{-\frac{1}{2}}\right)\\ &=o_p(1). \end{align*} In the following we will derive the bounds for the different terms.\\~\\ \textit{Bounding i}\\ Notice that \begin{align*} &\psi(W,\hat{\eta}_{-k};\theta)-\psi(W,\eta;\theta)\\ &=\underbrace{\frac{\hat{q}_{1}(X)_{-k}}{q_1}\psi^a(W,\hat{\eta}_{-k})+\psi^b(W,\hat{\eta}_{-k})\hat{m}_{\tilde{Y}}(X)_{-k}-\frac{q_1(X)}{q_1}\psi^a(W,\eta)-\psi^b(W,\eta)m_{\tilde{Y}}(X)}_{ia}+\underbrace{\left(\psi^b(W,\eta)-\psi^b(W,\hat{\eta}_{-k})\right)}_{ib}\theta. \end{align*} Using Assumption \ref{ass:estinfjoint} (i) and (ii) and since \begin{align*} &\mathbb{E}\left[\psi(W,\hat{\eta}_{-k};\theta)-\psi(W,\eta;\theta)\right]=\mathbb{E}[ia]+\mathbb{E}[ib]\theta\quad\text{and}\\ &\left\lVert \psi(W,\hat{\eta}_{-k};\theta)-\psi(W,\eta;\theta)\right\rVert_2\leq\lVert ia\rVert_2+\lVert ib\rVert_2\theta, \end{align*} Lemma \ref{lm:firststage} implies that $\lVert i \rVert_2=o_p\left(N^{-\frac{1}{2}}\right)$ when $\mathbb{E}[ia]=o_p\left(N^{-\frac{1}{2}}\right)$ and $\lVert ia\rVert_2=o_p(1)$.\\ For a specific $G_{\tau}$ one can write \begin{align*} ia_{G_{\tau}}&=\underbrace{\frac{G_{\tau}}{q_1}\left(\frac{\hat{q}_1(X)_{-k}}{\hat{q}_{G_{\tau}}(X)_{-k}}-\frac{q_1(X)}{q_{G_{\tau}}(X)}\right)\left(m_{\tilde{Y}}(G_{\tau}=1,X)-\hat{m}_{\tilde{Y}}(G_{\tau}=1,X)_{-k}\right)}_{ia_{G_{\tau}}1}\\ &+\underbrace{\left(\frac{G_{\tau}}{q_1}\frac{q_1(X)}{q_{G_{\tau}}(X)}-\psi^b(W,\eta)\right)\left(m_{\tilde{Y}}(G_{\tau}=1,X)-\hat{m}_{\tilde{Y}}(G_{\tau}=1,X)_{-k}\right)}_{ia_{G_{\tau}}2}\\ 
&+\underbrace{\frac{G_{\tau}}{q_1}\left(\frac{\hat{q}_1(X)_{-k}}{\hat{q}_{G_{\tau}}(X)_{-k}}-\frac{q_1(X)}{q_{G_{\tau}}(X)}\right)\left(\tilde{Y}-m_{\tilde{Y}}(G_{\tau}=1,X)\right)}_{ia_{G_{\tau}}3}\\ &+\underbrace{\left(\psi^b(W,\hat{\eta}_{-k})-\psi^b(W,\eta)\right)\hat{m}_{\tilde{Y}}(G_{\tau}=1,X)_{-k}}_{ia_{G_{\tau}}4}. \end{align*} Notice that \begin{align*} \mathbb{E}\left[ia_{G_{\tau}}1\right]&\leq\left\lVert\frac{\hat{q}_1(X)_{-k}}{\hat{q}_{G_{\tau}}(X)_{-k}}-\frac{q_1(X)}{q_{G_{\tau}}(X)}\right\rVert_2\times\left\lVert\hat{m}_{\tilde{Y}}(G_{\tau}=1,X)_{-k}-m_{\tilde{Y}}(G_{\tau}=1,X)\right\rVert_2\\ &=o_p\left(N^{-\frac{1}{2}}\right) \end{align*} by Assumption \ref{ass:estinfjoint} (v), $\mathbb{E}\left[ia_{G_{\tau}}2\right]=0$ by Assumption \ref{ass:estinfb} (i), $\mathbb{E}\left[ia_{G_{\tau}}3\right]=0$ by the Law of Iterated Expectations and $\mathbb{E}\left[ia_{G_{\tau}}4\right]=o_p\left(N^{-\frac{1}{2}}\right)$ by Assumption \ref{ass:estinfjoint} (iii).\\ Further, $\lVert ia_{G_{\tau}}\rVert_2\leq\lVert ia_{G_{\tau}}1\rVert_2+\lVert ia_{G_{\tau}}2\rVert_2+\lVert ia_{G_{\tau}}3\rVert_2+\lVert ia_{G_{\tau}}4\rVert_2$ and \begin{align*} \lVert ia_{G_{\tau}}1\rVert_2&\leq\left\lVert\frac{G_{\tau}}{q_1}\left(\frac{\hat{q}_1(X)_{-k}}{\hat{q}_{G_{\tau}}(X)_{-k}}-\frac{q_1(X)}{q_{G_{\tau}}(X)}\right)\right\rVert_{\infty}\times\left\lVert\hat{m}_{\tilde{Y}}(G_{\tau}=1,X)_{-k}-m_{\tilde{Y}}(G_{\tau}=1,X)\right\rVert_2\\ &\leq C \left\lVert\frac{\hat{q}_1(X)_{-k}}{\hat{q}_{G_{\tau}}(X)_{-k}q_{G_{\tau}}(X)}\right\rVert_{\infty}\times\left\lVert\hat{q}_{G_{\tau}}(X)_{-k}-q_{G_{\tau}}(X)\right\rVert_{\infty}\times\left\lVert\hat{m}_{\tilde{Y}}(G_{\tau}=1,X)_{-k}-m_{\tilde{Y}}(G_{\tau}=1,X)\right\rVert_2\\ &+C\left\lVert\frac{1}{q_{G_{\tau}}(X)}\right\rVert_{\infty}\times\left\lVert\hat{q}_1(X)_{-k}-q_1(X)\right\rVert_{\infty}\times\left\lVert\hat{m}_{\tilde{Y}}(G_{\tau}=1,X)_{-k}-m_{\tilde{Y}}(G_{\tau}=1,X)\right\rVert_2\\ &=o_p(1) \end{align*} by Assumptions
\ref{ass:estinfp} and \ref{ass:estinfjoint} (iv), \begin{align*} \lVert ia_{G_{\tau}}2\rVert_2&\leq\left\lVert\frac{G_{\tau}}{q_1}\frac{q_1(X)}{q_{G_{\tau}}(X)}-\psi^b(W,\eta)\right\rVert_{\infty}\times\left\lVert\hat{m}_{\tilde{Y}}(G_{\tau}=1,X)_{-k}-m_{\tilde{Y}}(G_{\tau}=1,X)\right\rVert_2\\ &=o_p(1) \end{align*} by Assumptions \ref{ass:estinfb} (iii) and \ref{ass:estinfjoint} (iv), \begin{align*} \lVert ia_{G_{\tau}}3\rVert_2&\leq C\left\lVert\left(\frac{\hat{q}_1(X)_{-k}}{\hat{q}_{G_{\tau}}(X)_{-k}}-\frac{q_1(X)}{q_{G_{\tau}}(X)}\right)G_{\tau}(\tilde{Y}-m_{\tilde{Y}}(G_{\tau}=1,X))\right\rVert_2\\ &=C\left(\mathbb{E}\left[\left(\frac{\hat{q}_1(X)_{-k}}{\hat{q}_{G_{\tau}}(X)_{-k}}-\frac{q_1(X)}{q_{G_{\tau}}(X)}\right)^2G_{\tau}(\tilde{Y}-m_{\tilde{Y}}(G_{\tau}=1,X))^2\right]\right)^{\frac{1}{2}}\\ &\leq C\left\lVert q_{G_{\tau}}(X)\right\rVert_{\infty}\left(\mathbb{E}\left[\left(\frac{\hat{q}_1(X)_{-k}}{\hat{q}_{G_{\tau}}(X)_{-k}}-\frac{q_1(X)}{q_{G_{\tau}}(X)}\right)^2\text{Var}(\tilde{Y}|G_{\tau}=1,X)\right]\right)^{\frac{1}{2}}\\ &\leq C\left\lVert\frac{\hat{q}_1(X)_{-k}}{\hat{q}_{G_{\tau}}(X)_{-k}}-\frac{q_1(X)}{q_{G_{\tau}}(X)}\right\rVert_2\\ &=o_p(1) \end{align*} by Assumptions \ref{ass:estinfp}, \ref{ass:estinfy} (ii) and \ref{ass:estinfjoint} (iv), and lastly \begin{align*} \lVert ia_{G_{\tau}}4\rVert_2&\leq\left\lVert\psi^b(W,\hat{\eta}_{-k})-\psi^b(W,\eta)\right\rVert_{\infty}\times\left\lVert\hat{m}_{\tilde{Y}}(G_{\tau}=1,X)_{-k}-m_{\tilde{Y}}(G_{\tau}=1,X)\right\rVert_2\\ &+\left\lVert\left(\psi^b(W,\hat{\eta}_{-k})-\psi^b(W,\eta)\right)m_{\tilde{Y}}(G_{\tau}=1,X)\right\rVert_2\\ &=o_p(1) \end{align*} by Assumptions \ref{ass:estinfjoint} (i), (iii) and (iv).\\~\\ \textit{Bounding ii} \begin{align*} \lvert ii\rvert&\leq\left\lvert1-\frac{1}{N}\sum_{k=1}^K\sum_{i\in\mathcal{I}^k}^n\psi^b(W_i,\eta)\right\rvert\times\left\lvert\left(1+\left(\frac{1}{N}\sum_{k=1}^K\sum_{i\in\mathcal{I}^k}^n\psi^b(W_i,\eta)-1\right)\right)^{-1}\right\rvert\\
&=O_p\left(N^{-\frac{1}{2}}\right)\left(1+o_p(1)\right)^{-1}\\ &=O_p\left(N^{-\frac{1}{2}}\right) \end{align*} since $\mathbb{E}\left[\psi^b(W,\eta)\right]=1$ by Assumption \ref{ass:estinfb} (i) and the centered sample mean of $\psi^b(W_i,\eta)$ is $O_p\left(N^{-\frac{1}{2}}\right)$ by Chebyshev's inequality.\\~\\ \textit{Bounding iii} \begin{align*} \lvert iii\rvert&\leq\left\lvert\underbrace{\frac{1}{\frac{1}{N}\sum_{k=1}^K\sum_{i\in\mathcal{I}^k}^n\psi^b(W_i,\hat{\eta}_{-k})}}_{iiia}\right\rvert\times\left\lvert\underbrace{\frac{1}{\frac{1}{N}\sum_{k=1}^K\sum_{i\in\mathcal{I}^k}^n\psi^b(W_i,\eta)}}_{iiib}\right\rvert\\ &\times\left\lvert\underbrace{\frac{1}{N}\sum_{k=1}^K\sum_{i\in\mathcal{I}^k}^n\psi^b(W_i,\eta)-\frac{1}{N}\sum_{k=1}^K\sum_{i\in\mathcal{I}^k}^n\psi^b(W_i,\hat{\eta}_{-k})}_{iiic}\right\rvert \end{align*} where $\lvert iiia\rvert\leq\lVert iiia\rVert_{\infty}\leq C$ and $\lvert iiib\rvert\leq\lVert iiib\rVert_{\infty}\leq C$ by Assumption \ref{ass:estinfb} (ii). Further, the conditions in Assumption \ref{ass:estinfjoint} (i) and (ii) in combination with Lemma \ref{lm:firststage} imply that $\lvert iiic\rvert=o_p\left(N^{-\frac{1}{2}}\right)$. Therefore $\lvert iii\rvert=O_p(1)O_p(1)o_p\left(N^{-\frac{1}{2}}\right)=o_p\left(N^{-\frac{1}{2}}\right)$. \subsection{Proof of Corollary \ref{cor:estcs}}\label{app:estcs} $\mathbb{E}\left[\psi^*_{CS}(W,\eta;\theta)\right]=0$ by Assumption \ref{ass:idcs} and the respective condition in settings (CS-1)-(CS-5). For all $\psi^{*b}_{CS}(W,\eta)$ it is easy to see that the conditions in Assumption \ref{ass:estinfb} are satisfied. Therefore it remains to show that the conditions in Assumption \ref{ass:estinfjoint} hold under the convergence conditions stated in the corollary. \begin{itemize} \item[(a)] For $\psi^{*b}_{CS-1}(W,\eta)=\frac{DT}{p_{DT}}$ the conditions in Assumption \ref{ass:estinfjoint} (i)-(iii) are trivially satisfied.
Notice that $m_Y(1,1,X)$ is redundant and that for all $(d,t)\in\{(0,1),(1,0),(0,0)\}$ \begin{align*} &\left\lVert\frac{\hat{p}_{D=1,T=1}(X)_{-k}}{\hat{p}_{D=d,T=t}(X)_{-k}}-\frac{p_{D=1,T=1}(X)}{p_{D=d,T=t}(X)}\right\rVert_2\\ &\leq\left\lVert\frac{1}{\hat{p}_{D=d,T=t}(X)_{-k}}\right\rVert_{\infty}\times\left\lVert\hat{p}_{D=1,T=1}(X)_{-k}-p_{D=1,T=1}(X)\right\rVert_2\\ &+\left\lVert\frac{p_{D=1,T=1}(X)}{\hat{p}_{D=d,T=t}(X)_{-k}p_{D=d,T=t}(X)}\right\rVert_{\infty}\times\left\lVert\hat{p}_{D=d,T=t}(X)_{-k}-p_{D=d,T=t}(X)\right\rVert_2\\ &\leq C\times\left(\epsilon_{p_{D=1,T=1}(X)}+\epsilon_{p_{D=d,T=t}(X)}\right) \end{align*} by Assumption \ref{ass:estinfp}. This immediately implies that Assumptions \ref{ass:estinfjoint} (iv) and (v) are satisfied under the conditions stated. \item[(b)] For $\psi^{*b}_{CS-2}(W,\eta)=\frac{Dp_T(X)+Tp_D(X)-p_D(X)p_T(X)}{p_{DT}}$ write \begin{align*} &\psi^{*b}_{CS-2}(W,\hat{\eta}_{-k})-\psi^{*b}_{CS-2}(W,\eta)\\ &=\frac{1}{p_{DT}}\left(D-p_D(X)\right)\left(\hat{p}_T(X)_{-k}-p_T(X)\right)+\frac{1}{p_{DT}}\left(T-p_T(X)\right)\left(\hat{p}_D(X)_{-k}-p_D(X)\right)\\ &+\frac{1}{p_{DT}}\left(\hat{p}_T(X)_{-k}-p_T(X)\right)\left(p_D(X)-\hat{p}_D(X)_{-k}\right). \end{align*} Then $\left\lVert\psi^{*b}_{CS-2}(W,\hat{\eta}_{-k})-\psi^{*b}_{CS-2}(W,\eta)\right\rVert_2\leq C\left(\epsilon_{p_T(X)}+\epsilon_{p_D(X)}\right)=o_p(1)$ by Assumption \ref{ass:estinfp} and the convergence conditions stated. Similarly, $\left\lVert\psi^{*b}_{CS-2}(W,\hat{\eta}_{-k})-\psi^{*b}_{CS-2}(W,\eta)\right\rVert_{\infty}\leq C\left(1+\left\lVert\hat{p}_T(X)_{-k}-p_T(X)\right\rVert_{\infty}\right)\left(1+\left\lVert\hat{p}_D(X)_{-k}-p_D(X)\right\rVert_{\infty}\right)=O_p(1)$ by Assumption \ref{ass:estinfp}.
This verifies Assumption \ref{ass:estinfjoint} (i).\\ Also $\mathbb{E}\left[\psi^{*b}_{CS-2}(W,\hat{\eta}_{-k})-\psi^{*b}_{CS-2}(W,\eta)\right]\leq\epsilon_{p_T(X)}\times\epsilon_{p_D(X)}=o_p\left(N^{-\frac{1}{2}\frac{r}{r-1}}\right)$ which verifies Assumption \ref{ass:estinfjoint} (ii).\\ To verify Assumption \ref{ass:estinfjoint} (iii), notice that for $r>2$ \begin{align*} &\left\lVert\left(\hat{p}_T(X)_{-k}-p_T(X)\right)\left(p_D(X)-\hat{p}_D(X)_{-k}\right)\right\rVert_{\frac{r}{r-1}}\\ &=\left(\mathbb{E}\left[\left(\hat{p}_T(X)_{-k}-p_T(X)\right)^{\frac{r}{r-1}}\left(p_D(X)-\hat{p}_D(X)_{-k}\right)^{\frac{r}{r-1}}\right]\right)^\frac{r-1}{r}\\ &\leq\left\lVert\hat{p}_D(X)_{-k}-p_D(X)\right\rVert_{\infty}^{\frac{1}{r}}\times\left\lVert\hat{p}_T(X)_{-k}-p_T(X)\right\rVert_{\infty}^{\frac{1}{r}}\times\left(\epsilon_{p_T(X)}\times\epsilon_{p_D(X)}\right)^\frac{r-1}{r}\\ &=o_p\left(N^{-\frac{1}{2}}\right). \end{align*} Then for any $d,t\in\{0,1\}$ \begin{align*} &\mathbb{E}\left[\left(\psi^{*b}_{CS-2}(W,\hat{\eta}_{-k})-\psi^{*b}_{CS-2}(W,\eta)\right)\left(\hat{m}_Y(d,t,X)_{-k}-m_Y(d,t,X)\right)\right]\\ &\leq\left\lVert\left(\hat{p}_T(X)_{-k}-p_T(X)\right)\left(p_D(X)-\hat{p}_D(X)_{-k}\right)\right\rVert_{\frac{r}{r-1}}\times\left\lVert\hat{m}_Y(d,t,X)_{-k}-m_Y(d,t,X)\right\rVert_r\\ &=o_p\left(N^{-\frac{1}{2}}\right),\\ &\mathbb{E}\left[\left(\psi^{*b}_{CS-2}(W,\hat{\eta}_{-k})-\psi^{*b}_{CS-2}(W,\eta)\right)m_Y(d,t,X)\right]\\ &\leq\left\lVert\left(\hat{p}_T(X)_{-k}-p_T(X)\right)\left(p_D(X)-\hat{p}_D(X)_{-k}\right)\right\rVert_{\frac{r}{r-1}}\times\left\lVert m_Y(d,t,X)\right\rVert_r\\ &=o_p\left(N^{-\frac{1}{2}}\right) \end{align*} under the conditions stated in the corollary. 
Further, \begin{align*} \left\lVert\left(D-p_D(X)\right)\left(\hat{p}_T(X)_{-k}-p_T(X)\right)\right\rVert_{\frac{2r}{r-2}}&\leq\left\lVert\hat{p}_T(X)_{-k}-p_T(X)\right\rVert_{\frac{2r}{r-2}}\\ &\leq\left\lVert\hat{p}_T(X)_{-k}-p_T(X)\right\rVert_{\infty}^{\frac{r+2}{2r}}\times\left\lVert\hat{p}_T(X)_{-k}-p_T(X)\right\rVert_2^{\frac{r-2}{2r}}\\ &=o_p(1), \end{align*} similarly $\left\lVert\left(T-p_T(X)\right)\left(\hat{p}_D(X)_{-k}-p_D(X)\right)\right\rVert_{\frac{2r}{r-2}}=o_p(1)$ and \begin{align*} &\left\lVert\left(\hat{p}_T(X)_{-k}-p_T(X)\right)\left(p_D(X)-\hat{p}_D(X)_{-k}\right)\right\rVert_{\frac{2r}{r-2}}\\ &\leq\left\lVert\hat{p}_T(X)_{-k}-p_T(X)\right\rVert_{\infty}^{\frac{r+2}{2r}}\times\left\lVert\hat{p}_D(X)_{-k}-p_D(X)\right\rVert_{\infty}^{\frac{r+2}{2r}}\times\left(\epsilon_{p_T(X)}\times\epsilon_{p_D(X)}\right)^{\frac{r-2}{2r}}\\ &=o_p(1) \end{align*} by Assumption \ref{ass:estinfp}, the fact that $r>2$ and the convergence conditions stated. Then for all $d,t\in\{0,1\}$ \begin{align*} \left\lVert\left(\psi^{*b}_{CS-2}(W,\hat{\eta}_{-k})-\psi^{*b}_{CS-2}(W,\eta)\right)m_Y(d,t,X)\right\rVert_2&\leq\left\lVert\psi^{*b}_{CS-2}(W,\hat{\eta}_{-k})-\psi^{*b}_{CS-2}(W,\eta)\right\rVert_{\frac{2r}{r-2}}\times\left\lVert m_Y(d,t,X)\right\rVert_{r}\\ &=o_p(1). \end{align*} Assumption \ref{ass:estinfjoint} (iv) and (v) directly follow from the conditions stated. \item[(c)] For $\psi^{*b}_{CS-3}(W,\eta)=\frac{T\left(D-p_D(1,X)\right)}{p_{D}(1)p_T}+\frac{p_D(1,X)}{p_D(1)}$ write \begin{align*} \psi^{*b}_{CS-3}(W,\hat{\eta}_{-k})-\psi^{*b}_{CS-3}(W,\eta)=\frac{1}{p_D(1)p_T}\left(\left(p_T-T\right)\left(\hat{p}_{D}(1,X)_{-k}-p_D(1,X)\right)\right). 
\end{align*} Since \begin{align*} &\left\lVert\psi^{*b}_{CS-3}(W,\hat{\eta}_{-k})-\psi^{*b}_{CS-3}(W,\eta)\right\rVert_2\leq\epsilon_{p_D(1,X)}=o_p(1)\quad\text{and}\\ &\left\lVert\psi^{*b}_{CS-3}(W,\hat{\eta}_{-k})-\psi^{*b}_{CS-3}(W,\eta)\right\rVert_{\infty}\leq\left\lVert\hat{p}_{D}(1,X)_{-k}-p_D(1,X)\right\rVert_{\infty}=O_p(1) \end{align*} Assumption \ref{ass:estinfjoint} (i) is verified.\\ Assumption \ref{ass:estinfjoint} (ii) holds since trivially $\mathbb{E}\left[\psi^{*b}_{CS-3}(W,\hat{\eta}_{-k})-\psi^{*b}_{CS-3}(W,\eta)\right]=0$ by the Law of Iterated Expectations. Similarly, for all $d,t\in\{0,1\}$ the first part of Assumption \ref{ass:estinfjoint} (iii) is verified by $\mathbb{E}\left[\left(\psi^{*b}_{CS-3}(W,\eta)-\psi^{*b}_{CS-3}(W,\hat{\eta}_{-k})\right)\hat{m}_Y(d,t,X)_{-k}\right]=0$. For the second part of Assumption \ref{ass:estinfjoint} (iii) write \begin{align*} \left\lVert\left(\psi^{*b}_{CS-3}(W,\eta)-\psi^{*b}_{CS-3}(W,\hat{\eta}_{-k})\right)m_Y(d,t,X)\right\rVert_2&\leq\left\lVert\hat{p}_{D}(1,X)_{-k}-p_D(1,X)\right\rVert_{\frac{2r}{r-2}}\times\left\lVert m_Y(d,t,X)\right\rVert_{r}\\ &\leq\left\lVert\hat{p}_{D}(1,X)_{-k}-p_D(1,X)\right\rVert_{\infty}^{\frac{r+2}{2r}}\times\left(\epsilon_{p_D(1,X)}\right)^\frac{r-2}{2r}\\ &=o_p(1) \end{align*} for all $d,t\in\{0,1\}$ using Assumption \ref{ass:estinfp}, the fact that $r>2$ and the convergence conditions stated.\\ Assumptions \ref{ass:estinfjoint} (iv) and (v) directly follow from the conditions stated. \item[(d)] For $\psi^{*b}_{CS-4}(W,\eta)=\frac{D}{p_{D}}$ the conditions in Assumption \ref{ass:estinfjoint} (i)-(iii) are trivially satisfied. Assumptions \ref{ass:estinfjoint} (iv) and (v) directly follow from the conditions stated. \item[(e)] For $\psi^{*b}_{CS-5}(W,\eta)=1$ the conditions in Assumption \ref{ass:estinfjoint} (i)-(iii) are trivially satisfied. Assumptions \ref{ass:estinfjoint} (iv) and (v) directly follow from the conditions stated.
\end{itemize} \subsection{Proof of Corollary \ref{cor:estpa}}\label{app:estpa} $\mathbb{E}\left[\psi^*_{PA}(W,\eta;\theta)\right]=0$ using Assumptions \ref{ass:idpa} and the respective conditions (PA-1) or (PA-2). For all $\psi^{*b}_{PA}(W,\eta)$ it is easy to see that the conditions in Assumption \ref{ass:estinfb} and Assumption \ref{ass:estinfjoint} (i)-(iii) are satisfied. Assumptions \ref{ass:estinfjoint} (iv) and (v) directly follow from the conditions stated in the corollary. \subsection{Proof of Corollary \ref{cor:redcs2}}\label{app:redcs2} Assumption \ref{ass:estinfb} and Assumption \ref{ass:estinfjoint} (i)-(iii) are trivially satisfied when using $\psi'_{CS-2}(W;\theta)$. Notice that $m_Y(1,1,X)$ is redundant. Assumptions \ref{ass:estinfjoint} (iv) and (v) then directly follow from the conditions stated in the corollary. \subsection{Proof of Corollary \ref{cor:redcs4}}\label{app:redcs4} Assumption \ref{ass:estinfb} and Assumption \ref{ass:estinfjoint} (i)-(iii) follow similarly to the proof of Corollary \ref{cor:estcs} for $(d,t)\in\{(0,1),(0,0)\}$. Notice that $m_Y(1,1,X)$ and $m_Y(1,0,X)$ are redundant. Assumption \ref{ass:estinfb} and Assumption \ref{ass:estinfjoint} (i)-(iii) are then trivially satisfied for $(d,t)\in\{(1,1),(1,0)\}$. Since $m_Y(1,1,X)$ and $m_Y(1,0,X)$ are redundant, Assumptions \ref{ass:estinfjoint} (iv) and (v) then directly follow from the conditions stated in the corollary. \end{appendix} \end{document}
\section{Introduction} Type theories with dependent types were originally defined by Per Martin-L\"{o}f, who introduced several versions of the system \cite{MLTT72,MLTT73,MLTT79}. There were also several theories and extensions of Martin-L\"{o}f's theory proposed by different authors (\cite{CoC,luo94} to name a few). These theories may have different inference rules, different computation rules, and different constructions. Many of these theories have common parts and similar properties, but the problem is that there is no general definition of a type theory such that all of these theories would be a special case of this definition, so that their properties could be studied in general and applied to a specific theory when necessary. In this paper we propose such a definition based on the notion of essentially algebraic theories. Another problem of the usual way of defining type theories is that they are not composable. Some constructions in type theories are independent of each other (such as $\Pi$, $\Sigma$, and $Id$ types), and others may depend on other constructions (such as universes), so we could hope that we can study these constructions independently (at least if they are of the first kind) and deduce properties of the combined theory from the properties of these basic constructions. But this is not the way it is usually done. For example, constructing models of dependent type theories is a difficult task because of the so-called coherence problem. There are several proposed solutions to this problem, but the question we are interested in is how to combine them. Often only the categorical side of the question is considered, but some authors do consider specific theories \cite{streicher,pitts}, and the problem in this case is that their work cannot be applied to other similar theories (at least formally). When defining a type theory, there are certain questions to be addressed regarding syntactic traits of the theory.
One such question is how many arguments to different constructions can be omitted and how to restore them when constructing a model of the theory. For example, we want to define application as a function of two arguments $app(f,a)$, but sometimes it is convenient to have additional arguments which allow us to infer the type of $f$. It is possible to prove that additional information in the application term may be omitted (for example, see \cite{streicher}), but it is a nontrivial task. Another question of this sort is whether we should use a typed or an untyped equality. Typed equality is easier to handle when defining a model of the theory, but untyped equality is closer to the actual implementation of the language. The algebraic approach allows us to separate these syntactic details from essential aspects of the theory. Yet another problem is that some constructions may be defined in several different ways. For example, $\Sigma$ types can be defined using projections (\rexample{sigma-eta}) and using an eliminator (\rexample{sigma-no-eta}). The question then is whether these definitions are equivalent in some sense. The difficulty of this question stems from the fact that some equivalences may hold in one definition judgmentally, but in the other only propositionally; so it may be difficult (or impossible) to construct a map from the first version of the definition to the second one. In this paper, using the formalism of essentially algebraic theories, we introduce the notion of \emph{algebraic dependent type theories} which provides a possible solution to the problems described above. We define a category of algebraic dependent type theories. Coproducts and more generally colimits in this category allow us to combine simple theories into more complex ones. For example, the theory with $\Sigma$, $\Pi$ and $Id$ types may be described as the coproduct $T_\Sigma \amalg T_\Pi \amalg T_{Id}$, where $T_\Sigma$, $T_\Pi$ and $T_{Id}$ are the theories of $\Sigma$, $\Pi$ and $Id$ types respectively.
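For orientation, the two formulations of $\Sigma$ types mentioned above can be sketched in conventional named syntax (a standard presentation; the precise algebraic versions are the ones given in \rexample{sigma-eta} and \rexample{sigma-no-eta}). The definition with projections provides the rules

\begin{center}
\AxiomC{$\Gamma \vdash p : \Sigma_{x : A} B$}
\UnaryInfC{$\Gamma \vdash \pi_1(p) : A$}
\DisplayProof
\qquad
\AxiomC{$\Gamma \vdash p : \Sigma_{x : A} B$}
\UnaryInfC{$\Gamma \vdash \pi_2(p) : B[\pi_1(p)/x]$}
\DisplayProof
\end{center}
\medskip

often together with the judgmental $\eta$-rule $p = (\pi_1(p), \pi_2(p))$. The definition with an eliminator instead provides

\begin{center}
\AxiomC{$\Gamma, p : \Sigma_{x : A} B \vdash C\ type$}
\AxiomC{$\Gamma, x : A, y : B \vdash c : C[(x,y)/p]$}
\BinaryInfC{$\Gamma, p : \Sigma_{x : A} B \vdash split(c) : C$}
\DisplayProof
\end{center}
\medskip

with the computation rule $split(c)[(a,b)/p] = c[a/x, b/y]$. In the first formulation the equation $p = (\pi_1(p), \pi_2(p))$ holds judgmentally, while in the second it can only be proved propositionally (using $split$ itself), which is the source of the difficulty mentioned above.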
There is a natural notion of a model of an essentially algebraic theory. Thus the algebraic approach to defining type theories automatically equips every type theory with a (locally presentable) category of its models. We will show that models of the initial theory are precisely contextual categories, and that models of an arbitrary theory are contextual categories with an additional structure (which depends on the theory). An example of a general construction that works for all theories with enough structure is the construction of a model structure on the category of models described in \cite{alg-models}. Since we have a category of type theories, there is a natural notion of equivalence between them, namely isomorphism. In most cases this equivalence is too strong, so it is necessary to consider weaker notions of equivalence, but in some cases it might be useful. For example, if two theories differ only by the number of arguments to some of the constructions, then they are isomorphic (assuming omitted arguments can be inferred from the rest). A weaker notion of equivalence of theories is Morita equivalence. Two theories are Morita equivalent if there is a Quillen equivalence between the categories of models of these theories. We will not consider this notion in this paper. Usually, we can use all constructions of a type theory in every context. We consider an additional structure on theories which allows us to do this. We call theories with this additional structure \emph{prestable}. Then, an algebraic dependent type theory is a prestable theory with substitutions which commute with every operation in the theory. We also consider \emph{stable} theories in which all axioms are stable under context extensions. If we think of models of a prestable theory as some sort of category with some additional structure, then the prestable structure allows us to pass to slices of this category.
Then a prestable theory is stable if not only the category itself but also every slice category has this additional structure. The paper is organized as follows. In section 2, we define the category of partial Horn theories and discuss its properties. In section 3, we define an example of a partial Horn theory and prove that the category of its models is equivalent to the category of contextual categories. In section 4, we define algebraic type theories and describe a simplified version of the syntax that can be used with these theories. In section 5, we give a few standard examples of such theories. In particular, we show that the construction that adds a universe to the system is functorial. \section{Partial Horn theories} \label{sec:PHT} There are several equivalent ways of defining essentially algebraic theories (\cite{LPC}, \cite{GAT}, \cite{PHL}, \cite[D 1.3.4]{elephant}). We use the approach introduced in \cite{PHL} under the name of partial Horn theories since it is the most convenient one. There is a structure of a category on partial Horn theories. A \emph{generalized morphism} between theories $\mathbb{T}$ and $\mathbb{T}'$ is a model of $\mathbb{T}$ in $\mathcal{C}_{\mathbb{T}'}$, where $\mathcal{C}_{\mathbb{T}'}$ is the classifying category for $\mathbb{T}'$. We will work with theories that have some fixed set of sorts. Thus we need a notion of morphisms which preserve sorts. Of course, we could restrict the notion of a generalized morphism, but there is another definition of morphisms, which is more explicit. Let us recall the basic definitions from \cite{PHL}. A many sorted first-order signature $(\mathcal{S},\mathcal{F},\mathcal{P})$ consists of a set $\mathcal{S}$ of sorts, a set $\mathcal{F}$ of function symbols and a set $\mathcal{P}$ of predicate symbols. Each function symbol $\sigma$ is equipped with a signature of the form $\sigma : s_1 \times \ldots \times s_k \to s$, where $s_1$, \ldots $s_k$, $s$ are sorts.
Each predicate symbol $R$ is equipped with a signature of the form $R : s_1 \times \ldots \times s_k$. An atomic formula is an expression either of the form $t_1 = t_2$ or of the form $R(t_1, \ldots t_n)$, where $R$ is a predicate symbol and $t_1$, \ldots $t_n$ are terms. We abbreviate $t = t$ to $t\!\downarrow$. A Horn formula is an expression of the form $\varphi_1 \land \ldots \land \varphi_n$, where $\varphi_1$, \ldots $\varphi_n$ are atomic formulas. A sequent is an expression of the form $\varphi \sststile{}{x_1, \ldots x_n} \psi$, where $x_1$, \ldots $x_n$ are variables and $\varphi$ and $\psi$ are Horn formulas such that $FV(\varphi) \cup FV(\psi) \subseteq \{ x_1, \ldots x_n \}$. A \emph{partial Horn theory} consists of a signature and a set of Horn sequents in this signature. An $\mathcal{S}$-set $M$ is a collection of sets $\{ M_s \}_{s \in \mathcal{S}}$. Let $V$ be an $\mathcal{S}$-set. Then the $\mathcal{S}$-set of terms of $\mathbb{T}$ with free variables in $V$ will be denoted by $Term_\mathbb{T}(V)$. The set of formulas of $\mathbb{T}$ with free variables in $V$ will be denoted by $Form_\mathbb{T}(V)$. An interpretation $M$ of a signature $(\mathcal{S},\mathcal{F},\mathcal{P})$ is an $\mathcal{S}$-set $M$ together with a collection of \emph{partial} functions $M(\sigma) : M_{s_1} \times \ldots \times M_{s_k} \to M_s$ for every function symbol $\sigma : s_1 \times \ldots \times s_k \to s$ of $\mathbb{T}$ and relations $M(R) \subseteq M_{s_1} \times \ldots \times M_{s_k}$ for every predicate symbol $R : s_1 \times \ldots \times s_k$. A model of a partial Horn theory $\mathbb{T}$ is an interpretation of the underlying signature such that the axioms of $\mathbb{T}$ hold in this interpretation. The category of models of $\mathbb{T}$ will be denoted by $\Mod{\mathbb{T}}$. The rules of \emph{partial Horn logic} are listed below. A \emph{theorem} of a partial Horn theory $\mathbb{T}$ is a sequent derivable from $\mathbb{T}$ in this logic.
\begin{center} $\varphi \sststile{}{V} \varphi$ \axlabel{b1} \qquad \AxiomC{$\varphi \sststile{}{V} \psi$} \AxiomC{$\psi \sststile{}{V} \chi$} \RightLabel{\axlabel{b2}} \BinaryInfC{$\varphi \sststile{}{V} \chi$} \DisplayProof \qquad $\varphi \sststile{}{V} \top$ \axlabel{b3} \end{center} \medskip \begin{center} $\varphi \land \psi \sststile{}{V} \varphi$ \axlabel{b4} \qquad $\varphi \land \psi \sststile{}{V} \psi$ \axlabel{b5} \qquad \AxiomC{$\varphi \sststile{}{V} \psi$} \AxiomC{$\varphi \sststile{}{V} \chi$} \RightLabel{\axlabel{b6}} \BinaryInfC{$\varphi \sststile{}{V} \psi \land \chi$} \DisplayProof \end{center} \medskip \begin{center} $\sststile{}{x} x\!\downarrow$ \axlabel{a1} \qquad $x = y \land \varphi \sststile{}{V,x,y} \varphi[y/x]$ \axlabel{a2} \end{center} \medskip \begin{center} \AxiomC{$\varphi \sststile{}{V} \psi$} \RightLabel{, $x \in FV(\varphi)$ \axlabel{a3}} \UnaryInfC{$\varphi[t/x] \sststile{}{V,V'} \psi[t/x]$} \DisplayProof \end{center} \medskip Note that this set of rules is equivalent to the one described in \cite{PHL}. In particular, the following sequents are derivable if $x \in FV(t)$: \begin{align*} R(t_1, \ldots t_k) & \sststile{}{V} t_i = t_i \axtag{a4} \\ t_1 = t_2 & \sststile{}{V} t_i = t_i \axtag{a4'} \\ t[t'/x]\!\downarrow & \sststile{}{V} t' = t' \axtag{a5} \end{align*} We will use the following abbreviations: \begin{align*} \varphi \sststile{}{V} t \cong s & \Longleftrightarrow \varphi \land t\!\downarrow\,\sststile{}{V} t = s \text{ and } \varphi \land s\!\downarrow\,\sststile{}{V} t = s \\ \varphi \ssststile{}{V} \psi & \Longleftrightarrow \varphi \sststile{}{V} \psi \text{ and } \psi \sststile{}{V} \varphi \end{align*} Let $\mathbb{T}$ be a partial Horn theory. A \emph{restricted term} of $\mathbb{T}$ is a term $t$ together with a formula $\varphi$. We denote such a restricted term by $t|_\varphi$. The $\mathcal{S}$-set of restricted terms with free variables in $V$ will be denoted by $RTerm_\mathbb{T}(V)$. 
If we think of terms as representations for partial functions, then we can think of a restricted term $t|_\varphi$ as a restriction of the partial function represented by $t$ to a subset of its domain. We will use the following abbreviations: \begin{align*} R(t_1|_{\varphi_1}, \ldots t_k|_{\varphi_k}) & \Longleftrightarrow R(t_1, \ldots t_k) \land \varphi_1 \land \ldots \land \varphi_k \\ t|_\varphi = s|_\psi & \Longleftrightarrow t = s \land \varphi \land \psi \\ t|_\varphi\!\downarrow & \Longleftrightarrow t\!\downarrow\!\land \varphi \\ \chi \sststile{}{V} t|_\varphi \cong s|_\psi & \Longleftrightarrow \chi \land t|_\varphi\!\downarrow\,\sststile{}{V} t = s \land \psi \text{ and } \chi \land s|_\psi\!\downarrow\,\sststile{}{V} t = s \land \varphi \end{align*} We will say that formulas $\varphi$ and $\psi$ are equivalent if the following sequents are derivable: \[ \varphi \ssststile{}{FV(\varphi) \cup FV(\psi)} \psi \] We will say that restricted terms $t$ and $t'$ are equivalent if the following sequents are derivable: \[ \sststile{}{FV(t) \cup FV(t')} t \cong t' \] Let $\mathbb{T}$ and $\mathbb{T}'$ be partial Horn theories with the same set of sorts $\mathcal{S}$. An \emph{interpretation} of $\mathbb{T}$ in $\mathbb{T}'$ is a function $f$ such that the following conditions hold: \begin{enumerate} \item For every function symbol $\sigma : s_1 \times \ldots \times s_k \to s$ of $\mathbb{T}$, the function $f$ defines a restricted term $f(\sigma)$ of $\mathbb{T}'$ of sort $s$ such that $FV(f(\sigma)) = \{ x_1 : s_1, \ldots x_k : s_k \}$. \item For every predicate symbol $P : s_1 \times \ldots \times s_k$, the function $f$ defines a formula $f(P)$ of $\mathbb{T}'$ such that $FV(f(P)) = \{ x_1 : s_1, \ldots x_k : s_k \}$. \item For every axiom $\varphi \sststile{}{V} \psi$ of $\mathbb{T}$, the sequent $f(\varphi) \sststile{}{V} f(\psi)$ is derivable in $\mathbb{T}'$. 
\end{enumerate} We will say that interpretations $f$ and $f'$ are equivalent if, for every predicate symbol $P : s_1 \times \ldots \times s_k$ of $\mathbb{T}$, the formulas $f(P)$ and $f'(P)$ are equivalent and, for every function symbol $\sigma : s_1 \times \ldots \times s_k \to s$ of $\mathbb{T}$, the terms $f(\sigma)$ and $f'(\sigma)$ are also equivalent. A \emph{morphism} of theories $\mathbb{T}$ and $\mathbb{T}'$ is an equivalence class of interpretations. The identity morphisms are defined in the obvious way. To define the composition of morphisms, we need to extend the definition of a function $f : \mathbb{T} \to \mathbb{T}'$ to terms and formulas. Let $t$ be a term of $\mathbb{T}$ of sort $s$. Then we define a restricted term $f(t)$ of $\mathbb{T}'$ by induction on $t$. If $t = x$ is a variable, then let $f(t) = x$. If $t = \sigma(t_1, \ldots t_k)$, $f(\sigma) = t'|_\varphi$ and $f(t_i) = t'_i|_{\varphi_i}$, then let $f(t) = t'[t'_1/x_1, \ldots t'_k/x_k]|_{\varphi[t'_1/x_1, \ldots t'_k/x_k] \land \varphi_1 \land \ldots \land \varphi_k}$. Let $\varphi$ be a formula of $\mathbb{T}$. Then we define a formula $f(\varphi)$ of $\mathbb{T}'$. If $\varphi$ equals $t_1 = t_2$ and $f(t_i)$ equals $t'_i|_{\varphi_i}$, then we define $f(\varphi)$ as $t'_1 = t'_2 \land \varphi_1 \land \varphi_2$. If $\varphi = R(t_1, \ldots t_k)$, $f(R) = \varphi'$ and $f(t_i) = t'_i|_{\varphi_i}$, then we define $f(\varphi)$ as $\varphi'[t'_1/x_1, \ldots t'_k/x_k] \land \varphi_1 \land \ldots \land \varphi_k$. For every restricted term $t|_\varphi$ of $\mathbb{T}$, we define $f(t|_\varphi)$ as $f(t)|_{f(\varphi)}$. Now, we can define the composition of $f : \mathbb{T} \to \mathbb{T}'$ and $g : \mathbb{T}' \to \mathbb{T}''$ as follows: $(g \circ f)(S) = g(f(S))$ for every symbol $S$ of $\mathbb{T}$. It is easy to see that this definition respects the equivalence of morphisms.
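As a standard illustration of these definitions (a classical example, not used in the rest of the paper), the theory of small categories is a partial Horn theory. Take $\mathcal{S} = \{ O, M \}$, function symbols $dom, cod : M \to O$, $id : O \to M$ and $comp : M \times M \to M$, no predicate symbols, and the axioms
\begin{align*}
& \sststile{}{f} dom(f)\!\downarrow \qquad \sststile{}{f} cod(f)\!\downarrow \qquad \sststile{}{x} id(x)\!\downarrow \\
comp(g,f)\!\downarrow & \ssststile{}{f,g} cod(f) = dom(g) \\
& \sststile{}{x} dom(id(x)) = x \land cod(id(x)) = x \\
cod(f) = dom(g) & \sststile{}{f,g} dom(comp(g,f)) = dom(f) \land cod(comp(g,f)) = cod(g) \\
& \sststile{}{f} comp(f, id(dom(f))) = f \land comp(id(cod(f)), f) = f \\
cod(f) = dom(g) \land cod(g) = dom(h) & \sststile{}{f,g,h} comp(h, comp(g,f)) = comp(comp(h,g), f)
\end{align*}
Models of this theory are small categories, morphisms of models are functors, and $comp$ is the prototypical example of an operation that is only partially defined.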
It is obvious that, for every morphism $f : \mathbb{T} \to \mathbb{T}'$ of theories, we have $f \circ id_\mathbb{T} = id_{\mathbb{T}'} \circ f = f$. Note that for all morphisms $f : \mathbb{T} \to \mathbb{T}'$ and $g : \mathbb{T}' \to \mathbb{T}''$ and every term $t$, the restricted terms $g(f(t))$ and $(g \circ f)(t)$ are equivalent. This is easily proved by induction on $t$. Similarly, for every formula $\varphi$ of $\mathbb{T}$, the formulas $g(f(\varphi))$ and $(g \circ f)(\varphi)$ are equivalent. It follows that the composition is associative. The category of partial Horn theories with $\mathcal{S}$ as the set of sorts will be denoted by $\cat{Th}_\mathcal{S}$. Its objects are tuples $(\mathcal{F},\mathcal{P},\mathcal{A})$, where $\mathcal{F}$ is a set of function symbols, $\mathcal{P}$ is a set of predicate symbols, and $\mathcal{A}$ is a set of axioms. \begin{prop}[th-cocomplete] The category $\cat{Th}_\mathcal{S}$ is cocomplete. \end{prop} \begin{proof} First, let $\{ \mathbb{T}_i \}_{i \in S} = \{ (\mathcal{F}_i,\mathcal{P}_i,\mathcal{A}_i) \}_{i \in S}$ be a set of theories. Then we can define its coproduct $\coprod\limits_{i \in S} \mathbb{T}_i$ as the theory $(\coprod\limits_{i \in S} \mathcal{F}_i, \coprod\limits_{i \in S} \mathcal{P}_i, \coprod\limits_{i \in S} \mathcal{A}_i)$. Morphisms $f_i : \mathbb{T}_i \to \coprod\limits_{i \in S} \mathbb{T}_i$ are defined in the obvious way. It is easy to see that the universal property of coproducts holds. Now, let $f,g : \mathbb{T}_1 \to \mathbb{T}_2$ be a pair of morphisms of theories.
Then we can define their coequalizer $\mathbb{T}$ as the theory with the same set of function and predicate symbols as $\mathbb{T}_2$ and the set of axioms which consists of the axioms of $\mathbb{T}_2$ together with $\sststile{}{x_1, \ldots x_n} f(\sigma(x_1, \ldots x_n)) \cong g(\sigma(x_1, \ldots x_n))$ for each function symbol $\sigma$ of $\mathbb{T}_1$ and $f(R(x_1, \ldots x_n)) \ssststile{}{x_1, \ldots x_n} g(R(x_1, \ldots x_n))$ for each predicate symbol $R$ of $\mathbb{T}_1$. Then we can define $e : \mathbb{T}_2 \to \mathbb{T}$ as the identity function on terms and formulas. By construction, we have $e \circ f = e \circ g$. If $h : \mathbb{T}_2 \to X$ is such that $h \circ f = h \circ g$, then it extends to a morphism $\mathbb{T} \to X$ since the additional axioms are preserved by the assumption on $h$. This extension is unique since $e$ is an epimorphism. \end{proof} \begin{prop}[func-mod] For every morphism of theories $f : \mathbb{T} \to \mathbb{T}'$, there is a faithful functor $f^* : \Mod{T'} \to \Mod{T}$ such that $id_\mathbb{T}^*$ is the identity functor and $(g \circ f)^* = f^* \circ g^*$. \end{prop} \begin{proof} If $M$ is a model of $\mathbb{T}'$, then $f^*(M)$ equals $M$ as an $\mathcal{S}$-set. For every symbol $S$ of $\mathbb{T}$, we define $f^*(M)(S)$ as $M(f(S))$. Then every morphism of models $M$ and $N$ of $\mathbb{T}'$ is also a morphism of $f^*(M)$ and $f^*(N)$. These definitions determine a faithful functor $f^* : \Mod{T'} \to \Mod{T}$. It is easy to see that these functors satisfy the required conditions. \end{proof} \section{Theory of substitutions} \label{sec:T1} In this section we define an example of a partial Horn theory, $\mathbb{S}$, which we call the theory of substitutions. We also prove that the category of models of this theory is equivalent to the category of contextual categories. We will use this theory later to define algebraic dependent type theories.
\subsection{Definition of $\mathbb{S}$} \label{sec:T1-def} Let $\mathcal{C} = \{ ctx, tm \} \times \mathbb{N}$ be the set of sorts. We will write $(ty,n)$ for $(ctx,n+1)$. Sort $(tm,n)$ represents terms in contexts of length $n$, sort $(ctx,n)$ represents contexts of length $n$, and sort $(ty,n)$ represents types in contexts of length $n$. There are two ways to define substitution: either to substitute the whole context (full substitution) or only a part of it (partial substitution). Using ordinary type theoretic syntax the full substitution can be described by the following inference rule: \begin{center} \AxiomC{$A_1, \ldots A_n \vdash A\ type$} \AxiomC{$\Gamma \vdash a_1 : A_1[]$ \quad \ldots \quad $\Gamma \vdash a_n : A_n[a_1, \ldots a_{n-1}]$} \BinaryInfC{$\Gamma \vdash A[a_1, \ldots a_n]\ type$} \DisplayProof \end{center} \medskip The partial substitution is described by the following inference rule: \begin{center} \AxiomC{$\Gamma, A_1, \ldots A_n \vdash A\ type$} \AxiomC{$\Gamma \vdash a_1 : A_1$ \quad \ldots \quad $\Gamma \vdash a_n : A_n[a_1, \ldots a_{n-1}]$} \BinaryInfC{$\Gamma \vdash A[a_1, \ldots a_n]\ type$} \DisplayProof \end{center} \medskip The partial substitution was used in \cite{b-systems}, but we will use the full version since it is stronger. To make these operations equivalent, we need to add another operation to the partial substitution, and even more axioms. Thus our approach seems to be somewhat more convenient. 
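To make the comparison concrete (our remark, in the informal syntax above): the partial substitution is an instance of the full one in which the context $\Gamma$ is substituted by its own variables. If $\Gamma = B_1, \ldots B_m$ with variables $x_1, \ldots x_m$, then for $\Gamma, A_1, \ldots A_n \vdash A\ type$ and $\Gamma \vdash a_i : A_i[a_1, \ldots a_{i-1}]$ the partial substitution can be written as the full substitution
\[ A[a_1, \ldots a_n] = A[x_1, \ldots x_m, a_1, \ldots a_n], \]
since $\Gamma \vdash x_i : B_i[x_1, \ldots x_{i-1}]$ holds automatically. Expressing the full substitution through the partial one, in contrast, requires first moving the type into the larger context, that is, a weakening operation, which is the kind of extra operation referred to above.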
The set of function symbols of $\mathbb{S}$ consists of the following symbols: \begin{align*} * & : (ctx,0) \\ ft_n & : (ty,n) \to (ctx,n) \\ ty_n & : (tm,n) \to (ty,n) \\ v_{n,i} & : (ctx,n) \to (tm,n) \text{, } 0 \leq i < n \\ subst_{p,n,k} & : (ctx,n) \times (p,k) \times (tm,n)^k \to (p,n) \text{, } p \in \{ tm, ty \} \end{align*} Let $ft^i_n : (ctx,n+i) \to (ctx,n)$ and $ctx_{p,n} : (p,n) \to (ctx,n)$ be the following derived operations: \begin{align*} ft^0_n(A) & = A \\ ft^{i+1}_n(A) & = ft^i_n(ft_{n+i}(A)) \\ ctx_{ty,n}(t) & = ft_n(t) \\ ctx_{tm,n}(t) & = ft_n(ty_n(t)) \end{align*} Auxiliary predicates $Hom_{n,k} : (ctx,n) \times (ctx,k) \times (tm,n)^k$ are defined as follows: $Hom_{n,k}(B, A, a_1, \ldots a_k)$ holds if and only if \[ ty_n(a_i) = subst_{ty,n,i-1}(B, ft^{k-i}_i(A), a_1, \ldots a_{i-1}) \text{ for each } 1 \leq i \leq k \] The idea is that a tuple of terms should represent a morphism in a contextual category. So $Hom_{n,k}(B, A, a_1, \ldots a_k)$ holds if and only if $(a_1, \ldots a_k)$ is a morphism with domain $B$ and codomain $A$. Note that if $Hom_{n,k}(B, A, a_1, \ldots a_k)$, then $ft_n(ty_n(a_i)) = B$. The set of axioms of $\mathbb{S}$ consists of the axioms asserting that $(ctx,0)$ is trivial and the axioms we list below.
The following axioms describe when functions are defined: \begin{align} \label{ax:def-var} & \sststile{}{A} v_{n,i}(A) \downarrow \\ \label{ax:def-subst} Hom_{n,k}(B, ctx_{p,k}(a), a_1, \ldots a_k) & \ssststile{}{B, a, a_i} subst_{p,n,k}(B, a, a_1, \ldots a_k) \downarrow \end{align} The following axioms describe the ``typing'' of the constructions we have: \begin{align} \label{ax:type-var} & \sststile{}{A} ty_n(v_{n,i}(A)) = subst_{ty,n,n-i-1}(A, ft^i_{n-i}(A), v_{n,n-1}(A), \ldots v_{n,i+1}(A)) \\ \label{ax:type-subst-ty} & Hom_{n,k}(B, ft_k(A), a_1, \ldots a_k) \sststile{}{B, A, a_i} ft_n(subst_{ty,n,k}(B, A, a_1, \ldots a_k)) = B \\ \label{ax:type-subst-tm} & \sststile{}{B, a, a_i} ty_n(subst_{tm,n,k}(B, a, a_1, \ldots a_k)) \cong subst_{ty,n,k}(B, ty_k(a), a_1, \ldots a_k) \end{align} The following axioms prescribe how $subst_{p,n,k}$ must be defined on indices ($v_{n,i}$): \begin{align} \label{ax:subst-var} & \sststile{}{a} subst_{p,n,n}(ctx_{p,n}(a), a, v_{n,n-1}(ctx_{p,n}(a)), \ldots v_{n,0}(ctx_{p,n}(a))) = a \\ \label{ax:var-subst} & Hom_{n,k}(B, A, a_1, \ldots a_k) \sststile{}{B, a_i, A} subst_{tm,n,k}(B, v_{k,i}(A), a_1, \ldots a_k) = a_{k-i} \end{align} The last axiom says that substitution must be ``associative'': \begin{align} \label{ax:subst-subst} & Hom_{n,k}(C, B, b_1, \ldots b_k) \land Hom_{k,m}(B, ctx_{p,m}(a), a_1, \ldots a_m) \sststile{}{C, b_i, B, a_i, a} \\ \notag & subst_{p,n,k}(C, subst_{p,k,m}(B, a, a_1, \ldots a_m), b_1, \ldots b_k) = \\ \notag & subst_{p,n,m}(C, a, subst_{tm,n,k}(C, a_1, b_1, \ldots b_k), \ldots subst_{tm,n,k}(C, a_m, b_1, \ldots b_k)) \end{align} \subsection{Models of $\mathbb{S}$} Here we show that the category of models of $\mathbb{S}$ is equivalent to the category of contextual categories. First, we construct a functor $F : \Mod{\mathbb{S}} \to \cat{CCat}$. Let $M$ be a model of $\mathbb{S}$. Then the set of objects of level $n$ of $F(M)$ is $M_{(ctx,n)}$.
For each $A \in M_{(ctx,n)}$, $B \in M_{(ctx,k)}$ morphisms from $A$ to $B$ are tuples $(a_1, \ldots a_k)$ such that $a_i \in M_{(tm,n)}$ and $Hom_{n,k}(A, B, a_1, \ldots a_k)$. For each $0 \leq i \leq n$ axiom~\eqref{ax:type-var} implies \[ \sststile{}{A} Hom_{n,n-i}(A, ft^i_{n-i}(A), v_{n,n-1}(A), \ldots v_{n,i}(A)). \] For each $A \in M_{(ctx,n)}$ we define $id_A : A \to A$ as tuple \[ (v_{n,n-1}(A), \ldots v_{n,0}(A)) \] and $p_A : A \to ft(A)$ as tuple \[ (v_{n,n-1}(A), \ldots v_{n,1}(A)). \] Now, we introduce some notation. If $B \in M_{(ctx,n)}$, $a \in M_{(p,k)}$, and $f = (a_1, \ldots a_k) : B \to ctx_{p,k}(a)$ is a morphism, then we define $a[f] \in M_{(p,n)}$ as $subst_{p,n,k}(B, a, a_1, \ldots a_k)$. By axiom \eqref{ax:def-subst} this construction is total. If $A \in M_{(ctx,n)}$, $B \in M_{(ctx,k)}$, $C \in M_{(ctx,m)}$, $f : A \to B$, and $(c_1, \ldots c_m) : B \to C$, then we define composition $(c_1, \ldots c_m) \circ f$ as $(c_1[f], \ldots c_m[f])$. The following sequence of equations shows that $(c_1, \ldots c_m) \circ f : A \to C$. \begin{align*} ty_n(c_i[f]) & = \text{(by axiom~\eqref{ax:type-subst-tm})} \\ ty_k(c_i)[f] & = \text{(since $Hom_{k,m}(c_1, \ldots c_m)$)} \\ ft^{m-i}_i(C)[c_1, \ldots c_{i-1}][f] & = \text{(by axiom~\eqref{ax:subst-subst})} \\ ft^{m-i}_i(C)[c_1[f], \ldots c_{i-1}[f]] & \end{align*} With these notations we can rewrite axioms \eqref{ax:type-subst-tm}, \eqref{ax:subst-var} and \eqref{ax:subst-subst} as follows: \begin{align*} ty_n(a[f]) & = A[f] \\ \text{ for each } f : B \to ft_k(A) & \text{, where } A = ty_k(a) \\ a[id_{ctx_{p,n}(a)}] & = a \\ a[g][f] & = a[g \circ f] \\ \text{ for each } f : C \to B \text{ and } & g : B \to ctx_{p,m}(a) \end{align*} Associativity of the composition follows from axiom~\eqref{ax:subst-subst}, and the fact that $id$ is identity for it follows from axioms \eqref{ax:subst-var} and \eqref{ax:var-subst}. 
For every $A \in M_{(ty,k)}$ there is a bijection $\varphi$ between the set of $a \in M_{(tm,k)}$ such that $ty_k(a) = A$ and the set of morphisms $f : ft_k(A) \to A$ such that $p_A \circ f = id_{ft_k(A)}$. For every such $a \in M_{(tm,k)}$ we define $\varphi(a)$ as \[ (v_{k,k-1}(ft_k(A)), \ldots v_{k,0}(ft_k(A)), a). \] Note that if $(a_1, \ldots a_{k+1}) : B \to A$ is a morphism, then axiom~\eqref{ax:var-subst} implies that $p_A \circ (a_1, \ldots a_{k+1})$ equals $(a_1, \ldots a_k)$. Thus $\varphi(a)$ is a section of $p_A$. Clearly, $\varphi$ is injective. Let $f : ft_k(A) \to A$ be a section of $p_A$; then the first $k$ components of $f$ must be the identity on $ft_k(A)$. So if $a$ is the last component of $f$, then $\varphi(a)$ equals $f$. Hence $\varphi$ is bijective. If $A \in M_{(ty,k)}$, $B \in M_{(ctx,n)}$, and $f = (a_1, \ldots a_k) : B \to ft_k(A)$, then we define $f^*(A)$ as $A[f] = subst_{ty,n,k}(B, A, a_1, \ldots a_k)$. The map $q(f,A)$ is defined as the tuple whose $i$-th component equals \[ \left\{ \begin{array}{lr} a_i[v_{n+1,n}(A[f]), \ldots v_{n+1,1}(A[f])] & \text{ if } 1 \leq i \leq k \\ v_{n+1,0}(A[f]) & \text{ if } i = k+1 \end{array} \right. \] Now we have the following commutative square: \[ \xymatrix{ A[f] \ar[r]^-{q(f,A)} \ar[d]_{p_{A[f]}} & A \ar[d]^{p_A} \\ B \ar[r]_-f & ft_k(A) } \] We need to prove that this square is Cartesian. By proposition~2.3 of \cite{c-systems} it is enough to construct a section $s_{f'} : B \to A[f]$ of $p_{A[f]}$ for each $f' = (a_1, \ldots a_k, a_{k+1}) : B \to A$ and prove a few properties of $s_{f'}$. We define $s_{f'}$ to be equal to $\varphi(a_{k+1})$. Axioms \eqref{ax:var-subst} and \eqref{ax:subst-subst} imply that $q(f,A) \circ s_{f'} = f'$. To complete the proof that the square above is Cartesian we need to prove, for every $g : ft_k(A) \to ft_m(C)$ with $A = C[g]$, that $s_{f'} = s_{q(g,C) \circ f'}$. The last component of $q(g,C) \circ f'$ equals $v_{k+1,0}(C[g])[f'] = a_{k+1}$.
Thus the last components of $q(g,C) \circ f'$ and $f'$ coincide, hence $s_{f'} = s_{q(g,C) \circ f'}$. It remains to prove that the operations $A[f]$ and $q(f,A)$ are functorial. Equations $A[id_{ft_k(A)}] = A$ and $A[f \circ g] = A[f][g]$ are precisely axioms \eqref{ax:subst-var} and \eqref{ax:subst-subst}. The fact that $q(id_{ft_k(A)}, A) = id_A$ follows from axiom~\eqref{ax:var-subst}. Now let $g : C \to B$ and $f : B \to ft_k(A)$ be morphisms; we need to show that $q(f \circ g, A) = q(f,A) \circ q(g,A[f])$. The last component of $q(f,A) \circ q(g,A[f])$ equals $v_{n+1,0}(A[f])[q(g,A[f])] = v_{m+1,0}(A[f][g])$, which equals the last component of $q(f \circ g, A)$, namely $v_{m+1,0}(A[f \circ g])$. If $1 \leq i \leq k$, then the $i$-th component of $q(f,A) \circ q(g,A[f])$ equals \[ a_i[v_{n+1,n}(A[f]), \ldots v_{n+1,1}(A[f])][q(g,A[f])] = a_i[b_1', \ldots b_n'] \] where $a_i$ is the $i$-th component of $f$, $b_i$ is the $i$-th component of $g$, and $b_i'$ equals $b_i[v_{m+1,m}(A[f][g]), \ldots v_{m+1,1}(A[f][g])]$. The $i$-th component of $q(f \circ g, A)$ equals \[ a_i[g][v_{m+1,m}(A[f \circ g]), \ldots v_{m+1,1}(A[f \circ g])] = a_i[b_1'', \ldots b_n''], \] where $b_i'' = b_i[v_{m+1,m}(A[f \circ g]), \ldots v_{m+1,1}(A[f \circ g])]$. Since $A[f \circ g] = A[f][g]$, we have $b_i' = b_i''$, and thus $q(f \circ g, A) = q(f,A) \circ q(g,A[f])$. This completes the construction of the contextual category $F(M)$. \begin{prop}[T1-CCat] $F$ is functorial, and the functor $F : \Mod{\mathbb{S}} \to \cat{CCat}$ is an equivalence of categories. \end{prop} \begin{proof} Given a map of $\mathbb{S}$ models $\alpha : M \to N$, we define a map of contextual categories $F(\alpha) : F(M) \to F(N)$. $F(\alpha)$ is already defined on objects. Let $f = (a_1, \ldots a_k) \in Hom_{n,k}(B,A)$. We define $F(\alpha)(f)$ as $(\alpha(a_1), \ldots \alpha(a_k)) \in Hom_{n,k}(\alpha(B), \alpha(A))$. $F(\alpha)$ preserves identity morphisms, compositions, $f^*(A)$, and $q(f,A)$ since all of these operations are defined in terms of $\mathbb{S}$ operations.
Clearly, $F$ preserves identity maps and compositions of maps of $\mathbb{S}$ models. Thus $F$ is a functor. First, note that if $a \in M_{(tm,k)}$ and $\alpha : M \to N$, then $F(\alpha)(\varphi(a)) = \varphi(\alpha(a))$. Indeed, consider the following sequence of equations: \begin{align*} F(\alpha)(\varphi(a)) & = \\ F(\alpha)(v_{k,k-1}(ctx_{tm,k}(a)), \ldots v_{k,0}(ctx_{tm,k}(a)), a) & = \\ (v_{k,k-1}(ctx_{tm,k}(\alpha(a))), \ldots v_{k,0}(ctx_{tm,k}(\alpha(a))), \alpha(a)) & = \\ \varphi(\alpha(a)) & . \end{align*} Now, we prove that $F$ is faithful. Let $\alpha,\beta : M \to N$ be a pair of maps of $\mathbb{S}$ models such that $F(\alpha) = F(\beta)$. Then $\alpha$ and $\beta$ coincide on contexts. Given $a \in M_{(tm,n)}$, we have the following equation: $\alpha(a) = \varphi^{-1}(F(\alpha)(\varphi(a))) = \varphi^{-1}(F(\beta)(\varphi(a))) = \beta(a)$. Now, we prove that $F$ is full. Let $\alpha : F(M) \to F(N)$ be a map of contextual categories. We need to define $\beta : M \to N$ such that $F(\beta) = \alpha$. If $A \in M_{(ctx,n)}$, then we let $\beta(A) = \alpha(A)$. Note that if $f : ft_n(A) \to A$ is a section of $p_A$, then $\alpha(f)$ is a section of $p_{\alpha(A)}$. If $a \in M_{(tm,n)}$, then we let $\beta(a) = \varphi^{-1}(\alpha(\varphi(a)))$. The maps $F(\beta)$ and $\alpha$ agree on contexts. We prove by induction on $k$ that they coincide on morphisms $f = (a_1, \ldots a_k) \in M(Hom_{n,k})(B,A)$. If $k = 0$, then $A$ is the terminal object, hence $F(\beta)(f) = \alpha(f)$. Suppose $k > 0$ and consider the following equation: $f = q((a_1, \ldots a_{k-1}), A) \circ \varphi(a_k)$. By the induction hypothesis we know that $F(\beta)(q((a_1, \ldots a_{k-1}), A)) = \alpha(q((a_1, \ldots a_{k-1}), A))$. Thus we only need to prove that $F(\beta)(\varphi(a_k)) = \alpha(\varphi(a_k))$. But $F(\beta)(\varphi(a_k)) = \varphi(\beta(a_k)) = \varphi(\varphi^{-1}(\alpha(\varphi(a_k)))) = \alpha(\varphi(a_k))$. Finally, we prove that $F$ is essentially surjective on objects.
Given a contextual category $C$, we define an $\mathbb{S}$ model $M$. Let $M_{(ctx,n)}$ be equal to $Ob_n(C)$ and let $M_{(tm,n)}$ be the set of pairs consisting of an object $A \in Ob_{n+1}(C)$ and a section of $p_A : A \to ft_n(A)$. Let $ty_n$ be the obvious projection. We will usually identify $a \in M_{(tm,n)}$ with the section $ctx_{tm,n}(a) \to ty_n(a)$. For each $n,k \in \mathbb{N}$ we define a partial function \[ subst_{ty,n,k} : M_{(ctx,n)} \times M_{(ty,k)} \times M_{(tm,n)}^k \to M_{(ty,n)} \] such that $ft_n(subst_{ty,n,k}(B, A, a_1, \ldots a_k)) = B$. We also define a morphism \[ q_{n,k} \in Hom_{n+1,k}(subst_{ty,n,k}(B, A, a_1, \ldots a_k), A) \] whenever $subst_{ty,n,k}(B, A, a_1, \ldots a_k)$ is defined. We define $subst_{ty,n,k}$ and $q_{n,k}$ by induction on $k$. Let $subst_{ty,n,0}(B,A) = !_B^*(A)$ and $q_{n,0} = q(!_B,A)$ where $!_B$ is the unique morphism from $B$ to the terminal object. \[ \xymatrix{ subst_{ty,n,0}(B,A) \ar[r]^-{q_{n,0}} \ar[d] \pb & A \ar[d]^{p_A} \\ B \ar[r]_{!_B} & 1 } \] Let $subst_{ty,n,k+1}(B, A, a_1, \ldots a_{k+1})$ be defined whenever $subst_{ty,n,k}(B, ft_k(A), \allowbreak a_1, \ldots a_k)$ is defined and $ty_n(a_{k+1}) = subst_{ty,n,k}(B, ft_k(A), a_1, \ldots a_k)$. In this case we let $subst_{ty,n,k+1}(B, A, a_1, \ldots a_{k+1}) = f^*(A)$ and $q_{n,k+1} = q(f,A)$ where $f$ is the composition of $a_{k+1}$ and $q_{n,k}$. \[ \xymatrix{ subst_{ty,n,k+1}(B, A, a_1, \ldots a_{k+1}) \ar[rr]^-{q_{n,k+1}} \ar[d] \pb & & A \ar[d]^{p_A} \\ B \ar[r]_-{a_{k+1}} & ty_n(a_{k+1}) \ar[r]_-{q_{n,k}} & ft_k(A) } \] It is easy to see by induction on $k$ that axiom~\eqref{ax:def-subst} holds. Axiom~\eqref{ax:type-subst-ty} holds by definition of $subst_{ty,n,k}$. The definition of the predicates $Hom_{n,k}$ now makes sense in $M$. Thus we can define as before the set $Hom^M_{n,k}(B,A)$ of morphisms in $M$ as the set of tuples $(a_1, \ldots a_k)$ such that $Hom_{n,k}(B, A, a_1, \ldots a_k)$ holds.
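For instance, unfolding the inductive definition at $k = 1$ gives
\[ subst_{ty,n,1}(B, A, a_1) = (q_{n,0} \circ a_1)^*(A), \qquad q_{n,1} = q(q_{n,0} \circ a_1, A), \]
so a single substitution is computed by pulling $A$ back along the composite of the section $a_1$ with the base-change morphism $q_{n,0}$.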
There is a bijection $\alpha : Hom^M_{n,k}(B,A) \to Hom_{n,k}(B,A)$ such that $subst_{ty,n,k}(B, A, a_1, \ldots a_k) = \alpha(a_1, \ldots a_k)^*(A)$ and $q_{n,k} = q(\alpha(a_1, \ldots a_k), A)$. We define $\alpha$ by induction on $k$. Both $Hom^M_{n,0}(B,A)$ and $Hom_{n,0}(B,A)$ are singletons, so there is a unique bijection between them. If $(a_1, \ldots a_k) \in Hom^M_{n,k}(B,ft_k(A))$, then there is a bijection between morphisms $f \in Hom_{n,k+1}(B,A)$ satisfying $p_A \circ f = \alpha(a_1, \ldots a_k)$ and sections of $p_{\alpha(a_1, \ldots a_k)^*(A)}$. By the induction hypothesis these sections are just sections of $p_{subst_{ty,n,k}(B, A, a_1, \ldots a_k)}$. This gives us a bijection between $Hom^M_{n,k+1}(B,A)$ and $Hom_{n,k+1}(B,A)$, namely $\alpha(a_1, \ldots a_{k+1}) = q(\alpha(a_1, \ldots a_k), A) \circ a_{k+1}$. Then the required equations hold by definition. Now, we define total functions $v_{n,i} : M_{(ctx,n)} \to M_{(tm,n)}$. Let $v_{n,i}(A)$ be equal to $(p^{i+1}(A)^*(ft^i_{n-i}(A)), s_{p^i_A})$. \[ \xymatrix{ p^{i+1}(A)^*(ft^i_{n-i}(A)) \ar[r] \ar[d] \pb & ft^i_{n-i}(A) \ar[d]^{p_{ft^i_{n-i}(A)}} \\ A \ar[r]_{p^{i+1}(A)} \ar@/^1pc/[u]^{s_{p^i_A}} \ar[ur]_{p^i_A} & ft^{i+1}_{n-i-1}(A) } \] Axiom~\eqref{ax:def-var} holds by definition. By induction on $n-i$ it is easy to see that $\alpha(v_{n,n-1}(A), \ldots v_{n,i}(A))$ equals $p_A^i : A \to ft^i_{n-i}(A)$. Axiom~\eqref{ax:type-var} follows from the following sequence of equations: \begin{align*} subst_{ty,n,n-i-1}(A, ft^i_{n-i}(A), v_{n,n-1}(A), \ldots v_{n,i+1}(A)) & = \\ \alpha(v_{n,n-1}(A), \ldots v_{n,i+1}(A))^*(ft^i_{n-i}(A)) & = \\ p^{i+1}(A)^*(ft^i_{n-i}(A)) & = \\ ty_n(v_{n,i}(A)) & . \end{align*} Axiom~\eqref{ax:subst-var} follows from the facts that $\alpha(v_{n,n-1}(ft_n(A)), \ldots v_{n,0}(ft_n(A))) = id_{ft_n(A)}$ and $id_{ft_n(A)}^*(A) = A$. Now, we define partial functions $subst_{tm,n,k} : M_{(ctx,n)} \times M_{(tm,k)} \times M_{(tm,n)}^k \to M_{(tm,n)}$.
The function $subst_{tm,n,k}(B, a, a_1, \ldots a_k)$ is defined whenever \[ Hom_{n,k}(B, ctx_{tm,k}(a), a_1, \ldots a_k) \] holds. In this case we let $subst_{tm,n,k}(B, a, a_1, \ldots a_k) = a[\alpha(a_1, \ldots a_k)]$ where $a[f] = s_{a \circ f}$. Axioms \eqref{ax:def-subst} and \eqref{ax:type-subst-tm} hold by definition. Axiom~\eqref{ax:subst-var} follows from the fact that $id_{ctx_{tm,n}(a)}^*(a) = a$. To prove axiom~\eqref{ax:var-subst} note that $p_A \circ \alpha(a_1, \ldots a_{k+1}) = \alpha(a_1, \ldots a_k)$ by definition of $\alpha$. Hence $p^i(A) \circ \alpha(a_1, \ldots a_k) = \alpha(a_1, \ldots a_{k-i})$. Also note that $s_{\alpha(a_1, \ldots a_k)} = a_k$. Now the axiom follows from the following equations: \begin{align*} subst_{tm,n,k}(B, v_{k,i}(A), a_1, \ldots a_k) & = \\ s_{v_{k,i}(A) \circ \alpha(a_1, \ldots a_k)} & = \\ s_{q(p^{i+1}(A), ft^i_{k-i}(A)) \circ v_{k,i}(A) \circ \alpha(a_1, \ldots a_k)} & = \\ s_{p^i(A) \circ \alpha(a_1, \ldots a_k)} & = \\ s_{\alpha(a_1, \ldots a_{k-i})} & = \\ a_{k-i} & . \end{align*} Now, we prove that $\alpha$ preserves compositions. To do this we need to show that $\alpha(a_1, \ldots a_k) \circ f = \alpha(a_1[f], \ldots a_k[f])$. We do this by induction on $k$. For $k = 0$ it is trivial and for $k > 0$ we have the following sequence of equations: \begin{align*} \alpha(a_1, \ldots a_k) \circ f & = \\ q(\alpha(a_1, \ldots a_{k-1}), A) \circ a_k \circ f & = \\ q(\alpha(a_1, \ldots a_{k-1}), A) \circ q(f, \alpha(a_1, \ldots a_{k-1})^*(A)) \circ a_k[f] & = \\ q(\alpha(a_1, \ldots a_{k-1}) \circ f, A) \circ a_k[f] & = \\ q(\alpha(a_1[f], \ldots a_{k-1}[f]), A) \circ a_k[f] & = \\ \alpha(a_1[f], \ldots a_k[f]) & . \end{align*} Now, axiom \eqref{ax:subst-subst} follows from the facts that $\alpha$ preserves compositions and $(f \circ g)^*(A) = f^*(g^*(A))$. This completes the construction of the $\mathbb{S}$ model $M$ from a contextual category $C$. To finish the proof we need to show that $F(M)$ is isomorphic to $C$.
The isomorphism is given by the bijections $\alpha$. We already saw that $\alpha$ preserves the structure of contextual categories. Thus $\alpha$ is a morphism of contextual categories, and it is easy to see that $\alpha^{-1}$ also preserves the structure. Hence $\alpha$ is an isomorphism and $F$ is an equivalence. \end{proof} Let $u : \mathbb{S} \to \mathbb{T}$ be an algebraic dependent type theory with substitution. Then it follows from \rprop{func-mod} and \rprop{T1-CCat} that models of $\mathbb{T}$ are contextual categories with additional structure, where $u^* : \Mod{\mathbb{T}} \to \Mod{\mathbb{S}}$ is the forgetful functor. \section{Algebraic dependent type theories} In this section we consider partial Horn theories with additional structure, which we call \emph{stable} theories. We also define the category $\cat{TT}$ of algebraic dependent type theories and give a few examples of such theories. \subsection{Stable theories} First, let us define prestable theories. For every set $\mathcal{S}_0$, we define the corresponding set $\mathcal{S}$ of sorts as $\mathcal{S}_0 \times \mathbb{N}$. We call elements of $\mathcal{S}_0$ \emph{basic sorts}. Suppose that $\mathcal{S}_0$ contains a distinguished basic sort $ctx$. Let $\mathbb{T}_{\mathcal{S}_0}$ be a theory with the following function symbols: \begin{align*} * &: (ctx,0) \\ ft_n & : (ctx,n+1) \to (ctx,n) \\ ctx_{p,n} & : (p,n) \to (ctx,n) \text{ for every } p \in \mathcal{S}_0 \end{align*} and the following axioms: \begin{align*} & \sststile{}{} *\!\downarrow \\ & \sststile{}{x} x = * \\ & \sststile{}{x} ctx_{ctx,n}(x) = x \end{align*} To define prestable theories, we need to introduce a few auxiliary constructions. First, we define a function $L : \mathcal{S} \to \mathcal{S}$ as follows: \begin{align*} L(p,n) & = (p,n+1) \text{ for every } p \in \mathcal{S}_0 \end{align*} For every set $\mathcal{F}$ of function symbols, we define another set $L(\mathcal{F})$ which consists of symbols $L(\sigma)$ for every $\sigma \in \mathcal{F}$.
If $\sigma : s_1 \times \ldots \times s_k \to s$, then $L(\sigma) : (ctx,1) \times L(s_1) \times \ldots \times L(s_k) \to L(s)$. For every set of variables $V$ we define a set $L(V)$ which contains a variable $x$ of sort $L(s)$ for every variable $x$ of sort $s$ in $V$. For all terms $\Gamma \in Term_{L(\mathcal{F})}(L(V))_{(ctx,1)}$ and $t \in Term_{\mathcal{F}}(V)_{(p,n)}$, we define a restricted term $L(\Gamma,t) \in RTerm_{L(\mathcal{F})}(L(V))_{(p,n+1)}$ as follows: \begin{align*} L(\Gamma, x) & = x|_{L(ctx_{p,n})(\Gamma, x) \downarrow} \\ L(\Gamma, \sigma(t_1, \ldots t_k)) & = L(\sigma)(\Gamma, L(\Gamma, t_1), \ldots L(\Gamma, t_k)) \end{align*} For every set $\mathcal{P}$ of relation symbols, we define a set $L(\mathcal{P})$ which consists of symbols $L(R) : (ctx,1) \times L(s_1) \times \ldots \times L(s_k)$ for every $R \in \mathcal{P}$, $R : s_1 \times \ldots \times s_k$. For every formula $\varphi \in Form_\mathcal{P}(V)$ and every term $\Gamma \in Term_{L(\mathcal{F})}(L(V))_{(ctx,1)}$, we define a formula $L(\Gamma, \varphi) \in Form_{L(\mathcal{P})}(L(V))$ as follows: \begin{align*} L(\Gamma, t_1 = t_2) & = (L(\Gamma, t_1) = L(\Gamma, t_2)) \\ L(\Gamma, R(t_1, \ldots t_k)) & = L(R)(\Gamma, L(\Gamma, t_1), \ldots L(\Gamma, t_k)) \end{align*} Now, let us define a functor $L : \mathbb{T}_{\mathcal{S}_0}/\cat{Th}_\mathcal{S} \to \mathbb{T}_{\mathcal{S}_0}/\cat{Th}_\mathcal{S}$.
Let $L((\mathcal{S}, \mathcal{F}, \mathcal{P}), \mathcal{A}) = ((\mathcal{S}, L(\mathcal{F}) \cup \mathcal{F}_{\mathcal{S}_0}, L(\mathcal{P})), \mathcal{A}' \cup \mathcal{A}_{\mathcal{S}_0})$, where $\mathcal{F}_{\mathcal{S}_0}$ and $\mathcal{A}_{\mathcal{S}_0}$ are the sets of function symbols and axioms of $\mathbb{T}_{\mathcal{S}_0}$, and $\mathcal{A}'$ consists of the following axioms: \[ ft^n(ctx_{p,n+1}(x)) = \Gamma \sststile{}{\Gamma,x} ctx_{p,n+1}(x) = L(ctx_{p,n})(\Gamma,x) \] for every $p \in \mathcal{S}_0$, \begin{align*} L(\sigma)(\Gamma, x_1, \ldots x_k)\!\downarrow & \sststile{}{\Gamma, x_1, \ldots x_k} ft^n(ctx_{p,n+1}(L(\sigma)(\Gamma, x_1, \ldots x_k))) = \Gamma \\ L(\sigma)(\Gamma, x_1, \ldots x_k)\!\downarrow & \sststile{}{\Gamma, x_1, \ldots x_k} ft^{n_i}(ctx_{p_i,n_i+1}(x_i)) = \Gamma \end{align*} for every $\sigma \in \mathcal{F}$, $\sigma : (p_1,n_1) \times \ldots \times (p_k,n_k) \to (p,n)$ and every $1 \leq i \leq k$, \[ L(R)(\Gamma, x_1, \ldots x_k) \sststile{}{\Gamma, x_1, \ldots x_k} ft^{n_i}(ctx_{p_i,n_i+1}(x_i)) = \Gamma \] for every $R \in \mathcal{P}$, $R : (p_1,n_1) \times \ldots \times (p_k,n_k)$ and every $1 \leq i \leq k$. If $f : \mathbb{T} \to \mathbb{T}'$, then let $L(f) : L(\mathbb{T}) \to L(\mathbb{T}')$ be defined as follows: \begin{align*} L(f)(L(\sigma)(\Gamma, x_1, \ldots x_k)) & = L(\Gamma, f(\sigma(x_1, \ldots x_k))) \\ L(f)(L(R)(\Gamma, x_1, \ldots x_k)) & = L(\Gamma, f(R(x_1, \ldots x_k))) \end{align*} It is easy to see that this defines a morphism of theories and that $L$ preserves identity morphisms and compositions. \begin{defn} A \emph{prestable (essentially) algebraic theory} is an algebra for the functor $L$, that is, a pair $(\mathbb{T},\alpha)$, where $\mathbb{T}$ is a theory under $\mathbb{T}_{\mathcal{S}_0}$ and $\alpha : L(\mathbb{T}) \to \mathbb{T}$. The category $\cat{PSt}_{\mathcal{S}_0}$ of prestable theories is the category of algebras for $L$.
\end{defn} \begin{defn} A prestable theory is called \emph{stable} if the following sequent is derivable for every axiom $\varphi \sststile{}{x_1 : (p_1,n_1), \ldots x_k : (p_k,n_k)} \psi$ in $\mathcal{A}$: \[ \alpha L(\Gamma,\varphi) \land \bigwedge_{1 \leq i \leq k} ft^{n_i}(ctx_{p_i,n_i+1}(x_i)) = \Gamma \sststile{}{\Gamma, x_1, \ldots x_k} \alpha L(\Gamma,\psi). \] The category of stable theories is denoted by $\cat{St}_{\mathcal{S}_0}$. Let $c$ be the prestable theory generated by a single constant $c : (ctx,1)$. Then a prestable theory under $c$ is called \emph{$c$-stable} if the following sequents are derivable: \begin{align*} \alpha L(\sigma)(\Gamma, x_1, \ldots x_k)\!\downarrow & \sststile{}{\Gamma, x_1, \ldots x_k} \Gamma = c \\ \alpha L(R)(\Gamma, x_1, \ldots x_k) & \sststile{}{\Gamma, x_1, \ldots x_k} \Gamma = c \\ \alpha L(c,\varphi) \land \bigwedge_{1 \leq i \leq k} ft^{n_i}(ctx_{p_i,n_i+1}(x_i)) = c & \sststile{}{x_1, \ldots x_k} \alpha L(c,\psi) \end{align*} for every function symbol $\sigma$, every predicate symbol $R$, and every axiom $\varphi \sststile{}{x_1, \ldots x_k} \psi$. The category of $c$-stable theories is denoted by $\cSt_{\mathcal{S}_0}$. \end{defn} The theory of substitutions is stable. Indeed, we can define a map $\alpha : L(\mathbb{S}) \to \mathbb{S}$ as follows: \begin{align*} \alpha(L(ty_n)(\Gamma,a)) & = ty_{n+1}(a)|_{ft^n(ctx_{tm,n+1}(a)) = \Gamma} \\ \alpha(L(v_{n,i})(\Gamma,\Delta)) & = v_{n+1,i}(\Delta)|_{ft^n(\Delta) = \Gamma} \end{align*} and $\alpha(L(subst_{p,n,k})(\Gamma, \Delta, B, a_1, \ldots a_k))$ is defined as \[ subst_{p,n+1,k+1}(\Delta, B, v_{n+1,n}(\Delta), a_1, \ldots a_k)|_{ft^n(\Delta) = \Gamma} \] The construction of colimits in \rprop{th-cocomplete} implies that $L$ preserves colimits. It follows that $\cat{PSt}_{\mathcal{S}_0}$ is cocomplete. The categories of stable and $c$-stable theories are closed in $\cat{PSt}_{\mathcal{S}_0}$ under colimits.
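As a sanity check of the sorts in the definition of $\alpha$ for $\mathbb{S}$ above: since $ty_n : (tm,n) \to (ty,n)$ and $L$ raises the index of every sort by one, the symbol $L(ty_n)$ has signature
\[ L(ty_n) : (ctx,1) \times (tm,n+1) \to (ty,n+1), \]
and $ty_{n+1}(a)|_{ft^n(ctx_{tm,n+1}(a)) = \Gamma}$ is indeed a restricted term of sort $(ty,n+1)$ in the variables $\Gamma : (ctx,1)$ and $a : (tm,n+1)$.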
\subsection{Contextual theories} The definition of prestable theories has the disadvantage that terms contain a lot of redundant information. For example, when we describe a term, we need to repeat several times the context in which it is defined. The following notion allows us to omit this redundant information, as we discuss below. \begin{defn} Let $\mathbb{T}_b$ be a prestable theory. A \emph{contextual theory under $\mathbb{T}_b$} is a prestable theory $\mathbb{T}$ such that the following conditions hold: \begin{enumerate} \item There exists a set of function symbols $\mathcal{F}_0$ (which we call \emph{basic function symbols}) such that the set of function symbols of $\mathbb{T}$ consists of function symbols of $\mathbb{T}_b$ together with symbols \[ \sigma_m : (ctx,m) \times (p_1,n_1+m) \times \ldots \times (p_k,n_k+m) \to (p,n+m) \] for every $\sigma : (p_1,n_1) \times \ldots \times (p_k,n_k) \to (p,n) \in \mathcal{F}_0$ and $m \in \mathbb{N}$. Moreover, if $\sigma : s_1 \times \ldots \times s_k \to s \in \mathcal{F}_0$, then $s \neq (ctx,0)$. \item There exists a set of predicate symbols $\mathcal{P}_0$ (which we call \emph{basic predicate symbols}) such that the set of predicate symbols of $\mathbb{T}$ consists of predicate symbols of $\mathbb{T}_b$ together with symbols \[ R_m : (ctx,m) \times (p_1,n_1+m) \times \ldots \times (p_k,n_k+m) \] for every $R : (p_1,n_1) \times \ldots \times (p_k,n_k) \in \mathcal{P}_0$ and $m \in \mathbb{N}$. \item Every axiom of $\mathbb{T}_b$ is an axiom of $\mathbb{T}$.
\item $\alpha_\mathbb{T} : L(\mathbb{T}) \to \mathbb{T}$ is defined as follows: \begin{align*} \alpha_\mathbb{T}(L(\sigma_m)(\Gamma, \Delta, x_1, \ldots x_k)) & = \sigma_{m+1}(\Delta, x_1, \ldots x_k)|_{ft^m(\Delta) = \Gamma} \\ \alpha_\mathbb{T}(L(R_m)(\Gamma, \Delta, x_1, \ldots x_k)) & = R_{m+1}(\Delta, x_1, \ldots x_k) \land ft^m(\Delta) = \Gamma \end{align*} and for every symbol of $\mathbb{T}_b$, it is defined in the same way as in $\mathbb{T}_b$. \end{enumerate} \end{defn} Since we can always infer the index $m$ for every function symbol $\sigma_m$ if we know its sort, we usually omit this index. To specify the omitted argument, we use the following syntax: $\Gamma \vdash t$, which stands for $\sigma(\Gamma, t_1, \ldots t_k)$ if $t = \sigma(t_1, \ldots t_k)$ and for $x|_{ctx(x) = \Gamma}$ if $t = x$. Of course, if some arguments are omitted in $\Gamma$, then we need to know its context too in order to infer them. Thus, we may write $A_1, \ldots A_n \vdash t$ which stands for $(\ldots ((* \vdash A_1) \vdash A_2) \ldots \vdash A_n) \vdash t$. We also use this notation in formulas: $\Gamma \vdash t \equiv t'$ stands for $(\Gamma \vdash t) = (\Gamma \vdash t')$ and $\Gamma \vdash R(t_1, \ldots t_k)$ stands for $R(\Gamma, (\Gamma \vdash t_1), \ldots (\Gamma \vdash t_k))$. Also, we use the standard notation: $\Gamma \vdash A\ type$ stands for $\Gamma \vdash A\!\downarrow$ if $A : (ty,n)$ and $\Gamma \vdash a : A$ stands for $ty(\Gamma \vdash a) = (\Gamma \vdash A)$. Sequents $\varphi_1 \land \ldots \land \varphi_n \sststile{}{V} \psi$ and $\varphi_1 \land \ldots \land \varphi_n \ssststile{}{V} \psi$ are written as \medskip \begin{center} \AxiomC{$\varphi_1$} \AxiomC{$\ldots$} \AxiomC{$\varphi_n$} \TrinaryInfC{$\psi$} \DisplayProof \qquad and \qquad \AxiomC{$\varphi_1$} \AxiomC{$\ldots$} \AxiomC{$\varphi_n$} \doubleLine \RightLabel{,} \TrinaryInfC{$\psi$} \DisplayProof \end{center} respectively.
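As an instance of the $\vdash$ notation above, if $A_1$, $A_2$, and $x$ are variables of suitable sorts, then $A_1, A_2 \vdash x$ stands for the restricted term
\[ x|_{ctx(x) = \Delta}, \qquad \text{where } \Delta = (* \vdash A_1) \vdash A_2 = A_2|_{ctx(A_2) = A_1|_{ctx(A_1) = *}}, \]
with the indices of the $ctx$ symbols omitted as described above.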
Finally, we use the following syntax: \[ \sigma(A^1_1, \ldots A^1_{n_1}.\ b_1, \ldots A^k_1, \ldots A^k_{n_k}.\ b_k) \] for a term of sort $(p,m+n)$ in a contextual theory, where $\sigma : (p_1,n_1) \times \ldots \times (p_k,n_k) \to (p,n)$, $b_i$ is a term of sort $(p_i,m+n_i)$, and $A^i_j$ is a term of sort $(ty,m+j-1)$. The expression $\Gamma \vdash \sigma(A^1_1, \ldots A^1_{n_1}.\ b_1, \ldots A^k_1, \ldots A^k_{n_k}.\ b_k)$ stands for \[ \sigma_m(\Gamma, (\Gamma, A^1_1, \ldots A^1_{n_1} \vdash b_1), \ldots (\Gamma, A^k_1, \ldots A^k_{n_k} \vdash b_k)). \] Of course, if some $b_i$ is a variable, then we can omit $A^i_1, \ldots A^i_{n_i}$. We can also omit this context if there is a theorem of the following form: \[ E \vdash \sigma_m(x_1, \ldots x_k)\!\downarrow\ \sststile{}{E, x_1, \ldots x_k} E \vdash ctx(x_i) \equiv \Delta \] for some $\Delta$ such that $x_i \notin FV(\Delta)$. Then $A^i_1, \ldots A^i_{n_i}$ must be equal to $((\Gamma \vdash ft^{n_i-1}(\Delta)), \ldots (\Gamma \vdash \Delta))[\rho]$, where $\rho(E) = \Gamma$ and $\rho(x_j) = (\Gamma, A^j_1, \ldots A^j_{n_j} \vdash b_j)$. The following lemma shows that we can always replace a prestable theory with a contextual one. \begin{lem}[stable-con] Let $\mathbb{T}_b$ be a prestable theory. Every prestable theory under $\mathbb{T}_b$ is isomorphic to a contextual theory under $\mathbb{T}_b$. \end{lem} \begin{proof} Let $\mathbb{T}$ be a prestable theory together with a map $f : \mathbb{T}_b \to \mathbb{T}$ with $\mathcal{F}_0$ and $\mathcal{P}_0$ as the sets of function and predicate symbols. First, note that we may assume that for every $\sigma : s_1 \times \ldots \times s_k \to s$ in $\mathcal{F}_0$, $s \neq (ctx,0)$. Indeed, we can always replace such a function symbol with a predicate symbol $R_\sigma : s_1 \times \ldots \times s_k$.
Second, note that for every term $t \in Term_{\mathcal{F}_0}(V)_{(p,n)}$ and every $m \in \mathbb{N}$, we can construct the following restricted term: \[ \alpha L(ft^{m-1}(\Gamma), \alpha L(ft^{m-2}(\Gamma), \ldots \alpha L(\Gamma, t))) \] in $RTerm_{\mathbb{T}}(L^m(V) \amalg \{ \Gamma : (ctx,m) \})_{(p,n+m)}$, which we denote by $\Gamma \times t$. Analogously, we can define for every formula $\varphi \in Form_{\mathbb{T}}(V)$ and every $m \in \mathbb{N}$, a formula $\Gamma \times \varphi \in Form_{\mathbb{T}}(L^m(V) \amalg \{ \Gamma : (ctx,m) \})$. Let $\mathbb{T}'$ be a contextual theory under $\mathbb{T}_b$ defined from the sets $\mathcal{F}_0$ and $\mathcal{P}_0$. Note that every term (formula, sequent) of $\mathbb{T}$ is naturally a term (formula, sequent) of $\mathbb{T}'$. The axioms of $\mathbb{T}'$ are the axioms of $\mathbb{T}$ together with the following axioms: \begin{align*} & \sststile{}{x_1, \ldots x_k} \tau_0(*, x_1, \ldots x_k) \cong f(\tau(x_1, \ldots x_k)) \\ P_0(*, x_1, \ldots x_k) & \ssststile{}{x_1, \ldots x_k} f(P(x_1, \ldots x_k)) \\ & \sststile{}{\Gamma, x_1, \ldots x_k} \Gamma \times \sigma_0(*, x_1, \ldots x_k) \cong \sigma_m(\Gamma, x_1, \ldots x_k) \\ \Gamma \times R_0(*, x_1, \ldots x_k) & \ssststile{}{\Gamma, x_1, \ldots x_k} R_m(\Gamma, x_1, \ldots x_k) \end{align*} for every function symbol $\tau$ and predicate symbol $P$ of $\mathbb{T}_b$ and every $\sigma \in \mathcal{F}_0$ and $R \in \mathcal{P}_0$. There is an obvious map $\mathbb{T} \to \mathbb{T}'$ and we can define a map $\mathbb{T}' \to \mathbb{T}$ which maps $\sigma_m(\Gamma, x_1, \ldots x_k)$ to $\Gamma \times \sigma_0(*, x_1, \ldots x_k)$, $R_m(\Gamma, x_1, \ldots x_k)$ to $\Gamma \times R_0(*, x_1, \ldots x_k)$, $\tau_0(\Gamma, x_1, \ldots x_k)$ to $f(\tau(x_1, \ldots x_k))|_{\Gamma\downarrow}$, and $P_0(\Gamma, x_1, \ldots x_k)$ to $f(P(x_1, \ldots x_k))|_{\Gamma\downarrow}$. The axioms guarantee that these maps are inverses of each other.
\end{proof} Contextual theories constructed in the previous lemma are not convenient in practice, but usually theories are defined in a contextual form. It is easy to define such a theory: we just need to specify the sets $\mathcal{F}_0$ and $\mathcal{P}_0$ and the set of axioms. It is also easy to define a morphism of contextual theories since we only need to define it on symbols from $\mathcal{F}_0$ and $\mathcal{P}_0$. Then it uniquely extends to a morphism of prestable theories. \subsection{Algebraic dependent type theories} Algebraic dependent type theories are prestable theories under $\mathbb{S}$ in which substitution commutes with all function symbols. To define such theories, we need to define weakening first. For every $p \in \{ty,tm\}$, the operations of weakening $wk^{m,l}_{p,n} : (ctx,n+m) \times (p,n+l) \to (p,n+m+l)$ are defined as follows: \begin{align*} wk^{m,0}_{p,n}(\Gamma,a) & = subst_{p,n+m,n}(\Gamma, a, v_{n+m-1}, \ldots v_m) \\ wk^{m,l+1}_{p,n}(\Gamma,a) & = subst_{p,n+m+l+1,n+l+1}(\Gamma', a, v_{n+m+l}, \ldots v_{m+l+1}, v_l, \ldots v_0), \end{align*} where $\Gamma' = wk^{m,l}_{ty,n}(\Gamma,ctx(a))$. We also define $wk^{m,l}_{ctx,n} : (ctx,n+m) \times (ctx,n+l) \to (ctx,n+m+l)$ as follows: \begin{align*} wk^{m,0}_{ctx,n}(\Gamma,a) & = \Gamma \\ wk^{m,l+1}_{ctx,n}(\Gamma,a) & = wk^{m,l}_{ty,n}(\Gamma,a). \end{align*} Now, we need to introduce a new derived operation. For every $m,n,k \in \mathbb{N}$ and $p \in \{ ctx, ty, tm \}$, we define the following function: \[ subst^m_{p,n,k} : (ctx,n) \times (p,k+m) \times (tm,n)^k \to (p,n+m). \] First, let $subst^0_{ctx,n,k}(B, A, a_1, \ldots a_k) = B$ and $subst^{m+1}_{ctx,n,k} = subst^m_{ty,n,k}$. If $p \in \{ ty, tm \}$, then let $subst^m_{p,n,k}(B, a, a_1, \ldots a_k)$ be equal to \[ subst_{p,n+m,k+m}(B', a, wk^{m,0}_{tm,n}(a_1), \ldots wk^{m,0}_{tm,n}(a_k), v_{m-1}, \ldots v_0), \] where $B' = subst^m_{ctx,n,k}(B, ctx_{k+m}(a), a_1, \ldots a_k)$.
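In a de Bruijn-indexed syntactic representation (a hypothetical illustration, not part of the theory itself), the base case $wk^{m,0}_{p,n}$ is a substitution along the variable tuple $(v_{n+m-1}, \ldots v_m)$, which simply shifts every variable index up by $m$:

```python
# Hypothetical binder-free de Bruijn terms: ('var', i) or (symbol, args...).
# A substitution tuple f = (a_1, ..., a_k) sends the variable v_i to a_{k-i}.

def subst(t, f):
    """Simultaneous substitution of the term t along f = (a_1, ..., a_k)."""
    k = len(f)
    if t[0] == 'var':
        return f[k - 1 - t[1]]
    return (t[0],) + tuple(subst(s, f) for s in t[1:])

def weaken(t, n, m):
    """wk^{m,0}: substitute a term over a context of length n along the
    variable tuple (v_{n+m-1}, ..., v_m)."""
    f = tuple(('var', i) for i in reversed(range(m, n + m)))
    return subst(t, f)

t = ('sigma', ('var', 0), ('var', 1))
assert weaken(t, 2, 3) == ('sigma', ('var', 3), ('var', 4))
```

Every index is shifted by $m$, matching the intended reading of $wk^{m,0}$ as weakening by $m$ fresh variables inserted at the bottom of the context.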
\begin{defn} A prestable theory under $\mathbb{S}$ is an \emph{algebraic dependent type theory} if, for every $\sigma \in \mathcal{F}$, $\sigma : (p_1,n_1) \times \ldots \times (p_k,n_k) \to (p,n)$ and every $R \in \mathcal{P}$, $R : (p_1,n_1) \times \ldots \times (p_k,n_k)$, the following sequents are derivable in it: \medskip \begin{center} \AxiomC{$\Delta \times \sigma(b_1, \ldots b_k) \downarrow$} \AxiomC{$\bigwedge_{1 \leq i \leq m} ty(a_i) = subst_{ty,l,i-1}(\Gamma, ft^{m-i}(\Delta), a_1, \ldots a_{i-1})$} \BinaryInfC{$subst^n_{p,l,m}(\Gamma, \Delta \times \sigma(b_1, \ldots b_k), a_1, \ldots a_m) = \Gamma \times \sigma(b_1', \ldots b_k')$} \DisplayProof \end{center} \medskip \begin{center} \AxiomC{$\Delta \times R(b_1, \ldots b_k)$} \AxiomC{$\bigwedge_{1 \leq i \leq m} ty(a_i) = subst_{ty,l,i-1}(\Gamma, ft^{m-i}(\Delta), a_1, \ldots a_{i-1})$} \BinaryInfC{$\Gamma \times R(b_1', \ldots b_k')$} \DisplayProof \end{center} \medskip where $b_i' = subst^{n_i}_{p_i,l,m}(\Gamma, b_i, a_1, \ldots a_m)$. The category of algebraic dependent type theories will be denoted by $\cat{TT}$. \end{defn} The construction of colimits in \rprop{th-cocomplete} implies that $\cat{TT}$ is closed under colimits in $\mathbb{S}/\cat{PSt}_\mathcal{C}$. The inclusion functor $\cat{TT} \to \mathbb{S}/\cat{PSt}_\mathcal{C}$ has a left adjoint $\mathbb{S}/\cat{PSt}_\mathcal{C} \to \cat{TT}$, which simply adds the required axioms. We can prove a stronger version of \rlem{stable-con} for algebraic dependent type theories: \begin{lem}[adtt-con] Every algebraic dependent type theory is isomorphic to a contextual theory in which every function symbol in $\mathcal{F}_0$ has a signature of the form \[ \sigma : s_1 \times \ldots \times s_k \to (p,0), \] where $p \in \{ ty,tm \}$. \end{lem} \begin{proof} Let $\mathbb{T}$ be an algebraic dependent type theory. By \rlem{stable-con}, we may assume that $\mathbb{T}$ is contextual. 
Then we define a theory $\mathbb{T}'$ which has the same predicate symbols as $\mathbb{T}$. For every $\sigma : (p_1,n_1) \times \ldots \times (p_k,n_k) \to (p,n)$ in $\mathcal{F}_0$, we add the following function symbol to $\mathbb{T}'$: \[ \sigma' : (p_1,n_1) \times \ldots \times (p_k,n_k) \times (tm,0)^n \to (p,0). \] Then we define $f(\sigma_0(\Gamma, x_1, \ldots x_k))$ as \[ \sigma'_n(\Gamma, wk^{n,n_1}_{p_1,0}(\Gamma, x_1), \ldots wk^{n,n_k}_{p_k,0}(\Gamma, x_k), v_{n-1}, \ldots v_0). \] For every predicate symbol $R$, we define $f(R(x_1, \ldots x_k))$ as $R(x_1, \ldots x_k)$. For every axiom $\varphi \sststile{}{V} \psi$ of $\mathbb{T}$, we add the axiom $f(\varphi) \sststile{}{V} f(\psi)$ to $\mathbb{T}'$. Then $f$ is a morphism of theories $f : \mathbb{T} \to \mathbb{T}'$. Moreover, there is a morphism $g : \mathbb{T}' \to \mathbb{T}$, which is defined as follows: \begin{align*} g(\sigma'_0(\Gamma, x_1, \ldots x_k, y_1, \ldots y_n)) & = subst^n_{p,0,0}(\Gamma, \sigma_0(\Gamma, x_1, \ldots x_k), y_1, \ldots y_n) \\ g(R(x_1, \ldots x_k)) & = R(x_1, \ldots x_k) \end{align*} The axioms of algebraic dependent type theories imply that $f$ and $g$ are inverses of each other. \end{proof} When we say that an algebraic dependent type theory is contextual (or presented in a contextual form), we assume that it has the form described in the previous lemma. If an algebraic dependent type theory is presented in a contextual form, then every term is equivalent to a term in which substitution operations are applied only to variables. We can, as usual, omit the first argument of $subst_{p,n,k}$. Also, if $X : (p,n+k)$ and $a_1, \ldots a_k : (tm,n)$, then we write $X[a_1, \ldots a_k]$ for \[ subst_{p,n,k}(X, v_{n-1}, \ldots v_0, a_1, \ldots a_k). \] One last problem is that we often need to apply weakening operations to variables. It is not convenient to do this explicitly, so we introduce named variables in our terms. Let $Var$ be some fixed countable set of variables.
To distinguish these variables from the ones that we used before, we will call the latter \emph{metavariables}. First, we assume that every metavariable $X$ of sort $(p,n)$ is equipped with a sequence of variables of length $n$, which we call the context of this metavariable. Usually, we do not specify the context of a metavariable explicitly since it can be inferred from formulas and terms in which this metavariable appears. Second, every binding should be annotated with a variable. In particular, instead of $A_1, \ldots A_n \vdash b$ we should write $x_1 : A_1, \ldots x_n : A_n \vdash b$ and instead of $\sigma(A^1_1, \ldots A^1_{n_1}.\ b_1, \ldots A^k_1, \ldots A^k_{n_k}.\ b_k)$ we should write \[ \sigma((x_1 : A^1_1), \ldots (x_{n_1} : A^1_{n_1}).\ b_1, \ldots (x_1 : A^k_1), \ldots (x_{n_k} : A^k_{n_k}).\ b_k) \] Now, we may use variables instead of de Bruijn indices. If a variable $x_i$ appears in a context $x_1, \ldots x_n$, then it is decoded into the expression $v_{n-i}$. Every metavariable should appear in a context where all variables from its context are available. Then a metavariable $X$ with context $x_1, \ldots x_n$ should be replaced with the expression $subst(X, x_1, \ldots x_n)$. We may also write $X[x_{i_1} \mapsto a_{i_1}, \ldots x_{i_k} \mapsto a_{i_k}]$, which is replaced with the expression $subst(X, a_1, \ldots a_n)$, where $a_j = x_j$ if $j \notin \{ i_1, \ldots i_k \}$. Finally, we may write $ft^i(X)$, which works like a metavariable with context $x_1, \ldots x_{n-i}$. \section{Examples} Now, let us describe a few examples of algebraic dependent type theories with substitution. If we take their stabilization, then we get theories corresponding to the usual constructions of type theory. Every theory is presented in contextual form. Also, to simplify the notation, we use the following convention.
For every sequent of the form $\varphi \sststile{}{} \Gamma \vdash A\ type$, there is also a sequent $\Gamma \vdash A\ type \sststile{}{} \varphi$ and, for every sequent of the form $\varphi \sststile{}{} \Gamma \vdash a : A$, there is also a sequent $\Gamma \vdash a\!\downarrow\ \sststile{}{} \varphi$. \begin{example} The theory of unit types with eta rules has function symbols $\top : (ty,0)$ and $unit : (tm,0)$ and the following axioms: \medskip \begin{center} \AxiomC{} \UnaryInfC{$\vdash \top\ type$} \DisplayProof \quad \AxiomC{} \UnaryInfC{$\vdash unit : \top$} \DisplayProof \quad \AxiomC{$\vdash t : \top$} \UnaryInfC{$\vdash t \equiv unit$} \DisplayProof \end{center} \end{example} \begin{example} The theory of unit types without eta rules has function symbols $\top : (ty,0)$, $unit : (tm,0)$ and $\top\text{-}elim : (ty,1) \times (tm,0) \times (tm,0) \to (tm,0)$. The axioms for $\top$ and $unit$ are the same, and the axioms for $\top\text{-}elim$ are \medskip \begin{center} \AxiomC{$x : \top \vdash D\ type$} \AxiomC{$\vdash d : D[x \mapsto unit]$} \AxiomC{$\vdash t : \top$} \TrinaryInfC{$\vdash \top\text{-}elim(x.\,D, d, t) : D[x \mapsto t]$} \DisplayProof \end{center} \medskip \begin{center} \AxiomC{$x : \top \vdash D\ type$} \AxiomC{$\vdash d : D[x \mapsto unit]$} \BinaryInfC{$\vdash \top\text{-}elim(x.\,D, d, unit) \equiv d$} \DisplayProof \end{center} \end{example} \begin{example}[sigma-eta] The theory of $\Sigma$ types with eta rules has function symbols \begin{align*} \Sigma & : (ty,1) \to (ty,0) \\ pair & : (ty,1) \times (tm,0) \times (tm,0) \to (tm,0) \\ proj_1 & : (ty,1) \times (tm,0) \to (tm,0) \\ proj_2 & : (ty,1) \times (tm,0) \to (tm,0) \end{align*} and the following axioms: \medskip \begin{center} \AxiomC{} \UnaryInfC{$\vdash \Sigma(x.\,B)\ type$} \DisplayProof \quad \AxiomC{$\vdash b : B[x \mapsto a]$} \UnaryInfC{$\vdash pair(x.\,B, a, b) : \Sigma(x.\,B)$} \DisplayProof \end{center} \medskip \begin{center} \AxiomC{$\vdash p : \Sigma(x.\,B)$} 
\UnaryInfC{$\vdash proj_1(x.\,B, p) : ft(B)$} \DisplayProof \quad \AxiomC{$\vdash p : \Sigma(x.\,B)$} \UnaryInfC{$\vdash proj_2(x.\,B, p) : B[x \mapsto proj_1(x.\,B, p)]$} \DisplayProof \end{center} \medskip \begin{center} \AxiomC{$\vdash b : B[x \mapsto a]$} \UnaryInfC{$\vdash proj_1(x.\,B, pair(x.\,B, a, b)) \equiv a$} \DisplayProof \qquad \AxiomC{$\vdash b : B[x \mapsto a]$} \UnaryInfC{$\vdash proj_2(x.\,B, pair(x.\,B, a, b)) \equiv b$} \DisplayProof \end{center} \medskip \begin{center} \AxiomC{$\vdash p : \Sigma(x.\,B)$} \UnaryInfC{$\vdash pair(x.\,B, proj_1(x.\,B, p), proj_2(x.\,B, p)) \equiv p$} \DisplayProof \end{center} \end{example} \begin{example}[sigma-no-eta] The theory of $\Sigma$ types without eta rules has the following function symbols: \begin{align*} \Sigma & : (ty,1) \to (ty,0) \\ pair & : (ty,1) \times (tm,0) \times (tm,0) \to (tm,0) \\ \Sigma\text{-}elim & : (ty,1) \times (tm,2) \times (tm,0) \to (tm,0) \end{align*} The axioms for $\Sigma$ and $pair$ are the same, and the axioms for $\Sigma\text{-}elim$ are \medskip \begin{center} \AxiomC{$z : \Sigma(x.\,B) \vdash D\ type$} \AxiomC{$x : ft(B), y : B \vdash d : D'$} \AxiomC{$\vdash p : \Sigma(x.\,B)$} \TrinaryInfC{$\vdash \Sigma\text{-}elim(z.\,D, x y.\,d, p) : D[z \mapsto p]$} \DisplayProof \end{center} \medskip \begin{center} \AxiomC{$z : \Sigma(x.\,B) \vdash D\ type$} \AxiomC{$x : ft(B), y : B \vdash d : D'$} \AxiomC{$\vdash b : B[x \mapsto a]$} \TrinaryInfC{$\vdash \Sigma\text{-}elim(z.\,D, x y.\,d, pair(x.\,B, a, b)) \equiv d[x \mapsto a, y \mapsto b]$} \DisplayProof \end{center} \medskip where $D' = D[z \mapsto pair(x.\,B, x, y)]$. \end{example} 
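To illustrate how the named-variable notation unfolds into the official syntax, consider the typing axiom for $\Sigma\text{-}elim$ (this unfolding is just an instance of the conventions fixed before the examples). The metavariable $D$ has context $z$, so $D[z \mapsto p]$ abbreviates $subst_{ty,0,1}(D, p)$, and $d$ has context $x, y$, so $d[x \mapsto a, y \mapsto b]$ abbreviates $subst_{tm,0,2}(d, a, b)$. In particular, the conclusion of the typing axiom is officially \[ \vdash \Sigma\text{-}elim(z.\,D, x y.\,d, p) : subst_{ty,0,1}(D, p). \]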
\begin{example}[pi-eta] The theory of $\Pi$ types with eta rules has function symbols \begin{align*} \Pi & : (ty,1) \to (ty,0) \\ \lambda & : (tm,1) \to (tm,0) \\ app & : (ty,1) \times (tm,0) \times (tm,0) \to (tm,0) \end{align*} and the following axioms: \medskip \begin{center} \AxiomC{} \UnaryInfC{$\vdash \Pi(x.\,B)\ type$} \DisplayProof \quad \AxiomC{} \UnaryInfC{$\vdash \lambda(x.\,b) : \Pi(x.\,ty(b))$} \DisplayProof \end{center} \medskip \begin{center} \AxiomC{$\vdash f : \Pi(x.\,B)$} \AxiomC{$\vdash a : ft(B)$} \BinaryInfC{$\vdash app(x.\,B, f, a) : B[x \mapsto a]$} \DisplayProof \end{center} \medskip \begin{center} \AxiomC{$\vdash a : ft(B)$} \UnaryInfC{$\vdash app(x.\,B, \lambda(x.\,b), a) \equiv b[x \mapsto a]$} \DisplayProof \quad \AxiomC{$\vdash f : \Pi(x.\,B)$} \UnaryInfC{$\vdash \lambda(y.\,app(x.\,B, f, y)) \equiv f$} \DisplayProof \end{center} \end{example} \begin{example}[Id] The theory of identity types has function symbols \begin{align*} Id & : (tm,0) \times (tm,0) \to (ty,0) \\ refl & : (tm,0) \to (tm,0) \\ J & : (ty,3) \times (tm,1) \times (tm,0) \times (tm,0) \times (tm,0) \to (tm,0) \end{align*} and the following inference rules: \medskip \begin{center} \AxiomC{$\vdash ty(a) \equiv ty(a')$} \UnaryInfC{$\vdash Id(a, a')\ type$} \DisplayProof \quad \AxiomC{} \UnaryInfC{$\vdash refl(a) : Id(a, a)$} \DisplayProof \end{center} \medskip \begin{center} \AxiomC{$x : A, y : A, z : Id(x, y) \vdash D\ type$} \AxiomC{$x : A \vdash d : D'$} \AxiomC{$\vdash p : Id(a, a')$} \TrinaryInfC{$\vdash J(x y z.\,D, x.\,d, a, a', p) : D[x \mapsto a, y \mapsto a', z \mapsto p]$} \DisplayProof \end{center} \medskip \begin{center} \AxiomC{$x : A, y : A, z : Id(x, y) \vdash D\ type$} \AxiomC{$x : A \vdash d : D'$} \BinaryInfC{$\vdash J(x y z.\,D, x.\,d, a, a, refl(a)) \equiv d[x \mapsto a]$} \DisplayProof \end{center} \medskip where $A = ty(a)$ and $D' = D[y \mapsto x, z \mapsto refl(x)]$. 
\end{example} \begin{example} We define an endofunctor $U$ on the category of algebraic dependent type theories. For every such theory $\mathbb{T}$, theory $U(\mathbb{T})$ has the same symbols as $\mathbb{T}$, but it also has a universe which is closed under all function symbols of $\mathbb{T}$. Let $\mathbb{T}$ be an algebraic dependent type theory in a contextual form. Then $U(\mathbb{T})$ has the same predicate symbols as $\mathbb{T}$ and the following function symbols: \begin{align*} U & : (ty,0) \\ El & : (tm,0) \to (ty,0) \\ \sigma & : s_1 \times \ldots \times s_k \to (p,0) \\ \sigma^U & : U(s_1) \times s_1 \times \ldots \times U(s_k) \times s_k \to (tm,0) \end{align*} for every function symbol $\sigma : s_1 \times \ldots \times s_k \to (p,0)$ of $\mathbb{T}$, where $U(p,n_i) = (tm,0) \times \ldots \times (tm,n_i)$. Theory $U(\mathbb{T})$ has the following axioms: \medskip \begin{center} \AxiomC{} \UnaryInfC{$\vdash U\ type$} \DisplayProof \qquad \AxiomC{$\vdash a : U$} \doubleLine \UnaryInfC{$\vdash El(a)\ type$} \DisplayProof \end{center} \medskip For every function symbol $\sigma : (p_1,n_1) \times \ldots \times (p_k,n_k) \to (p,0)$ of $\mathbb{T}$ and every $1 \leq i \leq k$, we add the following axioms to $U(\mathbb{T})$: \[ \vdash \sigma^U(t_1, \ldots, t_m)\!\downarrow\ \sststile{}{t_1, \ldots t_m}\ \vdash \sigma^U(t_1, \ldots t_m) : U \] \[ \sststile{}{V}\ \vdash El(\sigma^U(\ldots, a_1, \ldots a_{n_i+1}, b, \ldots)) \cong e_p(\sigma(\ldots, b|_{\varphi_i}, \ldots)), \] where $a_1, \ldots a_{n_i+1}, b$ are the variables that correspond to the $i$-th argument of $\sigma$, $e_{ty}(x) = x$, $e_{tm}(x) = ty(x)$, and $\varphi_i$ equals \[ \bigwedge_{1 \leq j \leq n_i+1} El(a_j) = ft^{n_i+1-j}(e_{p_i}(b)). \] To define the rest of the axioms of $U(\mathbb{T})$, we need to introduce a few auxiliary functions. For every set of variables $V$, we define a set $U(V)$ as follows: \[ V \amalg \{ x^j : (tm,n-j)\ |\ x : (p,n) \in V, 0 \leq j \leq n \}. 
\] Now, we define a function $U : Term_\mathbb{T}(V)_{(ty,n)} \to Term_{U(\mathbb{T})}(U(V))_{(tm,n)}$ as follows: \begin{align*} U(ft^j(e_p(x))) & = x^j \\ U(ft^{j+1}(e_p(\sigma_n(\Gamma, t_1, \ldots t_k)))) & = U(ft^j(\Gamma)) \\ U(e_p(\sigma_n(\Gamma, t_1, \ldots t_k))) & = \sigma^U_n(\Gamma, t_1', \ldots t_k'), \end{align*} where $t_i' = U(ft^{n_i}(e_{p_i}(t_i))), \ldots U(e_{p_i}(t_i)), t_i$. We add all axioms of $\mathbb{T}$ to $U(\mathbb{T})$ and, for every axiom $\varphi \sststile{}{V} \psi$ of $\mathbb{T}$, we add the following axiom: \[ U(\varphi) \land \bigwedge_{x \in V} \xi_x \sststile{}{U(V)} U(\psi), \] where $U(R(t_1, \ldots t_k))$ equals $R(t_1, \ldots t_k)$, $U(t_1 = t_2)$ equals $U(e_p(t_1)) = U(e_p(t_2)) \land t_1 = t_2$, and $\xi_x$ equals $(e_p(x) = El(x^0)) \land \bigwedge_{1 \leq j \leq n} ft(El(x^{j-1})) = El(x^j)$. Finally, let us show that $U$ is a functor. Let $f : \mathbb{T} \to \mathbb{T}'$ be a morphism of algebraic dependent type theories. Then $U(f)$ is defined in the obvious way on $U$, $El$ and symbols from $\mathbb{T}$. Define $U(f)(\sigma^U(\ldots, x^{n_i}_i, \ldots x^0_i, x_i, \ldots))$ as \[ U(e_p(f(\sigma(x_1, \ldots x_k))))|_{\bigwedge_{1 \leq i \leq k} \xi_{x_i}}. \] It is easy to see that $U(f)$ is a morphism of contextual theories and that $U$ preserves identity morphisms and compositions. Thus, $U$ is a functor. \end{example} \begin{example} There is a natural map $\mathbb{T} \to U(\mathbb{T})$. We define $U^\omega(\mathbb{T})$ as the colimit of the following sequence: \[ \mathbb{T} \to U(\mathbb{T}) \to U^2(\mathbb{T}) \to \ldots \] Then $U^\omega(\mathbb{T})$ is the theory with a hierarchy of universes closed under constructions of $\mathbb{T}$. \end{example} \bibliographystyle{amsplain}
\section{Introduction} Suppose that $\{X_{n}\}_{n\in\mathbb{N}}$ is a free, identically distributed sequence of bounded random variables with zero mean and unit variance. It is known from \cite{vo-sym} that the distributions $\mu_{n}$ of the central limit averages \[ \frac{X_{1}+\cdots+X_{n}}{\sqrt{n}} \] converge weakly to a standard semicircular distribution. In contrast with the classical central limit theorem, it was shown in \cite{BV-super-c} that the distribution $\mu_{n}$ is absolutely continuous relative to Lebesgue measure on $\mathbb{R}$ for sufficiently large $n$, and that the densities $d\mu_{n}/dt$ converge uniformly to $\sqrt{4-t^{2}}\,\chi_{[-2,2]}/2\pi$. This unexpected convergence of densities (along with the fact that the support $[a_{n},b_{n}]$ of $\mu_{n}$ converges to $[-2,2]$ and the density is analytic on $(a_{n},b_{n})$) was called \emph{superconvergence}. The uniform convergence of densities was later proved to hold even when the variables $X_{n}$ are not bounded \cite{JC-local}. The phenomenon of superconvergence was extended to other limit laws and applied to limit theorems for eigenvalue densities of random matrices (see, for instance, \cite{Ka-super,Bao-E-S}). Eventually, the present authors proved in \cite{BWZ-super+} that uniform convergence of densities holds in the general context of limit laws for triangular arrays with free, identically distributed rows. That is, suppose that $k_{1}<k_{2}<\cdots$ is a sequence of positive integers, and for each $n$ the variables $\{X_{n,j}:j=1,\dots,k_{n}\}$ are free and identically distributed. Suppose also that the distribution $\mu_{n}$ of \[ X_{n,1}+\cdots+X_{n,k_{n}} \] converges weakly to some nondegenerate distribution $\mu$. The measure $\mu$ is $\boxplus$-infinitely divisible \cite{BP-hincin} and it is absolutely continuous everywhere, except on a set $D_{\mu}$ that is either empty or a singleton \cite[Proposition 5.1]{BWZ-super+}. 
Let $V\supset D_{\mu}$ be an arbitrary open set in $\mathbb{R};$ $V$ can be taken to be empty if $D_{\mu}=\varnothing$. Then the result of \cite{BWZ-super+} states that $\mu_{n}$ is absolutely continuous on $\mathbb{R}\backslash V$ and the density of $\mu_{n}$ converges uniformly to the density of $\mu$ as $n\to\infty$. Of course, the results mentioned above can be formulated just as easily in terms of free \emph{additive }convolution of measures. One purpose of the present note is to prove completely analogous results for free \emph{multiplicative} convolution of probability measures on $\mathbb{R}_{+}=[0,+\infty)$ and on the unit circle $\mathbb{T}=\{e^{it}:t\in\mathbb{R}\}$. Our results here supersede those in \cite{AWZ}, since the uniform convergence of densities in \cite{AWZ} was only proved for compact intervals on which the limiting density is nonzero. The multiplicative results are not simply consequences of the additive ones. In fact, each of the three convolutions has its own analytic apparatus, and in each case a key point is that the respective Voiculescu transform of an infinitely divisible measure has an analytic extension to a certain domain $D$ (that depends on the type of convolution). In each case, the proof is done first for convolutions of infinitely divisible measures. The general case is then obtained via an approximation of infinitesimal measures by infinitely divisible ones, somewhat analogous to the \emph{associated laws} used in the classical treatment of limit laws for sums of independent random variables \cite{GK}. These infinitely divisible laws are obtained from the subordination properties that hold for free convolutions. The methods we develop for superconvergence are useful in other contexts as well. We illustrate this by extending results of Biane \cite{Bi-cusp} concerning the density of a free convolution of the form $\mu\boxplus\gamma$, where $\gamma$ is a semicircular distribution. 
Such a convolution is always absolutely continuous; its density $h$ is continuous and, in fact, locally analytic wherever it is positive. If $h(t)=0$ for some $t$ and $h(x)\ne0$ in some interval with an endpoint at $t$, it is shown in \cite{Bi-cusp} that $h(x)=O(|x-t|^{1/3})$ for $x$ close to $t$ in that interval. We show that this result holds if $\gamma$ is replaced by an arbitrary nondegenerate $\boxplus$-infinitely divisible distribution. Of course, in this general context, it may happen that $\mu\boxplus\gamma$ has a finite number of atoms and points at which the density is unbounded. The result holds for all other points where the density vanishes. Analogous results are also proved for the two multiplicative free convolutions. The remainder of this paper is organized as follows. Sections \ref{sec:Free-multiplicative-convolution}--\ref{sec:cusps-in R_+} deal with free multiplicative convolution on $\mathbb{R}_{+}$. The first of these sections presents preliminaries about this operation, including a new observation analogous to the Schwarz lemma; the next section demonstrates superconvergence, and the last one deals with the possible cusps of the free convolution with an infinitely divisible law. Sections \ref{sec:Free-mutiplicative-convolution on T}--\ref{sec:Cusp-behavior-in T} follow the same program for multiplicative free convolution on the unit circle $\mathbb{T}.$ Finally, Sections \ref{sec:Free-additive-convolution} and \ref{sec:Cusp-behavior-in R} deal with additive free convolution; there is no additive analog of Sections \ref{sec:Superconvergence-in pos line} and \ref{sec:Superconvergence-in T} because the corresponding result was already proved in \cite{BWZ-super+}. (The reader may however note that the arguments of \cite{BWZ-super+} can be simplified using the present methods.) Appendix A provides applications of the cusp results to measures in a free convolution semigroup. 
Finally, Appendix B provides examples that show that the cusp estimates are often sharp. \section{Free multiplicative convolution on $\mathbb{R}_{+}$\label{sec:Free-multiplicative-convolution}} We denote by $\mathcal{P}_{\mathbb{R}_{+}}$ the collection of all probability measures on $\mathbb{R}_{+}.$ The free multiplicative convolution $\boxtimes$ is a binary operation on $\mathcal{P}_{\mathbb{R}_{+}}$. The mechanics of its calculation involves analytic functions defined on the domains $\mathbb{C}\backslash\mathbb{R}_{+}$, \[ \mathbb{H}=\{x+iy:x,y\in\mathbb{R},y>0\}, \] and $-\mathbb{H}$. The first of these is the \emph{moment generating function} $\psi_{\mu}$ of a measure $\mu\in\mathcal{P}_{\mathbb{R}_{+}}$ defined by \[ \psi_{\mu}(z)=\int_{\mathbb{R}_{+}}\frac{tz}{1-tz}\,d\mu(t),\quad z\in\mathbb{C}\backslash\mathbb{R}_{+}. \] This function satisfies $\psi_{\mu}(\mathbb{H})\subset\mathbb{H}$ and $\psi_{\mu}((-\infty,0))\subset(-1,0)$ unless $\mu$ is the unit point mass at $0$, denoted $\delta_{0}$, for which $\psi_{\delta_{0}}=0$. A closely related function is the $\eta$-\emph{transform} of $\mu$ given by \[ \eta_{\mu}(z)=\frac{\psi_{\mu}(z)}{1+\psi_{\mu}(z)},\quad z\in\mathbb{C}\backslash\mathbb{R}_{+}. \] We have $\eta_{\mu}(\mathbb{H})\subset\mathbb{H}$ and $\eta_{\mu}((-\infty,0))\subset(-\infty,0)$ when $\mu\ne\delta_{0}$. These transforms are related to the \emph{Cauchy transform} defined by \[ G_{\mu}(z)=\int_{\mathbb{R}_{+}}\frac{d\mu(t)}{z-t},\quad z\in\mathbb{C}\backslash\mathbb{R}_{+}, \] by the identity \begin{equation} \frac{1}{z}G_{\mu}\left(\frac{1}{z}\right)=\frac{1}{1-\eta_{\mu}(z)},\quad z\in\mathbb{C}\backslash\mathbb{R}_{+}.\label{eq:G versus eta} \end{equation} The Stieltjes inversion formula shows that any of these functions can be used to recover the measure $\mu$. More precisely, the measures \[ -\frac{1}{\pi}(\Im G_{\mu}(x+iy))dx,\quad y>0, \] converge weakly to $\mu$ as $y\downarrow0$. 
The boundary values \[ G_{\mu}(x)=\lim_{y\downarrow0}G_{\mu}(x+iy),\quad x\in\mathbb{R}_{+}, \] exist almost everywhere (with respect to Lebesgue measure) on $\mathbb{R}_{+}$, and the density $d\mu/dt$ of $\mu$ is equal almost everywhere to $(-1/\pi)\Im G_{\mu}$ (cf. \cite{SteinWeiss}). In terms of the $\eta$-transform, the relation (\ref{eq:G versus eta}) shows that \begin{equation} \frac{1}{x}\frac{d\mu}{dt}\left(\frac{1}{x}\right)=\frac{1}{\pi}\Im\frac{1}{1-\eta_{\mu}(x)}\label{eq:density vs eta} \end{equation} almost everywhere on $\mathbb{R}_{+}$, where $\eta_{\mu}(x)$ is defined almost everywhere as \[ \eta_{\mu}(x)=\lim_{y\downarrow0}\eta_{\mu}(x+iy). \] The collection of functions $\{\eta_{\mu}:\mu\in\mathcal{P}_{\mathbb{R}_{+}}\backslash\{\delta_{0}\}\}$ is described as follows. \begin{lem} \label{lem:description of all etas on the line}\cite{B-B-IMRN} Let $f:\mathbb{C}\backslash\mathbb{R}_{+}\to\mathbb{C}$ be an analytic function. Then there exists $\mu\in\mathcal{P}_{\mathbb{R}_{+}}$ such that $f=\eta_{\mu}$ if and only if the following conditions are satisfied: \begin{enumerate} \item $f(\overline{z})=\overline{f(z)}$ for every $z\in\mathbb{C}\setminus\mathbb{R}_{+}$, \item $\lim_{x\uparrow0}f(x)=0$, and \item $\arg f(z)\ge\arg z$, $z\in\mathbb{H},$ where the arguments are in $(0,\pi)$. \end{enumerate} Equality occurs in \emph{(3)} for some $z$ precisely when $\mu=\delta_{a}$ for some $a>0$, in which case $f(z)=\eta_{\mu}(z)=az$. \end{lem} In fact, condition (3) above can be replaced by $f(\mathbb{H})\subset\mathbb{H}$, as can be seen from Lemma \ref{lem:Schwarz analog}, which we may view as an analog of the Schwarz lemma for analytic functions in the unit disk. (This version of Lemma \ref{lem:description of all etas on the line} is useful in Lemma \ref{lem:trade a free convolution for another}.) 
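The relation (\ref{eq:G versus eta}) is elementary to verify numerically for discrete measures. The following short script is an illustration only, not part of the argument; the measure $\mu=\frac{1}{2}(\delta_{1}+\delta_{3})$ and the test point are arbitrary choices.

```python
# Numerical illustration: check (1/z) G_mu(1/z) = 1 / (1 - eta_mu(z))
# for the discrete measure mu = (delta_1 + delta_3)/2.
atoms = [(0.5, 1.0), (0.5, 3.0)]  # pairs (weight, atom) of mu

def psi(z):
    """Moment generating function psi_mu(z) = int t z / (1 - t z) dmu(t)."""
    return sum(w * t * z / (1 - t * z) for w, t in atoms)

def eta(z):
    """eta-transform eta_mu = psi_mu / (1 + psi_mu)."""
    p = psi(z)
    return p / (1 + p)

def cauchy(z):
    """Cauchy transform G_mu(z) = int dmu(t) / (z - t)."""
    return sum(w / (z - t) for w, t in atoms)

z = -0.3 + 0.7j  # an arbitrary point of C \ R_+
assert abs(cauchy(1 / z) / z - 1 / (1 - eta(z))) < 1e-12
# eta_mu maps the upper half-plane H into itself:
assert eta(0.2 + 1.0j).imag > 0
```

The first assertion checks (\ref{eq:G versus eta}) at one point; the second checks the mapping property $\eta_{\mu}(\mathbb{H})\subset\mathbb{H}$ used throughout this section.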
\begin{lem} \label{lem:Structure of functions on omega}Suppose that $F:\mathbb{C}\backslash\mathbb{R}_{+}\to\mathbb{C}$ is analytic, $F(\mathbb{H})\subset\mathbb{H}$, $F((-\infty,0))\subset(-\infty,0)$, and \[ F(\overline{z})=\overline{F(z)},\qquad z\in\mathbb{C}\backslash\mathbb{R}_{+}. \] Then there exist constants $\alpha,\beta\in[0,+\infty)$ and a finite Borel measure $\rho$ on $(0,+\infty)$ such that $\int_{(0,+\infty)}d\rho(t)/t<+\infty$ and \[ F(z)=-\alpha+\beta z+\int_{(0,+\infty)}\frac{z(1+t^{2})}{t(t-z)}\,d\rho(t),\qquad z\in\mathbb{C}\backslash\mathbb{R}_{+}. \] \end{lem} \begin{proof} Since $F(\mathbb{H})\subset\mathbb{H}$, $F$ has a Nevanlinna representation of the form \[ F(z)=\alpha_{0}+\beta z+\int_{\mathbb{R}}\frac{1+zt}{t-z}\,d\rho(t),\qquad z\in\mathbb{H}, \] with $\alpha_{0}\in\mathbb{R},$ $\beta\in[0,+\infty)$, and a finite Borel measure $\rho$ on $\mathbb{R}$ (cf. \cite{Akhiezer}). Because $F$ is analytic and real-valued on $(-\infty,0)$, the measure $\rho$ is supported on $[0,+\infty).$ The formula \[ F'(z)=\beta+\int_{[0,+\infty)}\frac{1+t^{2}}{(t-z)^{2}}\,d\rho(t) \] shows that $F$ is increasing on $(-\infty,0)$. Now, $F((-\infty,0))\subset(-\infty,0)$, so $\lim_{z\uparrow0}F(z)\le0.$ The monotone convergence theorem yields now \[ \alpha_{0}+\int_{[0,+\infty)}\frac{1}{t}\,d\rho(t)=\lim_{z\uparrow0}F(z)\le0. \] In particular, $\rho(\{0\})=0$ and $\rho$ satisfies the condition in the statement. We set \[ \alpha=-\alpha_{0}-\int_{(0,+\infty)}\frac{1}{t}\,d\rho(t), \] and obtain the formula \[ F(z)=-\alpha+\beta z+\int_{(0,+\infty)}\left[\frac{1+zt}{t-z}-\frac{1}{t}\right]\,d\rho(t), \] valid in the entire region $\mathbb{C}\backslash\mathbb{R}_{+}$ by reflection. This is easily seen to be precisely the formula in the statement. \end{proof} Notation: $\Omega_{\alpha}=\{z\in\mathbb{C}\backslash\mathbb{R}_{+}:|\arg z|>\alpha\}$. Here $\alpha\in(0,\pi)$ and the argument takes values in $(-\pi,\pi)$. 
\begin{lem} \label{lem:Schwarz analog}Under the conditions of \textup{Lemma} \emph{\ref{lem:Structure of functions on omega}}, we have $F(\Omega_{\alpha})\subset\Omega_{\alpha}$ for every $\alpha\in(0,\pi)$. \end{lem} \begin{proof} It suffices to prove that $F(\Omega_{\alpha}\cap\mathbb{H})\subset\Omega_{\alpha}\cap\mathbb{H}$. Since $\Omega_{\alpha}\cap\mathbb{H}$ is a convex cone, the representation formula in Lemma \ref{lem:Structure of functions on omega} reduces the proof to the following three cases: \begin{enumerate} \item $F(z)=-1$, \item $F(z)=z$, \item $F(z)=z/(t-z)$ for some $t>0$. \end{enumerate} The result is trivial in the first two cases. In the third case one observes that $F$ maps $\Omega_{\alpha}\cap\mathbb{H}$ conformally onto a region $D_{\alpha}$ bounded by the interval $(-1,0)$ and by a circular arc $C$ joining $-1$ and $0$. Moreover, since $F'(0)>0$, the tangent to $C$ at $0$ is the line $\{\arg z=\alpha\}$. It follows immediately that $D_{\alpha}\cap\mathbb{H}\subset\Omega_{\alpha}\cap\mathbb{H}$. \end{proof} Mapping $\mathbb{C}\backslash\mathbb{R}_{+}$ conformally to a strip by the logarithm, we obtain another version of the Schwarz lemma as follows. We set $\mathcal{S}_{t}=\{z\in\mathbb{C}:|\Im z|<t\}$ for $t>0$. \begin{prop} Let $F:\mathcal{S}_{1}\to\mathcal{S}_{1}$ be an analytic function such that $F(\mathcal{S}_{1}\cap\mathbb{H})\subset\mathcal{S}_{1}\cap\mathbb{H}$ and \[ F(\overline{z})=\overline{F(z)},\qquad z\in\mathcal{S}_{1}. \] Then $F(\mathcal{S}_{t})\subset\mathcal{S}_{t}$ for every $t\in(0,1)$. \end{prop} Given a measure $\mu\ne\delta_{0}$ in $\mathcal{P}_{\mathbb{R}_{+}}$, the function $\eta_{\mu}$ is conformal in an open set $U$ containing some interval $(\beta,0)$ with $\beta<0$, and the restriction $\eta_{\mu}|U$ has an inverse $\eta_{\mu}^{\langle-1\rangle}$ defined in an open set containing an interval of the form $(\alpha,0)$ with $\alpha<0$. 
The free multiplicative convolution $\mu_{1}\boxtimes\mu_{2}$ of two measures $\mu_{1},\mu_{2}\in\mathcal{P}_{\mathbb{R}_{+}}\backslash\{\delta_{0}\}$ is the unique measure $\mu\in\mathcal{P}_{\mathbb{R}_{+}}\backslash\{\delta_{0}\}$ that satisfies the identity \begin{equation} z\eta_{\mu}^{\langle-1\rangle}(z)=\eta_{\mu_{1}}^{\langle-1\rangle}(z)\eta_{\mu_{2}}^{\langle-1\rangle}(z)\label{eq:defining boxtimes} \end{equation} for $z$ in some open set containing an interval $(\alpha,0)$ with $\alpha<0$ (see \cite{BV-unbounded}). (We also have $\delta_{0}\boxtimes\mu=\delta_{0}$ for every $\mu\in\mathcal{P}_{\mathbb{R}_{+}}.$) Based on the characterization of the $\eta$-transform, another approach to free convolution is given by the following reformulation of the subordination results in \cite{Bi-free inc}. \begin{thm} \label{thm:subordination on the line (mult)}For every $\mu_{1},\mu_{2}\in\mathcal{P}_{\mathbb{R}_{+}}\backslash\{\delta_{0}\},$ there exist unique $\rho_{1},\rho_{2}\in\mathcal{P}_{\mathbb{R}_{+}}\backslash\{\delta_{0}\}$ such that \[ \eta_{\mu_{1}}(\eta_{\rho_{1}}(z))=\eta_{\mu_{2}}(\eta_{\rho_{2}}(z))=\frac{\eta_{\rho_{1}}(z)\eta_{\rho_{2}}(z)}{z},\quad z\in\mathbb{C}\backslash\mathbb{R}_{+}. \] Moreover, we have $\eta_{\mu_{1}\boxtimes\mu_{2}}=\eta_{\mu_{1}}\circ\eta_{\rho_{1}}$. If $\mu_{1}$ and $\mu_{2}$ are nondegenerate \emph{(}that is, $\mu_{1}$ and $\mu_{2}$ are not point masses\emph{)}, then so are $\rho_{1}$ and $\rho_{2}$. \end{thm} We recall that a measure $\mu\in\mathcal{P}_{\mathbb{R}_{+}}$ is said to be $\boxtimes$-infinitely divisible if there exist measures $\{\mu_{n}\}_{n\in\mathbb{N}}\subset\mathcal{P}_{\mathbb{R}_{+}}$ satisfying the identities \[ \underbrace{\mu_{n}\boxtimes\cdots\boxtimes\mu_{n}}_{n\text{ times}}=\mu,\quad n\in\mathbb{N}. \] Obviously, $\delta_{0}$ is $\boxtimes$-infinitely divisible; one can take $\mu_{n}=\delta_{0}$. 
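For illustration (a simple check that is not used below), the defining relation (\ref{eq:defining boxtimes}) is easily evaluated on point masses. If $a,b>0$, then $\eta_{\delta_{a}}(z)=az$ by Lemma \ref{lem:description of all etas on the line}, so $\eta_{\delta_{a}}^{\langle-1\rangle}(z)=z/a$ and (\ref{eq:defining boxtimes}) becomes \[ z\eta_{\mu}^{\langle-1\rangle}(z)=\frac{z}{a}\cdot\frac{z}{b}, \] that is, $\eta_{\mu}^{\langle-1\rangle}(z)=z/(ab)$ and $\delta_{a}\boxtimes\delta_{b}=\delta_{ab}$. In particular, $\delta_{a}$ is the $n$-fold free multiplicative convolution of $\delta_{a^{1/n}}$ with itself, so every point mass $\delta_{a}$ with $a>0$ is $\boxtimes$-infinitely divisible as well.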
It was shown in \cite{vo-mul,BV-unbounded} that a measure $\mu\in\mathcal{P}_{\mathbb{R}_{+}}\backslash\{\delta_{0}\}$ is $\boxtimes$-infinitely divisible precisely when the inverse $\eta_{\mu}^{\langle-1\rangle}$ continues analytically to $\mathbb{C}\backslash\mathbb{R}_{+}$ and this analytic continuation has the special form \begin{equation} \Phi(z)=\gamma z\exp\left[\int_{[0,+\infty]}\frac{1+tz}{z-t}\,d\sigma(t)\right],\quad z\in\mathbb{C}\backslash\mathbb{R}_{+},\label{eq:extension of eta inverse (line)} \end{equation} for some $\gamma>0$ and some finite Borel measure $\sigma$ on the one point compactification of $\mathbb{R}_{+}$. The fraction in the above formula must be interpreted as $-z$ when $t=+\infty$. This is, of course, an analog of the classical L\'evy-Hin\v cin formula. The pair $(\gamma,\sigma)$ is uniquely determined by $\mu$, and every such pair corresponds with a unique $\boxtimes$-infinitely divisible measure, sometimes denoted $\nu_{\boxtimes}^{\gamma,\sigma}$. Another description of the class of functions defined by (\ref{eq:extension of eta inverse (line)}) is as follows: \[ \Phi(z)=z\exp(u(z)), \] where $u:\mathbb{C}\backslash\mathbb{R}_{+}\rightarrow\mathbb{C}$ is an analytic function such that $u(\mathbb{H})\subset-\mathbb{H}$ and $u(\overline{z})=\overline{u(z)}$ for all $z\in\mathbb{C}\setminus\mathbb{R}_{+}$. This equivalent description is used in Lemma \ref{lem:omega is inf-div}. Suppose now that $\mu\in\mathcal{P}_{\mathbb{R}_{+}}$ is a nondegenerate $\boxtimes$-infinitely divisible measure and that $\eta_{\mu}^{\langle-1\rangle}$ has the analytic continuation given in (\ref{eq:extension of eta inverse (line)}). The equation $\Phi(\eta_{\mu}(z))=z$ holds in some open set and therefore it holds on the entire $\mathbb{C}\backslash\mathbb{R}_{+}$ by analytic continuation. 
In particular, $\eta_{\mu}$ maps $\mathbb{C}\backslash\mathbb{R}_{+}$ conformally onto a domain $\Omega_{\mu}\subset\mathbb{C}\backslash\mathbb{R}_{+}$ that is symmetric relative to the real line. The domain $\Omega_{\mu}$ is easily identified as the connected component of the set $\{z\in\mathbb{C}\backslash\mathbb{R}_{+}:\Phi(z)\in\mathbb{C}\backslash\mathbb{R}_{+}\}$ containing $(-\infty,0)$. This set and its boundary were thoroughly investigated in \cite{huang-zhong,huang-wang}, and the results are important in the sequel. Because of the symmetry of $\Omega_{\mu}$, we consider only the upper half of $\Omega_{\mu}$, namely, $\Omega_{\mu}\cap\mathbb{H}$. A simple calculation shows that \begin{equation} \Phi(re^{i\theta})=\gamma\exp[u(re^{i\theta})+iv(re^{i\theta})],\label{eq:u and v} \end{equation} where the real and imaginary parts $u$ and $v$ are given by \begin{equation} u(re^{i\theta})=\log r+\int_{[0,+\infty]}\frac{(1-t^{2})r\cos\theta+t(r^{2}-1)}{|re^{i\theta}-t|^{2}}\,d\sigma(t),\label{eq:u(z) in polar terms} \end{equation} and \[ v(re^{i\theta})=\theta\left[1-\frac{r\sin\theta}{\theta}\int_{[0,+\infty]}\frac{1+t^{2}}{|re^{i\theta}-t|^{2}}\,d\sigma(t)\right], \] for $r>0$ and $\theta\in(0,\pi)$. As noted in \cite{huang-zhong,huang-wang}, a remarkable situation occurs: for fixed $r>0$, the function \begin{equation} I_{r}(\theta)=\frac{r\sin\theta}{\theta}\int_{[0,+\infty]}\frac{1+t^{2}}{|re^{i\theta}-t|^{2}}\,d\sigma(t),\quad\theta\in(0,\pi],\label{eq:I_r except 0} \end{equation} is continuous, strictly decreasing, and $I_{r}(\pi)=0$. Thus, the set $\{\theta\in(0,\pi):I_{r}(\theta)<1\}$ is an interval, say \begin{equation} \{\theta\in(0,\pi):I_{r}(\theta)<1\}=(f(r),\pi).\label{eq:def of f half-line} \end{equation} The value $f(r)$ is $0$ precisely when the limit \begin{equation} I_{r}(0)=\lim_{\theta\downarrow0}I_{r}(\theta)=r\int_{[0,+\infty]}\frac{1+t^{2}}{(r-t)^{2}}\,d\sigma(t)\label{eq:I_r at zero} \end{equation} is at most $1$. 
Otherwise, we have $I_{r}(f(r))=1$. The following statement summarizes results from \cite{huang-zhong,huang-wang}. \begin{thm} \label{thm:inversion for half-line} Let $\mu\in\mathcal{P}_{\mathbb{R}_{+}}$ be a nondegenerate $\boxtimes$-infinitely divisible measure, let $\Phi$ defined by \emph{(\ref{eq:extension of eta inverse (line)})} be the analytic continuation of $\eta_{\mu}^{\langle-1\rangle}$, let $I_{r}:[0,\pi]\to[0,+\infty]$ be defined by \emph{(\ref{eq:I_r except 0}) }and \emph{(\ref{eq:I_r at zero})}, and let $f:(0,+\infty)\to[0,\pi)$ be defined by \emph{(\ref{eq:def of f half-line})}. Then\emph{:} \begin{enumerate} \item $\eta_{\mu}$ maps $\mathbb{H}$ conformally onto \[ \Omega_{\mu}\cap\mathbb{H}=\{re^{i\theta}:r>0,\theta\in(f(r),\pi)\}. \] \item The function $f$ is continuous on $(0,+\infty)$ and continuously differentiable on the open set $\{r:f(r)>0\}$. \item The topological boundary of the set $\Omega_{\mu}\cap\mathbb{H}$ is $(-\infty,0]\cup\{re^{if(r)}:r>0\}$. \item $\eta_{\mu}$ extends continuously to the closure $\overline{\mathbb{H}}$, $\Phi$ extends continuously to the closure $\overline{\Omega_{\mu}\cap\mathbb{H}}$, and these extensions are homeomorphisms, inverse to each other. In particular, the function $h:(0,+\infty)\to(0,+\infty)$ defined by \[ h(r)=\Phi(re^{if(r)}),\quad r>0, \] is an increasing homeomorphism from $(0,+\infty)$ onto $(0,+\infty)$ and the image $\eta_{\mu}((0,+\infty))$ is parametrized implicitly as \begin{equation} \eta_{\mu}(h(r))=re^{if(r)},\quad r>0.\label{eq:param of eta_mu(pos line)} \end{equation} \end{enumerate} \end{thm} It is known that $\mu(\{0\})=0$ for every $\boxtimes$-infinitely divisible measure $\mu\in\mathcal{P}_{\mathbb{R}_{+}}\backslash\{\delta_{0}\}$. For such a measure $\mu,$ we can define a measure $\mu_{*}\in\mathcal{P}_{\mathbb{R}_{+}}\backslash\{\delta_{0}\}$ such that $d\mu_{*}(t)=d\mu(1/t)$. 
An easy calculation yields the identities \[ \psi_{\mu_{*}}(z)=-1-\psi_{\mu}(1/z),\quad\eta_{\mu_{*}}(z)=\frac{1}{\eta_{\mu}(1/z)},\quad z\in\mathbb{C}\backslash\mathbb{R}_{+}, \] and therefore \[ \eta_{\mu_{*}}^{\langle-1\rangle}(z)=\frac{1}{\eta_{\mu}^{\langle-1\rangle}(1/z)} \] for $z$ in some open set containing $(-\infty,0)$. It follows that $\eta_{\mu_{*}}^{\langle-1\rangle}$ has an analytic continuation to $\mathbb{C}\backslash\mathbb{R}_{+}$. In fact, if $\Phi$ is the continuation of $\eta_{\mu}^{\langle-1\rangle}$ given by (\ref{eq:extension of eta inverse (line)}), then the function \[ \Phi_{*}(z)=\frac{1}{\Phi(1/z)}=\frac{1}{\gamma}z\exp\left[\int_{[0,+\infty]}\frac{1+tz}{z-t}\,d\sigma_{*}(t)\right],\quad z\in\mathbb{C}\backslash\mathbb{R}_{+}, \] extends $\eta_{\mu_{*}}^{\langle-1\rangle}$, where $d\sigma_{*}(t)=d\sigma(1/t)$ with the convention that $1/0=+\infty$ and $1/+\infty=0$. Thus, $\mu_{*}$ is also $\boxtimes$-infinitely divisible, and the boundary of $\Omega_{\mu_{*}}\cap\mathbb{H}$ is described as above using a continuous function $f_{*}:(0,+\infty)\to[0,\pi)$. This function and the associated homeomorphism $h_{*}(r)=\Phi_{*}(re^{if_{*}(r)})$ are easily seen to satisfy the identities \[ f_{*}(r)=f(1/r),\quad h_{*}(r)=\frac{1}{h(1/r)},\quad r\in(0,+\infty). \] The following result gives estimates for the growth of $h$ at $0$ and $+\infty$. \begin{prop} \label{prop:endpoint estimates for h} Let $\mu,\Phi$, and $h$ be as in \textup{Theorem}\emph{ \ref{thm:inversion for half-line}}. Then \[ h(r)\le\gamma r\exp(\sigma([0,+\infty])+2),\quad r\in(0,1/4), \] and \[ h(r)\ge\gamma r\exp(-\sigma([0,+\infty])-2),\quad r\in(4,+\infty). \] In particular, $\lim_{r\downarrow0}h(r)=0$. \end{prop} \begin{proof} Suppose for the moment that the first inequality has been proved. 
Applying the result to the measure $\mu_{*}$, we see that \begin{align*} \frac{1}{h(1/r)}=h_{*}(r) & \le\frac{1}{\gamma}r\exp(\sigma_{*}([0,+\infty])+2)\\ & =\frac{1}{\gamma}r\exp(\sigma([0,+\infty])+2) \end{align*} for $r<1/4$. The second inequality follows after replacing $r$ by $1/r$. Fix now $r\in(0,1/4)$, and use relations (\ref{eq:u and v}), (\ref{eq:u(z) in polar terms}), and the fact that $r^{2}-1\le0$ to deduce the inequality \[ |\Phi(re^{i\theta})|\le\gamma r\exp\left[r\cos\theta\int_{[0,+\infty]}\frac{1-t^{2}}{|re^{i\theta}-t|^{2}}\,d\sigma(t)\right],\quad\theta\in(0,\pi). \] We distinguish two cases, according to whether $f(r)<\pi/2$ or $f(r)\ge\pi/2$. In the first case, we have $I_{r}(f(r))\le1$ and hence \begin{align*} \left|r\cos\theta\int_{[0,+\infty]}\frac{1-t^{2}}{|re^{i\theta}-t|^{2}}\,d\sigma(t)\right| & \le r\int_{[0,+\infty]}\frac{1+t^{2}}{|re^{i\theta}-t|^{2}}\,d\sigma(t)\\ & =\frac{\theta}{\sin\theta}I_{r}(\theta)\le\frac{\pi}{2}I_{r}(f(r))<2 \end{align*} for $\theta\in(f(r),\pi/2)$. It follows that $|h(r)|=\lim_{\theta\downarrow f(r)}|\Phi(re^{i\theta})|\le\gamma re^{2}$, verifying the first inequality in this case. In the second case, we observe, for $\theta=f(r)$, that \[ r\cos\theta\int_{[0,1]}\frac{1-t^{2}}{|re^{i\theta}-t|^{2}}\,d\sigma(t)\le0, \] and thus \begin{align*} r\cos\theta\int_{[0,+\infty]}\frac{1-t^{2}}{|re^{i\theta}-t|^{2}}\,d\sigma(t) & \le r\int_{[1,+\infty]}\frac{t^{2}-1}{|re^{i\theta}-t|^{2}}\,d\sigma(t)\\ & \le r\int_{[1,+\infty]}\frac{t^{2}-1}{|t-\frac{1}{2}|^{2}}\,d\sigma(t)\\ & \le r\int_{[1,+\infty]}2\,d\sigma(t)\le\sigma([0,+\infty]). \end{align*} This verifies the inequality in the second case and concludes the proof. \end{proof} The continuity of the function $\Phi$ on some parts of $\Omega_{\mu}$ can be established as follows. 
\begin{lem} \label{lem:continuity of u (half line)}Let $\mu\in\mathcal{P}_{\mathbb{R}_{+}}$ be a nondegenerate $\boxtimes$-infinitely divisible measure, and let $\Phi$ defined by \emph{(\ref{eq:extension of eta inverse (line)})} be the analytic continuation of $\eta_{\mu}^{\langle-1\rangle}$. Set \[ u(z)=\int_{[0,+\infty]}\frac{1+tz}{z-t}\,d\sigma(t),\quad z\in\mathbb{C}\backslash\mathbb{R}_{+}. \] Suppose that $z_{j}=r_{j}e^{i\theta_{j}}\in\Omega_{\mu}\cap\mathbb{H}$ and that $\theta_{j}\leq\pi/2$ for $j=1,2$. Then \[ |u(z_{1})-u(z_{2})|\le\frac{\pi}{2}\frac{|z_{1}-z_{2}|}{\sqrt{|z_{1}z_{2}|}}. \] \end{lem} \begin{proof} We have $\theta_{j}\in(f(r_{j}),\pi/2]$, $j=1,2$. In particular, \[ I_{r_{j}}(\theta_{j})\le I_{r_{j}}(f(r_{j}))\le1,\quad j=1,2. \] Then \begin{align*} |u(z_{1})-u(z_{2})| & =\left|(z_{1}-z_{2})\int_{[0,+\infty]}\frac{(1+t^{2})^{1/2}}{z_{1}-t}\frac{(1+t^{2})^{1/2}}{z_{2}-t}\,d\sigma(t)\right|\\ & \le|z_{1}-z_{2}|\left[\int_{[0,+\infty]}\frac{1+t^{2}}{|z_{1}-t|^{2}}\,d\sigma(t)\right]^{1/2}\left[\int_{[0,+\infty]}\frac{1+t^{2}}{|z_{2}-t|^{2}}\,d\sigma(t)\right]^{1/2}\\ & =|z_{1}-z_{2}|\left[\frac{\theta_{1}}{r_{1}\sin\theta_{1}}I_{r_{1}}(\theta_{1})\right]^{1/2}\left[\frac{\theta_{2}}{r_{2}\sin\theta_{2}}I_{r_{2}}(\theta_{2})\right]^{1/2}\le\frac{\pi}{2}\frac{|z_{1}-z_{2}|}{\sqrt{r_{1}r_{2}}}, \end{align*} where we used the Schwarz inequality. \end{proof} We conclude this section with a few known facts about convolution powers. Given a measure $\nu\in\mathcal{P}_{\mathbb{R}_{+}}\backslash\{\delta_{0}\}$ and $k\in\mathbb{N},$ we use the notation \[ \nu^{\boxtimes k}=\underbrace{\nu\boxtimes\cdots\boxtimes\nu}_{k\text{ times}} \] for the free multiplicative convolution of $k$ copies of $\nu$. By Theorem \ref{thm:subordination on the line (mult)}, there exists a measure $\mu\in\mathcal{P}_{\mathbb{R}_{+}}$ such that $\eta_{\nu^{\boxtimes k}}=\eta_{\nu}\circ\eta_{\mu}$. 
It is shown in \cite{B-B-IMRN} that \[ \Phi(\eta_{\mu}(z))=z,\quad z\in\mathbb{C}\backslash\mathbb{R}_{+}, \] where \[ \Phi(z)=\frac{z^{k}}{\eta_{\nu}(z)^{k-1}},\quad z\in\mathbb{C}\backslash\mathbb{R}_{+}, \] is easily seen to have the form (\ref{eq:extension of eta inverse (line)}). As seen earlier, this means that $\mu$ is in fact $\boxtimes$-infinitely divisible, and therefore $\eta_{\mu}$ has a continuous (and injective) extension to $(0,+\infty)$. The relation between $\eta_{\mu}$ and $\nu^{\boxtimes k}$ can also be written as \begin{equation} \eta_{\mu}(z)^{k}=z\eta_{\nu^{\boxtimes k}}(z)^{k-1},\quad z\in\mathbb{C}\backslash\mathbb{R}_{+}.\label{eq:eta of subordination, halflin} \end{equation} As observed in \cite{B-B-IMRN, Ch-Gotze}, this equality implies that $\eta_{\nu^{\boxtimes k}}$ also has a continuous extension to $(0,+\infty)$ and (\ref{eq:eta of subordination, halflin}) remains true for real values of $z$. We use this identity below in the equivalent form \[ x\eta_{\mu}(1/x)^{k}=\eta_{\nu^{\boxtimes k}}(1/x)^{k-1},\quad x\in(0,+\infty). \] This is of interest because it allows us to calculate the density $d\nu^{\boxtimes k}/dt$ in terms of the density of $\mu$. Indeed, rewriting the above identity as \[ \eta_{\mu}(1/x)\left[x\eta_{\mu}(1/x)\right]^{1/(k-1)}=\eta_{\nu^{\boxtimes k}}(1/x),\quad x\in(0,+\infty), \] one may be able to argue (as we do in Section \ref{sec:Superconvergence-in pos line}) that $\eta_{\nu^{\boxtimes k}}(1/x)$ is very close to $\eta_{\mu}(1/x)$ if $k$ is large, and then (\ref{eq:density vs eta}) allows us to conclude that these two measures have close densities. \section{Superconvergence in $\mathcal{P}_{\mathbb{R}_{+}}$\label{sec:Superconvergence-in pos line}} We begin by studying the weak convergence of a sequence of nondegenerate $\boxtimes$-infinitely divisible measures in $\mathcal{P}_{\mathbb{R}_{+}}$.
Thus, suppose that $\gamma$ and $\{\gamma_{n}\}_{n\in\mathbb{N}}$ are positive numbers, $\sigma$ and $\{\sigma_{n}\}_{n\in\mathbb{N}}$ are finite, nonzero Borel measures on $[0,+\infty]$, $\mu$ and $\{\mu_{n}\}_{n\in\mathbb{N}}$ are nondegenerate $\boxtimes$-infinitely divisible measures in $\mathcal{P}_{\mathbb{R}_{+}}$, and the inverses $\eta_{\mu}^{\langle-1\rangle},\{\eta_{\mu_{n}}^{\langle-1\rangle}\}_{n\in\mathbb{N}}$ have analytic continuations $\Phi,\{\Phi_{n}\}_{n\in\mathbb{N}}$ given by (\ref{eq:extension of eta inverse (line)}) for $\mu$ and by analogous formulas for $\mu_{n}$ (with $\gamma_{n}$ and $\sigma_{n}$ in place of $\gamma$ and $\sigma$). The sequence $\{\mu_{n}\}_{n\in\mathbb{N}}$ converges weakly to $\mu$ if and only if $\{\sigma_{n}\}_{n\in\mathbb{N}}$ converges weakly to $\sigma$ and $\lim_{n\to\infty}\gamma_{n}=\gamma$. (This fact is implicit in the proof of Theorem 4.3 in \cite{BPata-mult-laws}.) When these conditions are satisfied, it is also true that the sequences $\{\eta_{\mu_{n}}\}_{n\in\mathbb{N}}$ and $\{\Phi_{n}\}_{n\in\mathbb{N}}$ converge to $\eta_{\mu}$ and $\Phi$, respectively, and the convergence is uniform on compact subsets of $\mathbb{C}\backslash\mathbb{R}_{+}$. In order to show that superconvergence occurs, we need to understand the behavior of the functions $f$ and $h$ defined in Section \ref{sec:Free-multiplicative-convolution} in relation to $\mu$ and that of the functions $f_{n}$ and $h_{n}$ associated to $\mu_{n}$. By Proposition \ref{prop:endpoint estimates for h}, $h_{n}$ and $h$ extend continuously to $\mathbb{R}_{+}$; we set $h(0)=h_{n}(0)=0$. \begin{lem} \label{lem:f_n tends to f etc}With the above notation, suppose that the sequence $\{\mu_{n}\}_{n\in\mathbb{N}}$ converges weakly to $\mu$. Then\emph{:} \begin{enumerate} \item The sequence $\{f_{n}\}_{n\in\mathbb{N}}$ converges to $f$ uniformly on compact subsets of $(0,+\infty)$.
\item The sequence $\{h_{n}\}_{n\in\mathbb{N}}$ converges to $h$ uniformly on compact subsets of $\mathbb{R}_{+}.$ \item The sequence of inverses $\{h_{n}^{\langle-1\rangle}\}_{n\in\mathbb{N}}$ converges to $h^{\langle-1\rangle}$ uniformly on compact subsets of $(0,+\infty)$. \item The sequence $\{f_{n}\circ h_{n}^{\langle-1\rangle}\}_{n\in\mathbb{N}}$ converges to $f\circ h^{\langle-1\rangle}$ uniformly on compact subsets of $(0,+\infty)$. \item The sequence $\{\eta_{\mu_{n}}\}_{n\in\mathbb{N}}$ converges to $\eta_{\mu}$ uniformly on compact subsets of $(0,+\infty)$. \end{enumerate} \end{lem} \begin{proof} Fix $r>0$, let $\varepsilon\in(0,\pi-f(r))$, and let $J=[r-\delta,r+\delta]$ be such that $|f(s)-f(r)|<\varepsilon$ for every $s\in J$. Observe that the compact set \[ C=\{se^{i\theta}:\theta\in[f(r)+\varepsilon,\pi],s\in J\} \] has the property that $\Phi(C)\subset\mathbb{H}$. Since $\Phi_{n}$ converges to $\Phi$ uniformly on $C$, it follows that $\Phi_{n}(C)\subset\mathbb{H}$ for sufficiently large $n$, and thus $f_{n}(s)<f(r)+\varepsilon<f(s)+2\varepsilon$, $s\in J$, for such $n$. This proves (1) in the case $f(r)=0$. If $f(r)>0$, there exists a positive angle $\theta_{0}\in(f(r)-\varepsilon,f(r))$ such that $\Phi(re^{i\theta_{0}})\in-\mathbb{H}.$ Shrink $\delta$ so that $\Phi(se^{i\theta_{0}})\in-\mathbb{H}$ for every $s\in J$. It follows from uniform convergence that $\Phi_{n}(se^{i\theta_{0}})\in-\mathbb{H},\ s\in J$, for sufficiently large $n$, and thus $f_{n}(s)>\theta_{0}>f(r)-\varepsilon>f(s)-2\varepsilon,\ s\in J$, which completes the proof of (1). For (2) and (3), it suffices to prove pointwise convergence because pointwise convergence of continuous increasing functions to a continuous limit is automatically locally uniform. Since convergence obviously holds at $0$, fix $r>0$. Suppose first that $f(r)>0$.
In this case, $se^{if(s)}\in\mathbb{H}$ for $s$ in some compact neighborhood of $r$, and hence $\Phi_{n}$ converges uniformly to $\Phi$ in a neighborhood of $re^{if(r)}$. By (1), $\lim_{n\to\infty}f_{n}(r)=f(r)$, and the local uniform convergence of $\Phi_{n}$ yields \[ h(r)=\Phi(re^{if(r)})=\lim_{n\to\infty}\Phi_{n}(re^{if_{n}(r)})=\lim_{n\to\infty}h_{n}(r), \] thus proving (2) in this case. Suppose now that $f(r)=0$, and thus $\lim_{n\to\infty}f_{n}(r)=0.$ Assume, for simplicity, that $f_{n}(r)<1$ for every $n\in\mathbb{N},$ and define functions $\Psi_{n}:(0,\pi/2-1]\to\mathbb{C}$ by setting \[ \Psi_{n}(\theta)=\Phi_{n}(re^{i(\theta+f_{n}(r))}),\qquad0<\theta\leq\frac{\pi}{2}-1,\;n\in\mathbb{N}. \] It follows from Lemma \ref{lem:continuity of u (half line)} that the functions $\Psi_{n}$ are uniformly equicontinuous. The local uniform convergence of $\Phi_{n}$ to $\Phi$ shows that $\Psi_{n}$ converges pointwise to $\Phi(re^{i\theta})$. Now, both $\Psi_{n}$ and $\Phi(re^{i\theta})$ extend continuously to $\theta=0$ with \[ \Psi_{n}(0)=h_{n}(r),\;\Phi(r)=h(r). \] The uniform equicontinuity of $\Psi_{n}$ implies that the convergence also holds (even uniformly) for these continuous extensions, and at $\theta=0$ this yields the desired equality $\lim_{n\to\infty}h_{n}(r)=h(r)$. The pointwise convergence of $h_{n}^{\langle-1\rangle}$ to $h^{\langle-1\rangle}$ follows directly from (2). Indeed, suppose that $t_{0}>0$, $s_{0}=h(t_{0})$, and $0<\varepsilon<t_{0}$. We have $\lim_{n\to\infty}h_{n}(t_{0}-\varepsilon)=h(t_{0}-\varepsilon),$ $\lim_{n\to\infty}h_{n}(t_{0}+\varepsilon)=h(t_{0}+\varepsilon)$, and the open interval $(h(t_{0}-\varepsilon),h(t_{0}+\varepsilon))$ contains $s_{0}$. It follows that the interval $(h_{n}(t_{0}-\varepsilon),h_{n}(t_{0}+\varepsilon))$ also contains $s_{0}$ for sufficiently large $n$, and thus $h_{n}^{\langle-1\rangle}(s_{0})\in(t_{0}-\varepsilon,t_{0}+\varepsilon)$ for such $n$.
Since $\varepsilon$ is arbitrary, we have $\lim_{n\to\infty}h_{n}^{\langle-1\rangle}(s_{0})=t_{0}=h^{\langle-1\rangle}(s_{0})$. Finally, (4) and (5) follow from (1) and (3) (see \cite[Theorem XII.2.2]{dug}). \end{proof} We are now ready to show that the weak convergence of infinitely divisible measures implies the convergence of the densities of these measures, locally uniformly outside a singleton. We first identify the density of a $\boxtimes$-infinitely divisible measure $\mu$, for which $\eta_{\mu}^{\langle-1\rangle}$ has the continuation $\Phi$ in (\ref{eq:extension of eta inverse (line)}), in terms of the functions $f$ and $h$. The fact that the extension of $\eta_{\mu}$ to $(0,+\infty)$ is continuous and injective shows that $A_{\mu}=\{t\in(0,+\infty):\eta_{\mu}(t)=1\}$ is either empty or a singleton. It is clear from the definition of $f$ that the set $A_{\mu}$ is nonempty precisely when $I_{1}(0)\le1$. If this condition is satisfied, the set $A_{\mu}$ consists of the single point $h(1)$, and $\mu(\{1/h(1)\})=1-I_{1}(0)$. Accordingly, we denote $D_{\mu}=\{1/h(1)\}$ if $I_{1}(0)\le1$, and $D_{\mu}=\varnothing$ otherwise. It follows that $\mu$ is absolutely continuous with a continuous density $p_{\mu}=d\mu/dt$ on $(0,+\infty)\backslash D_{\mu}$. Equations (\ref{eq:param of eta_mu(pos line)}) and (\ref{eq:density vs eta}) give the implicit formula \begin{equation} \frac{1}{h(r)}p_{\mu}\left(\frac{1}{h(r)}\right)=\frac{1}{\pi}\frac{r\sin f(r)}{|1-re^{if(r)}|^{2}},\qquad r>0,\;h(r)\notin A_{\mu}.\label{eq:the density in terms of h and r} \end{equation} We record for further use a simple consequence of (\ref{eq:the density in terms of h and r}).
For fixed $r$, the function $|1-re^{i\theta}|,$ $\theta\in\mathbb{R}$, achieves its minimum at $\theta=0$, and thus \[ \frac{1}{h(r)}p_{\mu}\left(\frac{1}{h(r)}\right)\le\frac{r}{\pi(1-r)^{2}},\quad r\in(0,+\infty)\setminus\left\{ 1\right\} , \] or, equivalently, \begin{equation} tp_{\mu}(t)\le\frac{h^{\langle-1\rangle}(1/t)}{\pi(1-h^{\langle-1\rangle}(1/t))^{2}},\quad t\in(0,+\infty)\setminus D_{\mu}.\label{eq:xp(x) estimated (for endpoints)} \end{equation} \begin{prop} \label{prop:unif convergence of inf div densities pos line} Let $\mu$ and $\{\mu_{n}\}_{n\in\mathbb{N}}$ be nondegenerate $\boxtimes$-infinitely divisible measures in $\mathcal{P}_{\mathbb{R}_{+}}$ and let $U$ be an arbitrary open neighborhood of the set $D_{\mu}$; if $D_{\mu}=\varnothing$, take $U=\varnothing$. Then $D_{\mu_{n}}\subset U$ for sufficiently large $n$, and the functions $tp_{\mu_{n}}(t)$ converge to $tp_{\mu}(t)$ uniformly for $t\in(0,+\infty)\backslash U$. \end{prop} \begin{proof} We use the notation established above: $\eta_{\mu_{n}}^{\langle-1\rangle}$ has the analytic continuation $\Phi_{n}$ determined by the parameters $\gamma_{n}$ and $\sigma_{n}$, and $f_{n},h_{n}$ play the roles of $f,h$ for the measure $\mu_{n}$. The relation $D_{\mu_{n}}=\left\{ 1/h_{n}(1)\right\} \subset U$ for large $n$ follows directly from Lemma \ref{lem:f_n tends to f etc}(2). We focus on the proof of uniform convergence. We show first that it suffices to prove that $tp_{\mu_{n}}(t)$ converges to $tp_{\mu}(t)$ locally uniformly on $(0,+\infty)\backslash U$. For this purpose, fix $\varepsilon>0$ and choose $\alpha,\beta\in(0,+\infty)$ such that \[ \frac{x}{\pi(1-x)^{2}}<\varepsilon,\quad x\in(0,+\infty)\backslash[\alpha,\beta]. \] Since $h^{\langle-1\rangle}$ is an increasing homeomorphism of $(0,+\infty)$, there exist $a,b\in(0,+\infty)$ such that $h^{\langle-1\rangle}(1/b)<\alpha$ and $h^{\langle-1\rangle}(1/a)>\beta$.
Lemma \ref{lem:f_n tends to f etc} shows that there exists $N\in\mathbb{N}$ such that $h_{n}^{\langle-1\rangle}(1/b)<\alpha$ and $h_{n}^{\langle-1\rangle}(1/a)>\beta$ for $n\ge N,$ and hence \[ tp_{\mu_{n}}(t),tp_{\mu}(t)<\varepsilon,\quad t\in(0,+\infty)\backslash[a,b], \] by (\ref{eq:xp(x) estimated (for endpoints)}). It suffices therefore to prove uniform convergence on $[a,b]\backslash U$, and this would follow from local uniform convergence on $(0,+\infty)\backslash D_{\mu}$. For this purpose, it is convenient to write (\ref{eq:the density in terms of h and r}) in the explicit form \begin{equation} tp_{\mu}(t)=\frac{1}{\pi}\frac{h^{\langle-1\rangle}(1/t)\sin f(h^{\langle-1\rangle}(1/t))}{|1-h^{\langle-1\rangle}(1/t)e^{if(h^{\langle-1\rangle}(1/t))}|^{2}},\quad t\notin D_{\mu}.\label{eq: explicit 3.1} \end{equation} Suppose that $t_{0}\notin D_{\mu}$, and choose a compact neighborhood $W$ of $t_{0}$ such that \[ 1-h^{\langle-1\rangle}(1/t)e^{if(h^{\langle-1\rangle}(1/t))}\ne0,\quad t\in W. \] Lemma \ref{lem:f_n tends to f etc} shows that there exists an integer $N$ such that \[ 1-h_{n}^{\langle-1\rangle}(1/t)e^{if_{n}(h_{n}^{\langle-1\rangle}(1/t))}\ne0,\quad t\in W,\;n\ge N, \] and then we conclude from (\ref{eq: explicit 3.1}) (applied to $\mu_{n}$), and from Lemma \ref{lem:f_n tends to f etc}, that $tp_{\mu_{n}}(t)$ converges to $tp_{\mu}(t)$ uniformly on $W$. \end{proof} An immediate consequence is as follows. \begin{cor} \label{cor:uniform conv from xp(x)} Under the conditions of \textup{Proposition} \emph{\ref{prop:unif convergence of inf div densities pos line},} the sequence $\{p_{\mu_{n}}\}_{n\in\mathbb{N}}$ converges to $p_{\mu}$ locally uniformly on $(0,+\infty)\backslash D_{\mu}$. \end{cor} We can now prove a general version of superconvergence.
\begin{thm} \label{thm:superconvergence Rplus}Let $k_{1}<k_{2}<\cdots$ be positive integers, and let $\mu$ and $\{\nu_{n}\}_{n\in\mathbb{N}}$ be nondegenerate measures in $\mathcal{P}_{\mathbb{R}_{+}}$ such that $\mu$ is $\boxtimes$-infinitely divisible. Suppose that the sequence $\{\nu_{n}^{\boxtimes k_{n}}\}_{n\in\mathbb{N}}$ converges weakly to $\mu$. Let $K\subset(0,+\infty)\backslash D_{\mu}$ be an arbitrary compact set. Then $\nu_{n}^{\boxtimes k_{n}}$ is absolutely continuous on $K$ for sufficiently large $n$, and the sequence $\{d\nu_{n}^{\boxtimes k_{n}}/dt\}_{n\in\mathbb{N}}$ converges to $d\mu/dt$ uniformly on $K$. \end{thm} \begin{proof} As noted at the end of Section 2, there exist nondegenerate $\boxtimes$-infinitely divisible measures $\mu_{n}\in\mathcal{P}_{\mathbb{R}_{+}}$ such that $\eta_{\nu_{n}^{\boxtimes k_{n}}}=\eta_{\nu_{n}}\circ\eta_{\mu_{n}}$ and \begin{equation} \eta_{\mu_{n}}(1/x)\left[x\eta_{\mu_{n}}(1/x)\right]^{1/(k_{n}-1)}=\eta_{\nu_{n}^{\boxtimes k_{n}}}(1/x),\quad x\in(0,+\infty),\;n\in\mathbb{N}.\label{eq:even more eta vs eta} \end{equation} It is known from \cite{BPata-mult-laws} that the sequence $\{\nu_{n}\}_{n\in\mathbb{N}}$ converges weakly to $\delta_{1}$, and therefore the functions $\eta_{\nu_{n}}(z)$ converge to $z$ uniformly on compact subsets of $\mathbb{C}\backslash\mathbb{R}_{+}$. Similarly, the functions $\eta_{\nu_{n}}^{\langle-1\rangle}(z)$ converge uniformly to $z$ on compact subsets of $(-\infty,0).$ Since $\eta_{\nu_{n}^{\boxtimes k_{n}}}$ converges to $\eta_{\mu}$ uniformly on compact subsets of $\mathbb{C}\backslash\mathbb{R}_{+}$, we deduce that the functions $\eta_{\mu_{n}}(z)=\eta_{\nu_{n}}^{\langle-1\rangle}(\eta_{\nu_{n}^{\boxtimes k_{n}}}(z))$ converge to $\eta_{\mu}$ uniformly on compact subsets of $(-\infty,0)$. It follows that the sequence $\{\mu_{n}\}_{n\in\mathbb{N}}$ converges weakly to $\mu$.
By Lemma \ref{lem:f_n tends to f etc}(5), $\eta_{\mu_{n}}(x)$ tends to $\eta_{\mu}(x)$ uniformly on compact subsets of $(0,+\infty)$, and therefore \[ \left[x\eta_{\mu_{n}}(1/x)\right]^{1/(k_{n}-1)} \] converges to $1$ uniformly on compact subsets of $(0,+\infty)$ since $k_{n}\to\infty$. Then (\ref{eq:even more eta vs eta}) shows that $\eta_{\nu_{n}^{\boxtimes k_{n}}}(1/x)$ converges to $\eta_{\mu}(1/x)$ uniformly on compact subsets of $(0,+\infty)$. The conclusion of the theorem follows now from (\ref{eq:density vs eta}) applied to these measures, as in the proof of Proposition \ref{prop:unif convergence of inf div densities pos line}. \end{proof} \section{Cusp behavior in $\mathcal{P}_{\mathbb{R}_{+}}$\label{sec:cusps-in R_+} } In this section, we describe the qualitative behavior of a convolution $\mu_{1}\boxtimes\mu_{2}$, where $\mu_{1},\mu_{2}\in\mathcal{P}_{\mathbb{R}_{+}}$ are nondegenerate measures and $\mu_{2}$ is $\boxtimes$-infinitely divisible, subject to a mild additional condition. It was shown in \cite{Bi-cusp} how an analytic function argument provides examples in which the density of $\mu\boxplus\nu$, with $\nu$ a semicircle law, can have a cusp behavior at some points. More precisely, if $h$ is the density, then, at some of its zeros $t_{0}\in\mathbb{R}$, the ratio $h(t)/|t-t_{0}|^{1/3}$ is bounded away from zero and infinity. It is also shown in \cite{Bi-cusp} that this is the worst possible cusp behavior that such a density can have. Arguments similar to those in \cite{Bi-cusp} show that the density of $\mu_{1}\boxtimes\mu_{2}$ can also be bounded by a cubic root near a zero if $\mu_{2}$ is the multiplicative analog of the semicircular law. Our purpose in this section is to show that this is the worst possible behavior for such densities if $\mu_{2}$ is an almost arbitrary $\boxtimes$-infinitely divisible measure. The argument proceeds in two steps.
First, we work with the case in which $\mu_{2}$ is the multiplicative analog of a semicircular measure, thus producing a multiplicative analog of Proposition 4 and Corollary 5 in \cite{Bi-cusp}. For general $\mu_{2}$, we show that the density of $\mu_{1}\boxtimes\mu_{2}$ can be estimated using a different convolution $\nu_{1}\boxtimes\nu_{2}$, where $\nu_{2}$ is one of these multiplicative analogs of the semicircular measure, chosen with appropriate parameters. We recall an observation first made in \cite{Bi-cusp} in the free additive case. (The simple proof is provided for convenience as well as for establishing notation.) \begin{lem} \label{lem:omega is inf-div} Let $\mu_{1},\mu_{2}\in\mathcal{P}_{\mathbb{R}_{+}}$ be such that $\mu_{2}$ is $\boxtimes$-infinitely divisible, and let $\rho_{1},\rho_{2}\in\mathcal{P}_{\mathbb{R}_{+}}$ be given by \textup{Theorem} \emph{\ref{thm:subordination on the line (mult)}}. Then $\rho_{1}$ is $\boxtimes$-infinitely divisible. \end{lem} \begin{proof} Let $\Phi$ given by (\ref{eq:extension of eta inverse (line)}) be the analytic continuation of $\eta_{\mu_{2}}^{\langle-1\rangle}$. Then (\ref{eq:defining boxtimes}) can be written as \[ \frac{\Phi(z)}{z}\eta_{\mu_{1}}^{\langle-1\rangle}(z)=\eta_{\mu_{1}\boxtimes\mu_{2}}^{\langle-1\rangle}(z), \] and applying this equality with $\eta_{\mu_{1}}(z)$ in place of $z$, we obtain \[ \frac{\Phi(\eta_{\mu_{1}}(z))}{\eta_{\mu_{1}}(z)}z=\eta_{\mu_{1}\boxtimes\mu_{2}}^{\langle-1\rangle}(\eta_{\mu_{1}}(z))=\eta_{\rho_{1}}^{\langle-1\rangle}(z) \] for some $\beta<0$ and all $z\in(\beta,0)$.
The lemma follows because the function \begin{equation} \Psi(z)=\frac{\Phi(\eta_{\mu_{1}}(z))}{\eta_{\mu_{1}}(z)}z,\quad z\in\mathbb{C}\backslash\mathbb{R}_{+},\label{eq:big psi line} \end{equation} is of the form $z\exp(v(z)),$ where \[ v(z)=\log\gamma+\int_{[0,+\infty]}\frac{1+t\eta_{\mu_{1}}(z)}{\eta_{\mu_{1}}(z)-t}\,d\sigma(t),\quad z\in\mathbb{C}\setminus\mathbb{R}_{+}, \] is an analytic function satisfying $v(\mathbb{H})\subset-\mathbb{H}$ (since $\eta_{\mu_{1}}(\mathbb{H})\subset\mathbb{H})$ and $v(\overline{z})=\overline{v(z)}$ for $z\in\mathbb{C}\setminus\mathbb{R}_{+}$. \end{proof} With the notation of the preceding lemma, we recall that the domain \[ \eta_{\rho_{1}}(\mathbb{H})=\Omega_{\rho_{1}}\cap\mathbb{H} \] can be described as \[ \eta_{\rho_{1}}(\mathbb{H})=\{re^{i\theta}:r>0,f(r)<\theta<\pi\} \] for some continuous function $f:(0,+\infty)\to[0,\pi)$, and that $\eta_{\rho_{1}}$ extends to a homeomorphism of $\overline{\mathbb{H}}$ onto $\overline{\eta_{\rho_{1}}(\mathbb{H})}$. It was shown in \cite{huang-wang} that $\eta_{\mu_{1}}$ extends continuously to $\overline{\eta_{\rho_{1}}(\mathbb{H})}$ provided that we allow $\infty$ as a possible value. Using, as before, the increasing homeomorphism \[ h(r)=\Psi(re^{if(r)}),\quad r\in(0,+\infty), \] the density $q_{\mu_{1}\boxtimes\mu_{2}}$ of $\mu_{1}\boxtimes\mu_{2}$, relative to the Haar measure $dx/x$ on $(0,+\infty)$, is calculated using the formula \begin{equation} q_{\mu_{1}\boxtimes\mu_{2}}(1/x)=\begin{cases} \frac{1}{\pi}\Im\frac{1}{1-\eta_{\mu_{1}}(re^{if(r)})}, & x=h(r)\text{ and }f(r)>0,\\ 0, & x=h(r)\text{ and }f(r)=0. \end{cases}\label{eq:density boxtimes line} \end{equation} The following proposition examines the density of $\mu_{1}\boxtimes\mu_{2}$ when $\mu_{2}$ is analogous to the semicircular measure, that is, when $\sigma$ is a point mass at $t=1$. (The equation (\ref{eq:density, semicircle xtimes}) regarding this density also appeared in \cite{Zhong}.) 
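Before stating the proposition, note how the continuation (\ref{eq:extension of eta inverse (line)}) specializes when $\sigma=\beta\delta_{1}$ is a point mass of total mass $\beta>0$ at $t=1$ (a one-line computation, recorded here for convenience):

```latex
\Phi(z)=\gamma z\exp\left[\int_{[0,+\infty]}\frac{1+tz}{z-t}\,d\sigma(t)\right]
=\gamma z\exp\left[\beta\,\frac{z+1}{z-1}\right],\quad z\in\mathbb{C}\backslash\mathbb{R}_{+}.
```

This is the form of the continuation of $\eta_{\mu_{2}}^{\langle-1\rangle}$ assumed in the next statement.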
\begin{prop} \label{prop:boxtimes convo with semisemi} Suppose that $\beta,\gamma\in(0,+\infty)$, and that $\mu_{2}\in\mathcal{P}_{\mathbb{R}_{+}}$ is such that \[ \gamma z\exp\left[\beta\frac{z+1}{z-1}\right],\quad z\in\mathbb{C}\backslash\mathbb{R}_{+} \] is an analytic continuation of $\eta_{\mu_{2}}^{\langle-1\rangle}$. Let $q_{\mu_{1}\boxtimes\mu_{2}}$ be the density of $\mu_{1}\boxtimes\mu_{2}$ relative to the Haar measure $dx/x$ and define $k(x)=q_{\mu_{1}\boxtimes\mu_{2}}(1/x).$ Then\emph{:} \begin{enumerate} \item $\left|k'(x)\right|k(x)^{2}\le (4\pi^{3}\beta^{2}x)^{-1}$ for every $x\in(0,+\infty)$ such that $k(x)\ne0$. \item If $I\subset\mathbb{R}_{+}$ is an interval with one endpoint $x_{0}>0$, $k(x)>0$ for $x\in I$, and $k(x_{0})=0$, then \[ k(x)^{3}\le\frac{3}{4\pi^{3}\beta^{2}}|\log x-\log x_{0}|,\quad x\in I. \] In particular, $k(x)/|x-x_{0}|^{1/3}$ and $k(x)/|x^{-1}-x_{0}^{-1}|^{1/3}$ remain bounded for $x\in I$ close to $x_{0}$. \end{enumerate} \end{prop} \begin{proof} Part (2) follows from (1) because \[ k(x)^{3}=\left|\int_{x_{0}}^{x}3k(s)^{2}k'(s)\,ds\right|. \] By (\ref{eq:big psi line}), we have \begin{align*} \Psi(z) & =\gamma z\exp\left[\beta\frac{\eta_{\mu_{1}}(z)+1}{\eta_{\mu_{1}}(z)-1}\right]\\ & =\gamma z\exp\beta\left[1-\frac{2}{1-\eta_{\mu_{1}}(z)}\right]\\ & =e^{-\beta}\gamma z\exp\left[-2\beta\psi_{\mu_{1}}(z)\right]. \end{align*} The fact that $\arg\Psi\left(re^{if(r)}\right)=0$ leads to the identity \begin{equation} f(r)=2\beta\Im\frac{1}{1-\eta_{\mu_{1}}(re^{if(r)})}=2\pi\beta k(\Psi(re^{if(r)})).\label{eq:density, semicircle xtimes} \end{equation} We note for further use that \begin{equation} \Im\frac{1}{1-\eta_{\mu_{1}}(re^{if(r)})}=\Im(1+\psi_{\mu_{1}}(re^{if(r)}))=\int_{\mathbb{R}_{+}}\frac{tr\sin(f(r))}{|1-tre^{if(r)}|^{2}}\,d\mu_{1}(t).\label{eq:useful soon} \end{equation} Of course, our estimate applies to points $x=x(r)=\Psi(re^{if(r)})$ such that $f(r)>0$, and $f$ is continuously differentiable at such $r$. 
By the chain rule and \eqref{eq:density, semicircle xtimes}, \begin{equation} k'(x(r))=\frac{(d/dr)k(x(r))}{(d/dr)x(r)}=\frac{(1/2\pi\beta)f'(r)}{[x'(r)/x(r)]x(r)},\label{eq:chain rule} \end{equation} and thus we must find lower estimates for the logarithmic derivative $x'(r)/x(r)$. We have \begin{align*} \left|\frac{(d/dr)\Psi(re^{if(r)})}{\Psi(re^{if(r)})}\right| & =\left|\frac{\Psi'(re^{if(r)})}{\Psi(re^{if(r)})}\right|\left|\frac{d(re^{if(r)})}{dr}\right|\\ & =\left|\frac{1}{re^{if(r)}}-2\beta\psi'_{\mu_{1}}(re^{if(r)})\right|\left|e^{if(r)}(1+irf'(r))\right|\\ & =\frac{1}{r}\left|1-2\beta re^{if(r)}\psi'_{\mu_{1}}(re^{if(r)})\right|\sqrt{1+r^{2}f'(r)^{2}}. \end{align*} Observe that \[ \psi_{\mu_{1}}'(re^{if(r)})=\int_{\mathbb{R}_{+}}\frac{t}{(1-tre^{if(r)})^{2}}\,d\mu_{1}(t), \] and use relations (\ref{eq:density, semicircle xtimes}) and (\ref{eq:useful soon}) to see that \begin{align*} 1-2\beta re^{if(r)}\psi'_{\mu_{1}}(re^{if(r)}) & =\frac{2\beta}{f(r)}\Im\frac{1}{1-\eta_{\mu_{1}}(re^{if(r)})}-2\beta re^{if(r)}\psi'_{\mu_{1}}(re^{if(r)})\\ & =2\beta\int_{\mathbb{R}_{+}}\left[\frac{1}{f(r)}\frac{tr\sin(f(r))}{|1-tre^{if(r)}|^{2}}-\frac{tre^{if(r)}}{(1-tre^{if(r)})^{2}}\right]\,d\mu_{1}(t). 
\end{align*} We now calculate \begin{align*} \Re & \left[\frac{1}{f(r)}\frac{tr\sin(f(r))}{|1-tre^{if(r)}|^{2}}-\frac{tre^{if(r)}}{(1-tre^{if(r)})^{2}}\right]\\ & =tr\frac{\sin(f(r))|1-tre^{if(r)}|^{2}-f(r)\Re\left[e^{if(r)}(1-tre^{-if(r)})^{2}\right]}{f(r)|1-tre^{if(r)}|^{4}}\\ & =tr\frac{(1+t^{2}r^{2})\left[\sin(f(r))-f(r)\cos(f(r))\right]+tr\left[2f(r)-\sin(2f(r))\right]}{f(r)|1-tre^{if(r)}|^{4}}\\ & \ge\frac{t^{2}r^{2}\left[2f(r)-\sin(2f(r))\right]}{f(r)|1-tre^{if(r)}|^{4}}, \end{align*} where we used the fact that $\sin f-f\cos f\ge0$ for $f\in(0,\pi).$ Thus, \begin{align*} |1-2\beta re^{if(r)}\psi'_{\mu_{1}}(re^{if(r)})| & \ge2\beta\int_{\mathbb{R}_{+}}\Re\left[\frac{1}{f(r)}\frac{tr\sin(f(r))}{|1-tre^{if(r)}|^{2}}-\frac{tre^{if(r)}}{(1-tre^{if(r)})^{2}}\right]\,d\mu_{1}(t)\\ & \ge2\beta\frac{2f(r)-\sin(2f(r))}{f(r)}\int_{\mathbb{R}_{+}}\frac{t^{2}r^{2}}{|1-tre^{if(r)}|^{4}}\,d\mu_{1}(t)\\ \text{(Schwarz inequality)} & \ge2\beta\frac{2f(r)-\sin(2f(r))}{f(r)}\left[\int_{\mathbb{R}_{+}}\frac{tr}{|1-tre^{if(r)}|^{2}}\,d\mu_{1}(t)\right]^{2}\\ \text{(by (\ref{eq:density, semicircle xtimes}) and (\ref{eq:useful soon})) } & =2\beta\frac{2f(r)-\sin(2f(r))}{f(r)}\left[\frac{f(r)}{2\beta\sin(f(r))}\right]^{2}\\ & =\frac{2f(r)-\sin(2f(r))}{f(r)\sin^{2}(f(r))}\left[\frac{f(r)^{2}}{2\beta}\right]. \end{align*} A further lower bound is obtained using the inequality $2f-\sin(2f)\ge f\sin^{2}f$, valid for $f\in(0,\pi)$. We obtain \begin{align*} \left|\frac{x'(r)}{x(r)}\right| & =\left|\frac{(d/dr)\Psi(re^{if(r)})}{\Psi(re^{if(r)})}\right|\\ & \ge\frac{f(r)^{2}}{2\beta}\frac{\sqrt{1+r^{2}f'(r)^{2}}}{r}, \end{align*} and finally from (\ref{eq:chain rule}), \begin{align*} |k'(x(r))| & =\left|\frac{(1/2\pi\beta)f'(r)}{(x'(r)/x(r))x(r)}\right|\\ & \le\frac{1}{\pi f(r)^{2}x(r)}\frac{r|f'(r)|}{\sqrt{1+r^{2}f'(r)^{2}}}\\ & \le\frac{1}{\pi f(r)^{2}x(r)}. \end{align*} By (\ref{eq:density, semicircle xtimes}), this is precisely the inequality in (1).
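For completeness, here is a sketch of the elementary inequality $2f-\sin(2f)\ge f\sin^{2}f$, $f\in(0,\pi)$, used in the last step. With $g(f)=2f-\sin(2f)-f\sin^{2}f$ we have $g(0)=0$ and

```latex
g'(f)=2-2\cos(2f)-\sin^{2}f-f\sin(2f)
=3\sin^{2}f-f\sin(2f)
=\sin f\left(3\sin f-2f\cos f\right)\ge0,
```

since for $f\in(0,\pi/2]$ the inequality $\tan f>f$ gives $3\sin f>2f\cos f$, while for $f\in(\pi/2,\pi)$ we have $\cos f\le0$. Hence $g$ is nondecreasing, and $g\ge0$ on $(0,\pi)$.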
\end{proof} \begin{rem} With the notation of the preceding proposition, it is easy to verify that the inequality in (1) is equivalent to \[ \left|q'_{\mu_{1}\boxtimes\mu_{2}}(x)\right|q_{\mu_{1}\boxtimes\mu_{2}}(x)^{2}\le\frac{1}{4\pi^{3}\beta^{2}x},\quad x\in(0,+\infty),\ q_{\mu_{1}\boxtimes\mu_{2}}(x)\ne0. \] \end{rem} One essential observation that allows us to extend the preceding result to more general $\boxtimes$-infinitely divisible measures $\mu_{2}$ is as follows. The density of $\mu_{1}\boxtimes\mu_{2}$ depends largely, via (\ref{eq:density boxtimes line}), on the function $f$, and thus on the $\boxtimes$-infinitely divisible measure $\rho_{1}$. In many cases, it is possible to find another convolution $\nu_{1}\boxtimes\nu_{2}$, such that $\eta_{\nu_{1}\boxtimes\nu_{2}}=\eta_{\nu_{1}}\circ\eta_{\rho_{1}}$ (with the same measure $\rho_{1}$), and such that $\nu_{2}$ is a multiplicative analog of the semicircular measure. The verification of the following result is a simple calculation. The details are left to the reader. Note that the existence of the measure $\nu_{1}$ below follows from Lemmas 2.1 and 2.3. \begin{lem} \label{lem:trade a free convolution for another}Let $\mu_{1},\mu_{2}\in\mathcal{P}_{\mathbb{R}_{+}}$ be such that $\mu_{2}$ is $\boxtimes$-infinitely divisible, and let \[ \Phi(z)=\gamma z\exp\left[\int_{[0,+\infty]}\frac{1+tz}{z-t}\,d\sigma(t)\right],\quad z\in\mathbb{C}\backslash\mathbb{R}_{+}, \] be an analytic continuation of $\eta_{\mu_{2}}^{\langle-1\rangle}$. Denote by $\rho_{1}\in\mathcal{P}_{\mathbb{R}_{+}}$ the $\boxtimes$-infinitely divisible measure such that $\eta_{\rho_{1}}^{\langle-1\rangle}$ has the analytic continuation \[ \Psi(z)=\gamma z\exp\left[\int_{[0,+\infty]}\frac{1+t\eta_{\mu_{1}}(z)}{\eta_{\mu_{1}}(z)-t}\,d\sigma(t)\right],\quad z\in\mathbb{C}\backslash\mathbb{R}_{+}. \] Suppose that \[ \beta=\frac{1}{2}\int_{[0,+\infty]}\left(\frac{1}{t}+t\right)\,d\sigma(1/t) \] is finite and nonzero.
Denote by $\nu_{1}\in\mathcal{P}_{\mathbb{R}_{+}}$ the measure satisfying \[ \psi_{\nu_{1}}(z)=\frac{1}{2\beta}\int_{[0,+\infty]}\frac{t\eta_{\mu_{1}}(z)}{1-t\eta_{\mu_{1}}(z)}\,\left(\frac{1}{t}+t\right)\,d\sigma(1/t),\quad z\in\mathbb{C}\backslash\mathbb{R}_{+}, \] and denote by $\nu_{2}\in\mathcal{P}_{\mathbb{R}_{+}}$ the $\boxtimes$-infinitely divisible measure such that $\eta_{\nu_{2}}^{\langle-1\rangle}$ has the analytic continuation \[ \gamma^{\prime}z\exp\left[\beta\frac{z+1}{z-1}\right],\quad z\in\mathbb{C}\backslash\mathbb{R}_{+}, \] where \[ \gamma^{\prime}=\gamma\exp\left[\frac{1}{2}\int_{[0,+\infty]}\left(\frac{1}{t}-t\right)\,d\sigma(1/t)\right]. \] Then $\eta_{\mu_{1}\boxtimes\mu_{2}}=\eta_{\mu_{1}}\circ\eta_{\rho_{1}}$ and $\eta_{\nu_{1}\boxtimes\nu_{2}}=\eta_{\nu_{1}}\circ\eta_{\rho_{1}}$. \end{lem} For the final proof in this section, we need some results from \cite{huang-wang}, which we formulate using the notation established in Lemma \ref{lem:omega is inf-div}. According to \cite[Theorem 4.16]{huang-wang}, the zero set $\{\alpha\in(0,+\infty):f(\alpha)=0\}$ can be partitioned into three sets $A,B,C$ defined as follows. \begin{enumerate} \item The set $A$ consists of those $\alpha\in(0,+\infty)$ such that $\mu_{1}(\{1/\alpha\})>0$ and \[ \int_{[0,+\infty]}\frac{1+t^{2}}{(1-t)^{2}}\,d\sigma(t)\le\mu_{1}(\{1/\alpha\}). \] \item The set $B$ consists of those $\alpha\in(0,+\infty)$ for which $\eta_{\mu_{1}}(\alpha)\in\mathbb{R}\backslash\{1\}$ and \[ \left[\int_{\mathbb{R}_{+}}\frac{\alpha t}{(1-\alpha t)^{2}}\,d\mu_{1}(t)\right]\left[\int_{[0,+\infty]}\frac{1+t^{2}}{(\eta_{\mu_{1}}(\alpha)-t)^{2}}\,d\sigma(t)\right]\le\frac{1}{(1-\eta_{\mu_{1}}(\alpha))^{2}}. \] \item Finally, $\alpha\in C$ provided that $\eta_{\mu_{1}}(\alpha)=\infty$ and \[ \left[\int_{\mathbb{R}_{+}}\frac{d\mu_{1}(t)}{(1-\alpha t)^{2}}\right]\left[\int_{[0,+\infty]}(1+t^{2})\,d\sigma(t)\right]\le1. 
\] \end{enumerate} The proof of this result relies on Proposition 4.10 of \cite{B-B-IMRN}, which states that $f(\alpha)=0$ if and only if the map $\Psi$ has a finite Julia-Carath\'{e}odory derivative at the point $\alpha$, so that the preceding inequalities are a consequence of the chain rule for the Julia-Carath\'{e}odory derivative. (See \cite{Shapiro} for the basics of the Julia-Carath\'{e}odory derivative.) The density of $\mu_{1}\boxtimes\mu_{2}$ is continuous everywhere, except on the finite set \[ \{1/\Psi(\alpha):\alpha\in A\}. \] If $x\in(0,+\infty)$ is an atom of $\mu_{1}\boxtimes\mu_{2}$, then $\eta_{\rho_{1}}(1/x)\in A$. \begin{thm} \label{thm:cusp on R+}Let $\mu_{1},\mu_{2}\in\mathcal{P}_{\mathbb{R}_{+}}$ be two nondegenerate measures such that $\mu_{2}$ is $\boxtimes$-infinitely divisible, and let \[ \Phi(z)=\gamma z\exp\left[\int_{[0,+\infty]}\frac{1+tz}{z-t}\,d\sigma(t)\right],\quad z\in\mathbb{C}\backslash\mathbb{R}_{+}, \] be an analytic continuation of $\eta_{\mu_{2}}^{\langle-1\rangle}$. Suppose that $\sigma(0,+\infty)>0$. If $I\subset(0,+\infty)$ is an open interval with an endpoint $x_{0}>0$ such that $1/\eta_{\rho_{1}}(1/x_{0})$ is not an atom of $\mu_{1}$, and $q_{\mu_{1}\boxtimes\mu_{2}}(x_{0})=0<q_{\mu_{1}\boxtimes\mu_{2}}(x)$ for every $x\in I$, then $q_{\mu_{1}\boxtimes\mu_{2}}(x)/|x-x_{0}|^{1/3}$ is bounded for $x\in I$ close to $x_{0}$. \end{thm} \begin{proof} We can always find finite measures $\sigma'$ and $\sigma''$ on $[0,+\infty]$ such that $\sigma=\sigma'+\sigma''$, $\sigma''\ne0$, and $\sigma''$ has compact support contained in $(0,+\infty)$.
The $\boxtimes$-infinitely divisible measures $\mu'_{2},\mu_{2}''\in\mathcal{P}_{\mathbb{R_{+}}}$, defined by the fact that $\eta_{\mu_{2}'}^{\langle-1\rangle}$ and $\eta_{\mu_{2}^{\prime\prime}}^{\langle-1\rangle}$ have analytic continuations \[ \gamma z\exp\left[\int_{[0,+\infty]}\frac{1+tz}{z-t}\,d\sigma'(t)\right]\text{ and }z\exp\left[\int_{[0,+\infty]}\frac{1+tz}{z-t}\,d\sigma''(t)\right],\quad z\in\mathbb{C}\backslash\mathbb{R}_{+}, \] respectively, satisfy the relation $\mu_{2}'\boxtimes\mu_{2}''=\mu_{2}$, and thus $\mu_{1}\boxtimes\mu_{2}=\mu_{1}''\boxtimes\mu_{2}''$, where $\mu_{1}''=\mu_{1}\boxtimes\mu_{2}'$. There exist additional $\boxtimes$-infinitely divisible measures $\rho_{1}',\rho_{1}''\in\mathcal{P}_{\mathbb{R_{+}}}$ such that $\eta_{\mu_{1}''}=\eta_{\mu_{1}}\circ\eta_{\rho_{1}'}$ and $\eta_{\mu_{1}\boxtimes\mu_{2}}=\eta_{\mu_{1}''}\circ\eta_{\rho_{1}''}$. Clearly, $\eta_{\rho_{1}}=\eta_{\rho_{1}'}\circ\eta_{\rho_{1}''}$, and we argue that $1/\eta_{\rho_{1}''}(1/x_{0})$ is a real number but not an atom of $\mu_{1}''$. Indeed, letting $z\rightarrow1/x_{0}$ in the inequality \[ \arg\eta_{\rho_{1}}(z)=\arg\eta_{\rho_{1}'}(\eta_{\rho_{1}''}(z))\ge\arg\eta_{\rho_{1}''}(z),\quad z\in\mathbb{H}, \] we see that the hypothesis $\eta_{\rho_{1}}(1/x_{0})\in(0,+\infty)$ implies that $\eta_{\rho_{1}''}(1/x_{0})\in(0,+\infty)$. Suppose, to get a contradiction, that $1/\eta_{\rho_{1}''}(1/x_{0})$ is an atom of $\mu_{1}''$. Then, as seen in \cite{bel-atomX}, \[ 1/\eta_{\rho_{1}}(1/x_{0})=1/\eta_{\rho_{1}'}(\eta_{\rho_{1}''}(1/x_{0})) \] is necessarily an atom of $\mu_{1}$, contrary to the hypothesis. The above construction shows that the hypothesis of the theorem also holds with $\mu_{1}'',\mu_{2}''$, and $\rho_{1}''$ in place of $\mu_{1},\mu_{2},$ and $\rho_{1}$, respectively. Moreover, it is obvious that $\int_{[0,+\infty]}((t^{2}+1)/t)\,d\sigma''(t)<+\infty$.
Therefore we may, and do, assume that the additional hypothesis $\int_{[0,+\infty]}((t^{2}+1)/t)\,d\sigma(t)<+\infty$ is satisfied. In particular, the hypothesis of Lemma \ref{lem:trade a free convolution for another} is satisfied. With the notation of that lemma, Proposition \ref{prop:boxtimes convo with semisemi} shows that it suffices to prove that $q_{\nu_{1}\boxtimes\nu_{2}}(x)/q_{\mu_{1}\boxtimes\mu_{2}}(x)$ is bounded away from zero for $x\in I$ close to $x_{0}$. For this purpose, we write points $x\in(0,+\infty)$ as $x=1/\Psi(re^{if(r)})$. In particular, $x_{0}=1/\Psi(r_{0}e^{if(r_{0})})$ and $f(r_{0})=0$. The fact that $1/\eta_{\rho_{1}}(1/x_{0})$ is not an atom of $\mu_{1}$ implies that $r_{0}\in B\cup C$. The formula (\ref{eq:density boxtimes line}) and the definition of $\nu_{1}$ yield \begin{align*} q_{\nu_{1}\boxtimes\nu_{2}}(x) & =\frac{1}{\pi}\Im\frac{1}{1-\eta_{\nu_{1}}(re^{if(r)})}=\frac{1}{\pi}\Im\psi_{\nu_{1}}(re^{if(r)})\\ & =\frac{1}{2\pi\beta}\Im\left[\int_{[0,+\infty]}\frac{t\eta_{\mu_{1}}(re^{if(r)})}{1-t\eta_{\mu_{1}}(re^{if(r)})}\left(t+\frac{1}{t}\right)\,d\sigma(1/t)\right]\\ & =\frac{\Im\eta_{\mu_{1}}(re^{if(r)})}{2\pi\beta}\int_{[0,+\infty]}\frac{1+t^{2}}{|t-\eta_{\mu_{1}}(re^{if(r)})|^{2}}\,d\sigma(t).
\end{align*} Since we also have \[ q_{\mu_{1}\boxtimes\mu_{2}}(x)=\frac{1}{\pi}\Im\frac{1}{1-\eta_{\mu_{1}}(re^{if(r)})}=\frac{1}{\pi}\frac{\Im\eta_{\mu_{1}}(re^{if(r)})}{|1-\eta_{\mu_{1}}(re^{if(r)})|^{2}}, \] we deduce that \begin{equation} \frac{q_{\nu_{1}\boxtimes\nu_{2}}(x)}{q_{\mu_{1}\boxtimes\mu_{2}}(x)}=\frac{|1-\eta_{\mu_{1}}(re^{if(r)})|^{2}}{2\beta}\int_{[0,+\infty]}\frac{1+t^{2}}{|t-\eta_{\mu_{1}}(re^{if(r)})|^{2}}\,d\sigma(t).\label{eq:use in remark} \end{equation} Letting $x\to x_{0}$, so $r\to r_{0}$, we see that \[ \liminf_{x\to x_{0},x\in I}\frac{q_{\nu_{1}\boxtimes\nu_{2}}(x)}{q_{\mu_{1}\boxtimes\mu_{2}}(x)}\ge\frac{|1-\eta_{\mu_{1}}(r_{0})|^{2}}{2\beta}\int_{[0,+\infty]}\frac{1+t^{2}}{|t-\eta_{\mu_{1}}(r_{0})|^{2}}\,d\sigma(t) \] if $r_{0}\in B$, and \[ \liminf_{x\to x_{0},x\in I}\frac{q_{\nu_{1}\boxtimes\nu_{2}}(x)}{q_{\mu_{1}\boxtimes\mu_{2}}(x)}\ge\frac{1}{2\beta}\int_{[0,+\infty]}(1+t^{2})\,d\sigma(t) \] if $r_{0}\in C$. In either case, the lower estimate is strictly positive. \end{proof} \begin{rem} \label{rem:universality, reverse inequality} In the above proof, we show that $q_{\mu_{1}\boxtimes\mu_{2}}(x)=O(q_{\nu_{1}\boxtimes\nu_{2}}(x))$ as $x\to x_{0},x\in I$. It is also true that $q_{\nu_{1}\boxtimes\nu_{2}}(x)=O(q_{\mu_{1}\boxtimes\mu_{2}}(x))$ as $x\to x_{0},x\in I$. To see this, we observe that the definition of $f$ implies the equality \[ f(r)=\Im\eta_{\mu_{1}}(re^{if(r)})\int_{[0,+\infty]}\frac{1+t^{2}}{|t-\eta_{\mu_{1}}(re^{if(r)})|^{2}}\,d\sigma(t). 
\] Thus, the reciprocal of the fraction in (\ref{eq:use in remark}) can be rewritten as \begin{align*} \frac{q_{\mu_{1}\boxtimes\mu_{2}}(x)}{q_{\nu_{1}\boxtimes\nu_{2}}(x)} & =\frac{2\beta}{|1-\eta_{\mu_{1}}(re^{if(r)})|^{2}}\frac{\Im\eta_{\mu_{1}}(re^{if(r)})}{f(r)}\\ & =\frac{2\beta r\sin(f(r))}{f(r)}\frac{\Im\eta_{\mu_{1}}(re^{if(r)})}{\Im(re^{if(r)})|1-\eta_{\mu_{1}}(re^{if(r)})|^{2}}\\ & =\frac{2\beta r\sin(f(r))}{f(r)}\frac{\Im\psi_{\mu_{1}}(re^{if(r)})}{\Im(re^{if(r)})}\\ & =\frac{2\beta r\sin(f(r))}{f(r)}\int_{\mathbb{R}_{+}}\frac{t}{\left|1-tre^{if(r)}\right|^{2}}\,d\mu_{1}(t). \end{align*} Letting $x\to x_{0}$ yields \[ \liminf_{x\to x_{0},x\in I}\frac{q_{\mu_{1}\boxtimes\mu_{2}}(x)}{q_{\nu_{1}\boxtimes\nu_{2}}(x)}\ge2\beta\int_{\mathbb{R}_{+}}\frac{tr_{0}}{(1-tr_{0})^{2}}\,d\mu_{1}(t)>0. \] Note that the quantity on the right hand side is in fact finite. This is immediate if $r_{0}\in B$, and it follows from the identity \[ \int_{\mathbb{R}_{+}}\frac{tr_{0}}{(1-tr_{0})^{2}}\,d\mu_{1}(t)=\int_{\mathbb{R}_{+}}\frac{1}{(1-tr_{0})^{2}}\,d\mu_{1}(t)-\int_{\mathbb{R}_{+}}\frac{1}{1-tr_{0}}\,d\mu_{1}(t) \] if $r_{0}\in C$. \begin{rem} There are cases, other than those of Proposition \ref{prop:boxtimes convo with semisemi}, in which the set $\{x_{0}>0:\eta_{\rho_{1}}(1/x_{0})\in A\}$ is empty, and thus the conclusion of Theorem \ref{thm:cusp on R+} holds at every zero of $q_{\mu_1\boxtimes \mu_{2}}$; in particular, the result holds at cusps and at the edge of every connected component of the set $\{x:p_{\mu_{1}\boxtimes\mu_{2}}(x)>0\}$. See Remark \ref{rem:no exceptions} for a brief discussion in the context of additive free convolution. \end{rem} \end{rem} \section{Free multiplicative convolution on $\mathbb{T}$\label{sec:Free-mutiplicative-convolution on T}} We denote by $\mathcal{P}_{\mathbb{T}}$ the collection of probability measures on $\mathbb{T}$. 
The definition of the moment generating function for a measure $\mu\in\mathcal{P}_{\mathbb{T}}$ is analogous to the one used for $\mathcal{P}_{\mathbb{R}_{+}}$, but the domain is now the unit disk $\mathbb{D}=\{z\in\mathbb{C}:|z|<1\}$: \[ \psi_{\mu}(z)=\int_{\mathbb{T}}\frac{tz}{1-tz}\,d\mu(t),\quad z\in\mathbb{D}. \] The $\eta$-transform of $\mu$ is the function \[ \eta_{\mu}(z)=\frac{\psi_{\mu}(z)}{1+\psi_{\mu}(z)},\quad z\in\mathbb{D}. \] The collection $\{\eta_{\mu}:\mu\in\mathcal{P}_{\mathbb{T}}\}$ is simply the set of all analytic functions $f:\mathbb{D}\to\mathbb{D}$ that satisfy $f(0)=0$. If we denote by \[ H_{\mu}(z)=\int_{\mathbb{T}}\frac{t+z}{t-z}\,d\mu(t),\quad z\in\mathbb{D}, \] the Herglotz integral of $\mu$, and if we define $\mu_{*}\in\mathcal{P}_{\mathbb{T}}$ by $d\mu_{*}(t)=d\mu(1/t)$, then \[ H_{\mu_{*}}(z)=1+2\psi_{\mu}(z)=\frac{1+\eta_{\mu}(z)}{1-\eta_{\mu}(z)},\quad z\in\mathbb{D}. \] Since $\Re H_{\mu_{*}}$ is the Poisson integral of $\mu_{*}$, we deduce that the measures \[ \frac{1}{2\pi}\Re\frac{1+\eta_{\mu}(re^{-i\theta})}{1-\eta_{\mu}(re^{-i\theta})}\,d\theta,\quad \theta\in[0,2\pi),\,r\in(0,1), \] converge weakly to $d\mu(e^{i\theta})$ as $r\uparrow1$. In particular, the density of $\mu$ relative to arclength measure $d\theta$ on $\mathbb{T}$ is given almost everywhere by \begin{equation} p_{\mu}(\xi)=\frac{1}{2\pi}\Re\frac{1+\eta_{\mu}(\overline{\xi})}{1-\eta_{\mu}(\overline{\xi})},\quad \xi\in\mathbb{T},\label{eq:density from eta, circle} \end{equation} where \[ \eta_{\mu}(\xi)=\lim_{r\uparrow1}\eta_{\mu}(r\xi),\quad \xi\in\mathbb{T}, \] exists almost everywhere as shown by Fatou \cite{Fatou}. In many cases of interest, the function $\eta_{\mu}$ extends continuously to $\mathbb{T}$, and thus $\mu$ is absolutely continuous on the set $\{\xi\in\mathbb{T}:\eta_{\mu}(\overline{\xi})\ne1\}$. 
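The identities relating $\psi_{\mu}$, $\eta_{\mu}$, and $H_{\mu_{*}}$ above are purely algebraic and easy to test numerically. The following Python sketch (an illustration only, not part of the formal development; the three-atom measure is an arbitrary choice) checks that $H_{\mu_{*}}=1+2\psi_{\mu}=(1+\eta_{\mu})/(1-\eta_{\mu})$, and that $\eta_{\mu}$ maps $\mathbb{D}$ into $\mathbb{D}$ with $\eta_{\mu}(0)=0$.

```python
import numpy as np

# Hypothetical atomic measure mu on T: atoms xi_k with weights w_k summing to 1.
xi = np.exp(1j * np.array([0.5, 2.0, 4.0]))
w = np.array([0.2, 0.5, 0.3])

def psi(z):
    # psi_mu(z) = int_T t z / (1 - t z) dmu(t)
    return np.sum(w * xi * z / (1 - xi * z))

def eta(z):
    # eta_mu = psi_mu / (1 + psi_mu)
    p = psi(z)
    return p / (1 + p)

def herglotz_star(z):
    # H_{mu_*}(z): mu_* is the pushforward of mu under t -> 1/t = conj(t),
    # so its atoms sit at conj(xi_k)
    return np.sum(w * (np.conj(xi) + z) / (np.conj(xi) - z))

rng = np.random.default_rng(0)
for _ in range(200):
    z = 0.9 * rng.uniform() * np.exp(2j * np.pi * rng.uniform())
    H = herglotz_star(z)
    assert abs(H - (1 + 2 * psi(z))) < 1e-10          # H_{mu_*} = 1 + 2 psi_mu
    assert abs(H - (1 + eta(z)) / (1 - eta(z))) < 1e-10
    assert abs(eta(z)) < 1                            # eta_mu : D -> D
assert eta(0) == 0
```

The same check passes for any choice of atoms and probability weights, since the identities only use $\xi\overline{\xi}=1$ for $\xi\in\mathbb{T}$.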
The $\eta$-transform is used in the description of free multiplicative convolution on the subset $\mathcal{P}_{\mathbb{T}}^{*}$ of $\mathcal{P}_{\mathbb{T}}$, consisting of those measures $\mu$ with the property that $\int_{\mathbb{T}}t\,d\mu(t)\ne0$. If $\mu\in\mathcal{P}_{\mathbb{T}}^{*}$, we have $\eta_{\mu}'(0)=\int_{\mathbb{T}}t\,d\mu(t)\ne0$, and thus $\eta_{\mu}$ has an inverse $\eta_{\mu}^{\langle-1\rangle}$ that is a convergent power series in a neighborhood of zero. The free multiplicative convolution of two measures $\mu_{1},\mu_{2}\in\mathcal{P}_{\mathbb{T}}^{*}$ is characterized by the identity (\ref{eq:defining boxtimes}) that is now true in some neighborhood of zero. The following theorem is a reformulation of the analytic subordination from \cite{Bi-free inc}. \begin{thm} \label{thm:subordination by Bi}For every $\mu_{1},\mu_{2}\in\mathcal{P}_{\mathbb{T}}^{*}$, there exist unique $\rho_{1},\rho_{2}\in\mathcal{P}_{\mathbb{T}}^{*}$ such that \[ z\eta_{\mu_{1}}(\eta_{\rho_{1}}(z))=z\eta_{\mu_{2}}(\eta_{\rho_{2}}(z))=\eta_{\rho_{1}}(z)\eta_{\rho_{2}}(z),\quad z\in\mathbb{D}. \] Moreover, we have $\eta_{\mu_{1}\boxtimes\mu_{2}}=\eta_{\mu_{1}}\circ\eta_{\rho_{1}}$. If $\mu_{1},\mu_{2}$ are nondegenerate, then so are $\rho_{1},\rho_{2}$. \end{thm} The concept of $\boxtimes$-infinite divisibility for measures in $\mathcal{P}_{\mathbb{T}}$ is defined as for $\mathcal{P}_{\mathbb{R}_{+}}$. The normalized arclength measure $m=d\theta/2\pi$ is the only $\boxtimes$-infinitely divisible measure in $\mathcal{P}_{\mathbb{T}}\backslash\mathcal{P}_{\mathbb{T}}^{*}$. All other $\boxtimes$-infinitely divisible measures are described by results of \cite{vo-mul,B-V-levy}. Suppose that $\mu\in\mathcal{P}_{\mathbb{T}}^{*}$ is $\boxtimes$-infinitely divisible. 
Then the function $\eta_{\mu}^{\langle-1\rangle}$ has an analytic continuation $\Phi$ to $\mathbb{D}$ satisfying \begin{equation} \Phi(0)=0,\;|\Phi(z)|\ge|z|,\quad z\in\mathbb{D}.\label{eq:Phi on D} \end{equation} Conversely, every analytic function $\Phi:\mathbb{D}\to\mathbb{C}$ that satisfies (\ref{eq:Phi on D}) is the analytic continuation of $\eta_{\mu}^{\langle-1\rangle}$ for some $\boxtimes$-infinitely divisible measure $\mu\in\mathcal{P}_{\mathbb{T}}^{*}$. Of course, the identity \[ \Phi(\eta_{\mu}(z))=z \] extends by analytic continuation to arbitrary $z\in\mathbb{D},$ and thus $\eta_{\mu}$ is a conformal map if $\mu$ is $\boxtimes$-infinitely divisible. Some further information about this case is summarized below (see \cite{B-B-IMRN}). \begin{prop} \label{prop:starlike stuff} Let $\mu\in\mathcal{P}_{\mathbb{T}}^{*}$ be $\boxtimes$-infinitely divisible, and let $\Phi:\mathbb{D}\to\mathbb{C}$ be the analytic continuation of $\eta_{\mu}^{\langle-1\rangle}.$ Then\emph{:} \begin{enumerate} \item The domain $\Omega_{\mu}=\eta_{\mu}(\mathbb{D})$ is starlike relative to the origin. \item The function $\eta_{\mu}$ extends to a homeomorphism of $\overline{\mathbb{D}}$ onto $\overline{\Omega_{\mu}}.$ \item We have $\Omega_{\mu}=\{z\in\mathbb{D}:|\Phi(z)|<1\}$. \item If $|\eta_{\mu}(t)|<1$ for some $t\in\mathbb{T}$ then $\eta_{\mu}$ continues analytically to a neighborhood of $t$. \end{enumerate} \end{prop} The functions $\Phi$ that satisfy (\ref{eq:Phi on D}) can be written as \begin{equation} \Phi(z)=\gamma z\exp H_{\sigma}(z),\quad z\in\mathbb{D},\label{eq:Levi-Hincin T} \end{equation} where $\gamma\in\mathbb{T}$ and $\sigma$ is a finite, positive Borel measure on $\mathbb{T}$. The parameters $(\gamma,\sigma)$ are uniquely determined by $\Phi$ (or by $\mu$) and (\ref{eq:Levi-Hincin T}) is an analog of the L\'evy-Hin\v cin formula in classical probability. (Recall that $H_{\sigma}$ denotes the Herglotz integral of $\sigma$.) 
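Since $\Re H_{\sigma}\ge0$ on $\mathbb{D}$ for any finite positive measure $\sigma$, every function of the form (\ref{eq:Levi-Hincin T}) automatically satisfies (\ref{eq:Phi on D}). The following Python sketch verifies this at random points of $\mathbb{D}$; it is an illustration only, and the two-atom $\sigma$ and the value of $\gamma$ are arbitrary choices.

```python
import numpy as np

# Hypothetical Levy-Hincin data: gamma in T, sigma = 0.7*delta_{t1} + 0.4*delta_{t2}.
gamma = np.exp(0.3j)
t_atoms = np.exp(1j * np.array([1.0, 3.5]))
masses = np.array([0.7, 0.4])

def H_sigma(z):
    # Herglotz integral of sigma
    return np.sum(masses * (t_atoms + z) / (t_atoms - z))

def Phi(z):
    # Phi(z) = gamma * z * exp(H_sigma(z))
    return gamma * z * np.exp(H_sigma(z))

rng = np.random.default_rng(1)
for _ in range(300):
    z = 0.95 * np.sqrt(rng.uniform()) * np.exp(2j * np.pi * rng.uniform())
    assert np.real(H_sigma(z)) >= 0        # Herglotz integrals have Re >= 0 on D
    assert abs(Phi(z)) >= abs(z) - 1e-12   # the inequality in (eq:Phi on D)
```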
This representation of $\Phi$, along with part (3) of Proposition \ref{prop:starlike stuff}, allows us to give an alternative description of $\eta_{\mu}|\mathbb{T}$. We have \[ |\Phi(r\zeta)|=r\exp\Re H_{\sigma}(r\zeta)=r\exp\left[\int_{\mathbb{T}}\frac{1-r^{2}}{|t-r\zeta|^{2}}\,d\sigma(t)\right],\quad r\in(0,1),\zeta\in\mathbb{T}, \] and thus \[ \log|\Phi(r\zeta)|=[1-T(r\zeta)]\log r, \] where \[ T(r\zeta)=\frac{r^{2}-1}{\log r}\int_{\mathbb{T}}\frac{d\sigma(t)}{|t-r\zeta|^{2}},\quad r\in(0,1),\zeta\in\mathbb{T}. \] The map $T(r\zeta)$ is an increasing, continuous function of $r$ for fixed $\zeta$ (see \cite[Lemma 3.1]{Zhong}). We also set \[ T(\zeta)=\lim_{r\uparrow1}T(r\zeta)=2\int_{\mathbb{T}}\frac{d\sigma(t)}{|t-\zeta|^{2}}. \] We conclude that $r\zeta\in\Omega_{\mu}$ precisely when $T(r\zeta)<1.$ Since $\Omega_{\mu}$ is starlike relative to $0$, it follows that, for each fixed $\zeta\in\mathbb{T}$, the set \[ \{r\in(0,1):T(r\zeta)<1\} \] is an interval $(0,R(\zeta))$. We summarize some of the properties of the function $R$ below. \begin{lem} \label{lem:rho is cont}\cite{huang-zhong, huang-wang, Zhong} Suppose that $\mu\in\mathcal{P}_{\mathbb{T}}^{*}$ is $\boxtimes$-infinitely divisible. With the notation introduced above, we have\emph{:} \begin{enumerate} \item The function $R$ is continuous and $R(e^{i\theta})$ is continuously differentiable on $\{\theta\in \mathbb{R}:R(e^{i\theta})<1\}$. \item $\Omega_{\mu}=\{r\zeta:\zeta\in\mathbb{T},0\le r<R(\zeta)\}$ and $\partial\Omega_{\mu}=\{R(\zeta)\zeta:\zeta\in\mathbb{T}\}$. \item $R(\zeta)<1$ if and only if $T(\zeta)>1$, in which case $T(R(\zeta)\zeta)=1$. The inequality $T(R(\zeta)\zeta)\le1$ holds for every $\zeta\in\mathbb{T}$. \end{enumerate} \end{lem} The following result is analogous to Lemma \ref{lem:continuity of u (half line)}. A similar estimate could be derived from \cite[(4.20)]{B-B-IMRN}. \begin{lem} \label{lem:equicontinuity circle} Suppose that $\mu\in\mathcal{P}_{\mathbb{T}}^{*}$ is $\boxtimes$-infinitely divisible.
With the notation introduced above, we have \[ |dH_{\sigma}/dz|\le8\sigma(\mathbb{T})+2,\quad z\in\Omega_{\mu}. \] \end{lem} \begin{proof} Direct calculation yields \begin{align*} |dH_{\sigma}/dz| & =\left|\int_{\mathbb{T}}\frac{2t}{(t-z)^{2}}\,d\sigma(t)\right|\le2\int_{\mathbb{T}}\frac{d\sigma(t)}{|t-z|^{2}}. \end{align*} Since $T(z)\le1$ for $z\in\Omega_{\mu}$, we have \[ \int_{\mathbb{T}}\frac{d\sigma(t)}{|t-z|^{2}}\le\frac{\log|z|}{|z|^{2}-1}<\frac{1}{2|z|}\leq1, \] if $|z|\ge1/2$. If $|z|<1/2$, we have $|t-z|\ge1/2$ for $t\in\mathbb{T}$, and the estimate \[ \int_{\mathbb{T}}\frac{d\sigma(t)}{|t-z|^{2}}\le4\sigma(\mathbb{T}) \] yields the desired result. \end{proof} The discussion of convolution powers in $\mathcal{P}_{\mathbb{T}}^{*}$ is best carried out for real exponents rather than just integer ones. We review this construction from \cite{B-B-IMRN} as follows. Suppose that $\nu\in\mathcal{P}_{\mathbb{T}}^{*}$ satisfies $\int_{\mathbb{T}}t\,d\nu(t)>0$ and $\eta_{\nu}$ has no zeros in $\mathbb{D}\backslash\{0\}$. Fix $k\in(1,+\infty)$ and set \[ \Phi(z)=z\left(\frac{z}{\eta_{\nu}(z)}\right)^{k-1},\quad z\in\mathbb{D}. \] We have \[ \eta'_{\nu}(0)=\int_{\mathbb{T}}t\,d\nu(t)>0, \] and the power above is chosen such that $\Phi'(0)>0$. The Schwarz lemma shows that $|\Phi(z)|\ge|z|$ for $z\in\mathbb{D},$ and therefore there exists a $\boxtimes$-infinitely divisible measure $\mu\in\mathcal{P}_{\mathbb{T}}^{*}$ such that $\Phi$ is an analytic continuation of $\eta_{\mu}^{\langle-1\rangle}$. We can then \emph{define} the convolution power $\nu^{\boxtimes k}$ by setting \[ \eta_{\nu^{\boxtimes k}}=\eta_{\nu}\circ\eta_{\mu}. \] If $k$ is an integer, the measure $\nu^{\boxtimes k}$ is in fact equal to the free multiplicative convolution of $k$ copies of $\nu$.
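For a concrete (hypothetical) instance of this construction, one can take $\eta_{\nu}(z)=z\,e^{(z-1)/2}$: it maps $\mathbb{D}$ into $\mathbb{D}$, fixes $0$, satisfies $\eta_{\nu}'(0)=e^{-1/2}>0$, and has no zeros in $\mathbb{D}\backslash\{0\}$. Then $z/\eta_{\nu}(z)=e^{-(z-1)/2}$ is zero-free, $\Phi(z)=z\,e^{-(k-1)(z-1)/2}$, and the Schwarz-lemma inequality $|\Phi(z)|\ge|z|$ can be confirmed numerically, as in this sketch (an illustration only):

```python
import numpy as np

k = 3.5  # a non-integer exponent, k > 1

def eta_nu(z):
    # A zero-free eta-transform: maps D to D, eta_nu(0) = 0, eta_nu'(0) = e^{-1/2} > 0
    return z * np.exp((z - 1) / 2)

def Phi(z):
    # Phi(z) = z * (z / eta_nu(z))**(k-1); since z/eta_nu(z) = exp(-(z-1)/2) is
    # zero-free with argument in (-pi, pi), the principal power gives this closed form
    return z * np.exp(-(k - 1) * (z - 1) / 2)

rng = np.random.default_rng(2)
for _ in range(300):
    z = (0.05 + 0.9 * rng.uniform()) * np.exp(2j * np.pi * rng.uniform())
    assert abs(eta_nu(z)) < 1                                    # eta_nu : D -> D
    assert abs(Phi(z)) >= abs(z)                                 # |Phi(z)| >= |z| on D
    assert abs(Phi(z) - z * (z / eta_nu(z)) ** (k - 1)) < 1e-10  # closed form agrees

# eta_nu'(0) = e^{-1/2}, so int_T t dnu(t) > 0 as required
h = 1e-6
assert abs(eta_nu(h) / h - np.exp(-0.5)) < 1e-5
```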
The analog of (\ref{eq:eta of subordination, halflin}) also holds in this context, but it must be written so that the powers make sense: \[ \left(\frac{\eta_{\mu}(z)}{z}\right)^{k}=\left(\frac{\eta_{\nu^{\boxtimes k}}(z)}{z}\right)^{k-1},\quad z\in\mathbb{D}; \] equivalently, \[ \eta_{\nu^{\boxtimes k}}(z)=\eta_{\mu}(z)\left(\frac{\eta_{\mu}(z)}{z}\right)^{1/(k-1)},\quad z\in\mathbb{D}. \] As in the real case, the function $\eta_{\nu^{\boxtimes k}}$ extends continuously to the closure $\overline{\mathbb{D}}$ \cite{B-B-IMRN}. This construction of real powers fails if $\eta_{\nu}(z)=0$ for some $z\in\mathbb{D}\backslash\{0\}$. Suppose, however, that $\int_{\mathbb{T}}t\,d\nu(t)>0$. The $\eta$-transform of the measure $\nu^{\boxtimes2}=\nu\boxtimes\nu$ has no zeros other than $0$, and therefore one can define \[ \nu^{\boxtimes k}=(\nu\boxtimes\nu)^{\boxtimes k/2} \] provided that $k>2$. These considerations can be carried out for arbitrary measures in $\mathcal{P}_{\mathbb{T}}^{*}$ by choosing an arbitrary determination of the power $(z/\eta_{\nu}(z))^{k-1}$. If $k$ is not an integer, there may be infinitely many versions of $\nu^{\boxtimes k}$, but each of them can be obtained from the others by appropriate rotations. \section{Superconvergence in $\mathcal{P}_{\mathbb{T}}$\label{sec:Superconvergence-in T}} The weak convergence of $\boxtimes$-infinitely divisible measures is equivalent to certain convergence properties of the $\eta$-transforms and of their inverses. We record the result from \cite[Proposition 2.9]{B-V-levy}. The equivalence between (1) and (5) below is implicit in the proof of Theorem 4.3 from \cite{BW-mult-laws}.
\begin{prop} \cite{B-V-levy,BW-mult-laws} Suppose that $\mu$ and $\{\mu_{n}\}_{n\in\mathbb{N}}$ are nondegenerate $\boxtimes$-infinitely divisible measures in $\mathcal{P}_{\mathbb{T}}^{*}.$ Denote by $\Phi$ and $\{\Phi_{n}\}_{n\in\mathbb{N}}$ the analytic continuations to $\mathbb{D}$ of the functions $\eta_{\mu}^{\langle-1\rangle}$ and $\{\eta_{\mu_{n}}^{\langle-1\rangle}\}_{n\in\mathbb{N}}$, and represent these functions as in \emph{(\ref{eq:Levi-Hincin T}), }using $(\gamma_{n},\sigma_{n})$ for the parameters corresponding to $\mu_{n}$. The following conditions are equivalent\emph{:} \begin{enumerate} \item The sequence $\{\mu_{n}\}_{n\in\mathbb{N}}$ converges weakly to $\mu$. \item The sequence $\{\eta_{\mu_{n}}\}_{n\in\mathbb{N}}$ converges pointwise to $\eta_{\mu}$ on $\mathbb{D}$. \item The sequence $\{\eta_{\mu_{n}}\}_{n\in\mathbb{N}}$ converges to $\eta_{\mu}$ uniformly on the compact subsets of $\mathbb{D}.$ \item The sequence $\{\Phi_{n}\}_{n\in\mathbb{N}}$ converges to $\Phi$ uniformly on the compact subsets of $\mathbb{D}$. \item The sequence $\{\gamma_{n}\}_{n\in\mathbb{N}}$ converges to $\gamma$ and the sequence $\{\sigma_{n}\}_{n\in\mathbb{N}}$ converges weakly to $\sigma$. \end{enumerate} \end{prop} In preparation for the analog of Lemma \ref{lem:f_n tends to f etc}, we suppose that $\mu,\mu_{n},\Phi,\Phi_{n}$ are as in the preceding result, and we consider the continuous functions $R,R_{n}:\mathbb{T}\to(0,1]$ such that \[ \Omega_{\mu}=\{rt:t\in\mathbb{T},r\in[0,R(t))\},\ \Omega_{\mu_{n}}=\{rt:t\in\mathbb{T},r\in[0,R_{n}(t))\}. \] We also consider the homeomorphisms $h,h_{n}:\mathbb{T}\to\mathbb{T}$ defined by \[ h(t)=\frac{\eta_{\mu}(t)}{|\eta_{\mu}(t)|},\ h_{n}(t)=\frac{\eta_{\mu_{n}}(t)}{|\eta_{\mu_{n}}(t)|},\quad t\in\mathbb{T},n\in\mathbb{N}. 
\] The existence of these (orientation preserving) homeomorphisms is a consequence of the fact that $\Omega_{\mu_{n}}$ is starlike with respect to $0$, and of the fact that $\eta_{\mu_{n}}$ extends to a homeomorphism of $\overline{\mathbb{D}}$ onto $\overline{\Omega_{\mu_{n}}}$. Observe that we have \[ \eta_{\mu}(t)=R(h(t))h(t),\ \eta_{\mu_{n}}(t)=R_{n}(h_{n}(t))h_{n}(t),\quad t\in\mathbb{T},n\in\mathbb{N}. \] \begin{lem} \label{lem:convergence of quantities for T}With the above notation, suppose that the sequence $\{\mu_{n}\}_{n\in\mathbb{N}}$ converges weakly to $\mu$. Then\emph{:} \begin{enumerate} \item The sequence $\{R_{n}\}_{n\in\mathbb{N}}$ converges to $R$ uniformly on $\mathbb{T}$. \item The sequence $\{h_{n}\}_{n\in\mathbb{N}}$ converges to $h$ uniformly on $\mathbb{T}$. \item The sequence of inverses $\{h_{n}^{\langle-1\rangle}\}_{n\in\mathbb{N}}$ converges to $h^{\langle-1\rangle}$ uniformly on $\mathbb{T}$. \item The sequence $\{R_{n}\circ h_{n}\}_{n\in\mathbb{N}}$ converges to $R\circ h$ uniformly on $\mathbb{T}$. \item The sequence $\{\eta_{\mu_{n}}(t)\}_{n\in\mathbb{N}}$ converges to $\eta_{\mu}(t)$ uniformly on $\mathbb{T}$. \end{enumerate} \end{lem} \begin{proof} (1) Since $\mathbb{T}$ is compact, it suffices to show that, for every $t_{0}\in\mathbb{T}$ and for every $\varepsilon>0$ there exist $N\in\mathbb{N}$ and an arc $V\subset\mathbb{T}$ containing $t_{0}$ in its interior such that \[ R(t)-\varepsilon<R_{n}(t)<R(t)+\varepsilon,\quad t\in V,n\ge N. \] Fix $t_{0}$ and $\varepsilon$, and choose a compact neighborhood $V$ of $t_{0}$ such that $|R(t)-R(t_{0})|<\varepsilon/2$ for $t\in V$. Thus, \[ |\Phi((R(t_{0})-\varepsilon/2)t)|<1,\quad t\in V. \] The uniform convergence of $\Phi_{n}$ to $\Phi$ on the set $\{(R(t_{0})-\varepsilon/2)t:t\in V\}$ shows that there exists $N_{1}$ such that \[ |\Phi_{n}((R(t_{0})-\varepsilon/2)t)|<1,\quad t\in V,n\ge N_{1}, \] and thus \[ R_{n}(t)>R(t_{0})-\frac{\varepsilon}{2}>R(t)-\varepsilon,\quad t\in V,n\ge N_{1}.
\] If $R(t_{0})+\varepsilon/2\ge1,$ the inequality $R_{n}(t)<R(t)+\varepsilon$ is automatically satisfied for $t\in V$. If $R(t_{0})+\varepsilon/2<1,$ we observe that \[ |\Phi((R(t_{0})+\varepsilon/2)t)|>1,\quad t\in V, \] and we choose $N_{2}$ such that \[ |\Phi_{n}((R(t_{0})+\varepsilon/2)t)|>1,\quad t\in V,n\ge N_{2}. \] Thus, \[ R_{n}(t)<R(t_{0})+\frac{\varepsilon}{2}<R(t)+\varepsilon,\quad t\in V,n\ge N_{2}, \] so it suffices to choose $N=\max\{N_{1},N_{2}\}$. (3) It suffices to prove pointwise convergence. We observe that $h_{n}^{\langle-1\rangle}(t)=\Phi_{n}(R_{n}(t)t)$. Since the measures $\sigma_{n}$ converge weakly, the sequence $\{\sigma_{n}(\mathbb{T})\}_{n\in\mathbb{N}}$ is bounded. Lemma \ref{lem:equicontinuity circle} shows that the restrictions $\Phi_{n}|\overline{\Omega_{\mu_{n}}}$ are equicontinuous. These facts, along with (1), imply the desired pointwise convergence. (2) This follows directly from (3). Then (4) and (5) follow as in the proof of Lemma \ref{lem:f_n tends to f etc}. \end{proof} As in the case of $\mathbb{R}_{+}$, the $\eta$-transform of a $\boxtimes$-infinitely divisible measure $\mu\in\mathcal{P}_{\mathbb{T}}^{*}$ may take the value $1$ at most once on $\mathbb{T}.$ If $\eta_{\mu}(t)=1,$ we write $D_{\mu}=\{\overline{t}\},$ otherwise $D_{\mu}=\varnothing$. The measure $\mu$ is absolutely continuous relative to arclength measure on $\mathbb{T}\backslash D_{\mu}$. We can now use the preceding result and (\ref{eq:density from eta, circle}) to prove the analog of Proposition \ref{prop:unif convergence of inf div densities pos line} for the circle. The details are left to the interested reader. \begin{prop} \label{prop:uniform conv circle}Let $\mu$ and $\{\mu_{n}\}_{n\in\mathbb{N}}$ be $\boxtimes$-infinitely divisible measures in $\mathcal{P}_{\mathbb{T}}^{*}$ such that $\mu_{n}$ converges weakly to $\mu$. Let $K\subset\mathbb{T}\backslash D_{\mu}$ be an arbitrary compact set.
Then $D_{\mu_{n}}\subset\mathbb{T}\backslash K$ for sufficiently large $n$, and the densities $p_{\mu_{n}}$ of $\mu_{n}$ relative to arclength measure converge to $p_{\mu}$ uniformly on $K$. If $D_{\mu}=\varnothing$, we can take $K=\mathbb{T}$. \end{prop} Finally, we derive a superconvergence result. \begin{thm} \label{thm:super T}Let $\{k_{n}\}_{n\in\mathbb{N}}\subset[2,+\infty)$ be a sequence with limit $+\infty$, and let $\mu$ and $\{\nu_{n}\}_{n\in\mathbb{N}}$ be measures in $\mathcal{P}_{\mathbb{T}}^{*}$ such that $\mu$ is $\boxtimes$-infinitely divisible and $\int_{\mathbb{T}}t\,d\nu_{n}(t)>0$ for every $n\in\mathbb{N}$. Suppose that the sequence $\{\nu_{n}^{\boxtimes k_{n}}\}_{n\in\mathbb{N}}$ converges weakly to $\mu$. Let $K\subset\mathbb{T}\backslash D_{\mu}$ be an arbitrary compact set. Then $\nu_{n}^{\boxtimes k_{n}}$ is absolutely continuous on $K$ for sufficiently large $n$, and the densities $p_{n}$ of $\nu_{n}^{\boxtimes k_{n}}$ relative to arclength measure converge to $p_{\mu}$ uniformly on $K$. If $D_{\mu}=\varnothing$, we can take $K=\mathbb{T}$. \end{thm} \begin{proof} We first replace $\nu_{n}$ by $\nu_{n}\boxtimes\nu_{n}$ and $k_{n}$ by $k_{n}/2$. After this substitution, we may assume that $\eta_{\nu_{n}}$ does not vanish on $\mathbb{D}\backslash\{0\}$, so that the convolution powers can be calculated as in Section \ref{sec:Free-mutiplicative-convolution on T}, using analytic subordination. Thus, there exist $\boxtimes$-infinitely divisible measures $\mu_{n}\in\mathcal{P}_{\mathbb{T}}^{*}$ satisfying the equations \begin{equation} \eta_{\nu_{n}^{\boxtimes k_{n}}}(z)=\eta_{\mu_{n}}(z)\left(\frac{\eta_{\mu_{n}}(z)}{z}\right)^{1/(k_{n}-1)},\quad z\in\overline{\mathbb{D}},\;n\in\mathbb{N},\label{eq:formula} \end{equation} and \[ \eta_{\nu_{n}^{\boxtimes k_{n}}}=\eta_{\nu_{n}}\circ\eta_{\mu_{n}},\quad n\in\mathbb{N}.
\] As in the case of $\mathbb{R}_{+}$, the measures $\nu_{n}$ necessarily converge to $\delta_{1}$ as $n\to\infty,$ and thus $\eta_{\nu_{n}}(z)$ converges to $z$ uniformly for $z$ in a compact subset of $\mathbb{D}.$ The inverses $\eta_{\nu_{n}}^{\langle-1\rangle}$ converge uniformly to the identity function for $z$ in a neighborhood of $0$, and therefore \[ \eta_{\mu_{n}}=\eta_{\nu_{n}}^{\langle-1\rangle}\circ\eta_{\nu_{n}^{\boxtimes k_{n}}} \] converge uniformly on a neighborhood of $0$ to $\eta_{\mu}.$ We conclude that the sequence $\{\mu_{n}\}_{n\in\mathbb{N}}$ converges weakly to $\mu$. Lemma \ref{lem:convergence of quantities for T} implies now that the functions $\eta_{\mu_{n}}$ converge to $\eta_{\mu}$ uniformly on $\overline{\mathbb{D}},$ and therefore the functions \[ \left(\frac{\eta_{\mu_{n}}(z)}{z}\right)^{1/(k_{n}-1)},\quad z\in\overline{\mathbb{D}}, \] converge uniformly to $1$. Formula (\ref{eq:formula}) implies now that the sequence $\{\eta_{\nu_{n}^{\boxtimes k_{n}}}\}_{n\in\mathbb{N}}$ converges to $\eta_{\mu}$ uniformly on $\overline{\mathbb{D}}.$ The desired conclusion is now obtained easily by applying (\ref{eq:density from eta, circle}) to these measures. \end{proof} \section{Cusp behavior in $\mathcal{P}_{\mathbb{T}}$\label{sec:Cusp-behavior-in T}} This section is the counterpart of Section \ref{sec:cusps-in R_+} for $\mathbb{T}$. Thus, we consider the qualitative behavior of a convolution $\mu_{1}\boxtimes\mu_{2}$, where $\mu_{1},\mu_{2}\in\mathcal{P}_{\mathbb{T}}^{*}$ are nondegenerate measures and $\mu_{2}$ is $\boxtimes$-infinitely divisible. Of course, all $\boxtimes$-infinitely divisible measures in $\mathcal{P}_{\mathbb{T}}$ belong to $\mathcal{P}_{\mathbb{T}}^{*}$, with the exception of the normalized arclength measure $m$. For this measure, we have $\mu\boxtimes m=m$, $\mu\in\mathcal{P}_{\mathbb{T}}$, so $m$ is the analog of the measure $\delta_{0}\in\mathcal{P}_{\mathbb{R}_{+}}$, and indeed it has the same moment sequence. 
We start with the analog of Lemma \ref{lem:omega is inf-div}. \begin{lem} \label{lem:omega is inf-div-1} Let $\mu_{1},\mu_{2}\in\mathcal{P}_{\mathbb{T}}^{*}$ be such that $\mu_{2}$ is $\boxtimes$-infinitely divisible, and let $\rho_{1},\rho_{2}\in\mathcal{P}_{\mathbb{T}}^{*}$ be given by \textup{Theorem} \emph{\ref{thm:subordination by Bi}}. Then $\rho_{1}$ is $\boxtimes$-infinitely divisible. \end{lem} \begin{proof} Let $\Phi$, given by (\ref{eq:Levi-Hincin T}), be the analytic continuation of $\eta_{\mu_{2}}^{\langle-1\rangle}$ to $\mathbb{D}$. Thus, \[ \Phi(z)=zF(z),\quad z\in\mathbb{D}, \] where $F$ satisfies $|F(z)|\ge1$ for $z\in\mathbb{D}.$ Then the analog of (\ref{eq:defining boxtimes}) for $\mathcal{P}_{\mathbb{T}}^{*}$ can be written as \[ F(z)\eta_{\mu_{1}}^{\langle-1\rangle}(z)=\eta_{\mu_{1}\boxtimes\mu_{2}}^{\langle-1\rangle}(z), \] and applying this equality with $\eta_{\mu_{1}}(z)$ in place of $z$, we obtain \[ F(\eta_{\mu_{1}}(z))z=\eta_{\mu_{1}\boxtimes\mu_{2}}^{\langle-1\rangle}(\eta_{\mu_{1}}(z))=\eta_{\rho_{1}}^{\langle-1\rangle}(z) \] for $z$ in some neighborhood of zero. The lemma follows because the function \[ G(z)=F(\eta_{\mu_{1}}(z)),\quad z\in\mathbb{D}, \] also satisfies the inequality $|G(z)|\ge1$ for $z\in\mathbb{D}$. \end{proof} With the notation of the preceding lemma, we recall that the domain \[ \Omega_{\rho_{1}}=\eta_{\rho_{1}}(\mathbb{D}) \] can be described as \[ \Omega_{\rho_{1}}=\{rt:t\in\mathbb{T},0\le r<R(t)\} \] for some continuous function $R:\mathbb{T}\to(0,1]$, and that $\eta_{\rho_{1}}$ extends to a homeomorphism of $\overline{\mathbb{D}}$ onto $\overline{\Omega_{\rho_{1}}}$. Using the analytic continuation \[ \Psi(z)=zG(z),\quad z\in\mathbb{D}, \] of $\eta_{\rho_{1}}^{\langle-1\rangle}$, we see that the map \[ \Psi|\partial\Omega_{\rho_{1}},\quad \partial\Omega_{\rho_{1}}=\{R(t)t:t\in\mathbb{T}\}, \] is a homeomorphism from $\partial\Omega_{\rho_{1}}$ onto $\mathbb{T}$.
The density $p_{\mu_{1}\boxtimes\mu_{2}}$ of $\mu_{1}\boxtimes\mu_{2}$, relative to arclength measure $2\pi dm$ on $\mathbb{T}$, is calculated using the formula \begin{equation} p_{\mu_{1}\boxtimes\mu_{2}}(\xi)=\begin{cases} \frac{1}{2\pi}\Re\frac{1+\eta_{\mu_{1}}(R(t)t)}{1-\eta_{\mu_{1}}(R(t)t)}, & \text{if }\xi=1/\Psi(R(t)t)\text{ and }R(t)<1,\\ 0, & \text{if }\xi=1/\Psi(R(t)t)\text{ and }R(t)=1. \end{cases}\label{eq:density boxtimes T} \end{equation} As noted earlier, this density is real analytic at all points where it is nonzero. Using the Herglotz formula for analytic functions with a positive real part, we write the function $F$ above as \[ F(z)=\gamma\exp(H_{\sigma}(z)),\quad z\in\mathbb{D}, \] where $|\gamma|=1$ and $\sigma$ is a finite, positive Borel measure on $\mathbb{T}$. The appropriate analog of the semicircular measure is obtained when $\sigma$ is a point mass at $1\in\mathbb{T},$ that is, \[ F(z)=\gamma\exp\left[\beta\frac{1+z}{1-z}\right],\quad z\in\mathbb{D}, \] for some $\gamma\in\mathbb{T}$ and $\beta>0$. The following proposition examines the density of $\mu_{1}\boxtimes\mu_{2}$ when $\mu_{2}$ is one of these measures. (The formula (\ref{eq:p vs R}) below also appeared in \cite{Zhong}.) We use the notation $p'$ for the derivative $dp(e^{i\theta})/d\theta$ if $p$ is a differentiable function defined on some open subset of $\mathbb{T}$. \begin{prop} \label{prop:boxtimes convo with semisemi-1} Suppose that $\mu_{1},\mu_{2}\in\mathcal{P}_{\mathbb{T}}^{*}$ are nondegenerate measures, and that $\mu_{2}$ is such that \[ \gamma z\exp\left[\beta\frac{1+z}{1-z}\right],\quad z\in\mathbb{D}, \] is an analytic continuation of $\eta_{\mu_{2}}^{\langle-1\rangle}$ for some $\gamma\in\mathbb{T}$ and $\beta\in(0,+\infty)$. Let $p_{\mu_{1}\boxtimes\mu_{2}}$ denote the density of $\mu_{1}\boxtimes\mu_{2}$ relative to the arclength measure $2\pi\,dm$.
Then\emph{:} \begin{enumerate} \item $\left|p_{\mu_{1}\boxtimes\mu_{2}}'(\xi)\right|p_{\mu_{1}\boxtimes\mu_{2}}(\xi)^{2}\le 7/(8\pi^{3}\beta^{3})$ for every $\xi\in\mathbb{T}$ such that $0<p_{\mu_{1}\boxtimes\mu_{2}}(\xi)\le \log2/(2\pi\beta)$. \item $\left|p_{\mu_{1}\boxtimes\mu_{2}}'(\xi)\right|\le7/\pi\beta$ for every $\xi\in\mathbb{T}$ such that $p_{\mu_{1}\boxtimes\mu_{2}}(\xi)\ge\log2/(2\pi\beta)$. \item If $I\subset\mathbb{T}$ is an arc with one endpoint $\xi_{0}$, $p_{\mu_{1}\boxtimes\mu_{2}}(\xi)>0$ for $\xi\in I$, and $p_{\mu_{1}\boxtimes\mu_{2}}(\xi_{0})=0$, then \[ p_{\mu_{1}\boxtimes\mu_{2}}(\xi)\le\frac{2}{\pi\beta}|\xi-\xi_{0}|^{1/3} \] for $\xi\in I$ close to $\xi_{0}$. \end{enumerate} \end{prop} \begin{proof} Part (3) follows from (1) by integration since $\sqrt[3]{21/4}<2$ and $\ell(\xi,\xi_{0})<2|\xi-\xi_{0}|$ if $\xi$ is close to $\xi_{0}$; here $\ell(\xi,\xi_{0})$ denotes the length of the (short) arc joining $\xi$ and $\xi_{0}$. As seen in the preceding lemma, $\eta_{\rho_{1}}^{\langle-1\rangle}$ has the analytic continuation \[ \Psi(z)=zG(z)=F(\eta_{\mu_{1}}(z))=\gamma z\exp\left[\beta u(z)\right],\quad z\in\mathbb{D}, \] where \begin{align*} u(z)=\frac{1+\eta_{\mu_{1}}(z)}{1-\eta_{\mu_{1}}(z)} & =1+2\psi_{\mu_{1}}(z)\\ & =\int_{\mathbb{T}}\left[1+\frac{2\xi z}{1-\xi z}\right]\,d\mu_{1}(\xi)\\ & =\int_{\mathbb{T}}\left[\frac{1+\xi z}{1-\xi z}\right]\,d\mu_{1}(\xi)\\ & =\int_{\mathbb{T}}\left[\frac{\xi+z}{\xi-z}\right]\,d\mu_{1}(1/\xi) \end{align*} is precisely the Herglotz integral of the measure $d\mu_{1}(1/\xi)=d\mu_{1}(\overline{\xi})$. Thus, when the boundary of the domain $\Omega_{\rho_{1}}$ is parametrized as $z(t)=R(t)t$, $t\in\mathbb{T},$ we have \begin{equation} \beta\int_{\mathbb{T}}\frac{d\mu_{1}(\overline{\xi})}{|\xi-z(t)|^{2}}=\frac{\log R(t)}{R(t)^{2}-1}\label{eq:boundary of omega (circle)} \end{equation} whenever $R(t)<1$. 
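The first equality in the computation of $u(z)$ above uses the relation $\eta_{\mu_{1}}=\psi_{\mu_{1}}/(1+\psi_{\mu_{1}})$ between the $\eta$-transform and the moment generating function $\psi_{\mu_{1}}(z)=\int_{\mathbb{T}}\frac{\xi z}{1-\xi z}\,d\mu_{1}(\xi)$; indeed,
\[
\frac{1+\eta_{\mu_{1}}(z)}{1-\eta_{\mu_{1}}(z)}=\frac{(1+\psi_{\mu_{1}}(z))+\psi_{\mu_{1}}(z)}{(1+\psi_{\mu_{1}}(z))-\psi_{\mu_{1}}(z)}=1+2\psi_{\mu_{1}}(z).
\]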
(We will use implicitly the easily established inequality \[ \frac{2r\log r}{r^{2}-1}<1, \] valid for $r\in(0,1)$. In fact the function \[ \frac{2r\log r}{r^{2}-1} \] is increasing for $r\in(0,1)$ and it tends to $1$ as $r\to1$.) Setting \[ f(t)=\frac{1}{\Psi(z(t))}, \] for $R(t)<1$, we see from (\ref{eq:density boxtimes T}) and (\ref{eq:boundary of omega (circle)}) that \begin{align} p_{\mu_{1}\boxtimes\mu_{2}}(f(t)) & =\frac{1}{2\pi}\Re\frac{1+\eta_{\mu_{1}}(z(t))}{1-\eta_{\mu_{1}}(z(t))}\nonumber \\ & =\frac{1}{2\pi}\int_{\mathbb{T}}\frac{1-|z(t)|^{2}}{|\xi-z(t)|^{2}}\,d\mu_{1}(\overline{\xi})\label{eq:p vs R}\\ & =\frac{1}{2\pi\beta}\beta\int_{\mathbb{T}}\frac{1-|z(t)|^{2}}{|\xi-z(t)|^{2}}\,d\mu_{1}(\overline{\xi})=\frac{-\log R(t)}{2\pi\beta}.\nonumber \end{align} As in the case of $\mathbb{R}_{+}$, this allows us to use the chain rule for our estimates. We begin with the derivative of $f$, which, because $|f(t)|=1$, can be estimated as \begin{align*} |f'(t)| & =\left|\frac{f'(t)}{f(t)}\right|=\left|\frac{\Psi'(z(t))}{\Psi(z(t))}\right||z'(t)|. \end{align*} Here, $\Psi'$ is the usual complex derivative of $\Psi$, \[ \left|\frac{\Psi'(z)}{\Psi(z)}\right|=\left|\frac{1}{z}+\beta u'(z)\right|=\left|\frac{1}{z}+\beta\int_{\mathbb{T}}\frac{2\xi}{(\xi-z)^{2}}d\mu_{1}(\overline{\xi})\right|, \] so using (\ref{eq:boundary of omega (circle)}) we obtain \begin{align*} \left|\frac{\Psi'(z(t))}{\Psi(z(t))}\right| & =\left|\frac{1}{R(t)t}+\beta\int_{\mathbb{T}}\frac{2\xi}{(\xi-R(t)t)^{2}}d\mu_{1}(\overline{\xi})\right|\\ & =\frac{1}{R(t)}\left|1+\beta\int_{\mathbb{T}}\frac{2\xi R(t)t}{(\xi-R(t)t)^{2}}d\mu_{1}(\overline{\xi})\right|\\ & \ge\frac{1}{R(t)}\left[1-\frac{2R(t)\log R(t)}{R(t)^{2}-1}\right].
\end{align*} For the second factor $|z'(t)|$, we have \[ z'(e^{i\theta})=\frac{d}{d\theta}R(e^{i\theta})e^{i\theta}=[R'(e^{i\theta})+iR(e^{i\theta})]e^{i\theta}, \] and thus \[ |z'(t)|=\sqrt{R(t)^{2}+R'(t)^{2}}. \] Putting these together, we see that \begin{align*} |f'(t)| & \ge\sqrt{1+\left(\frac{R'(t)}{R(t)}\right)^{2}}\left[1-\frac{2R(t)\log R(t)}{R(t)^{2}-1}\right]\\ & \ge\left|\frac{R'(t)}{R(t)}\right|\left[1-\frac{2R(t)\log R(t)}{R(t)^{2}-1}\right]. \end{align*} Since \[ \left|\frac{R'(t)}{R(t)}\right|=|(\log R)'(t)|, \] formula (\ref{eq:p vs R}) yields the estimate \begin{align*} \left|p_{\mu_{1}\boxtimes\mu_{2}}'(f(t))\right| & =\frac{|(\log R)'(t)|}{2\pi\beta|f'(t)|}\\ & \le\frac{1}{2\pi\beta}\frac{1}{1-\frac{2R(t)\log R(t)}{R(t)^{2}-1}}. \end{align*} The inequality $p_{\mu_{1}\boxtimes\mu_{2}}(f(t))>(\log2)/(2\pi\beta)$ amounts to $R(t)<1/2$, and the preceding estimate yields \[ \left|p_{\mu_{1}\boxtimes\mu_{2}}'(f(t))\right|\le\frac{1}{2\pi\beta}\frac{1}{1-\frac{4}{3}\log2}<\frac{7}{\pi\beta}, \] thus verifying (2). Finally, we have \[ \left|p_{\mu_{1}\boxtimes\mu_{2}}'(f(t))\right|p_{\mu_{1}\boxtimes\mu_{2}}(f(t))^{2}\le\frac{1}{(2\pi\beta)^{3}}\frac{\log^{2}R(t)}{1-\frac{2R(t)\log R(t)}{R(t)^{2}-1}} \] and the fact that \[ \frac{\log^{2}r}{1-\frac{2r\log r}{r^{2}-1}} \] is less than $7$ for $r\in(1/2,1)$ yields (1). \end{proof} Next, we state an analog of Lemma \ref{lem:trade a free convolution for another}. The verification is a simple calculation. \begin{lem} \label{lem:trade a free convolution for another-1}Let $\mu_{1},\mu_{2}\in\mathcal{P}_{\mathbb{T}}^{*}$ be two nondegenerate measures such that $\mu_{2}$ is $\boxtimes$-infinitely divisible, and let \[ \Phi(z)=\gamma z\exp H_{\sigma}(z),\quad z\in\mathbb{D}, \] be an analytic continuation of $\eta_{\mu_{2}}^{\langle-1\rangle}$. Assume that $\int_{\mathbb{T}}t\,d\sigma(t)\neq0$.
Denote by $\rho_{1}\in\mathcal{P}_{\mathbb{T}}^{*}$ the $\boxtimes$-infinitely divisible measure such that $\eta_{\rho_{1}}^{\langle-1\rangle}$ has the analytic continuation \[ \Psi(z)=\gamma z\exp\left[\int_{\mathbb{T}}\frac{t+\eta_{\mu_{1}}(z)}{t-\eta_{\mu_{1}}(z)}\,d\sigma(t)\right],\quad z\in\mathbb{D}. \] Set $\beta=\sigma(\mathbb{T})$ and denote by $\nu_{1}\in\mathcal{P}_{\mathbb{T}}^{*}$ the measure satisfying \[ \psi_{\nu_{1}}(z)=\frac{1}{\beta}\int_{\mathbb{T}}\frac{t\eta_{\mu_{1}}(z)}{1-t\eta_{\mu_{1}}(z)}\,d\sigma(\overline{t}),\quad z\in\mathbb{D}, \] and denote by $\nu_{2}\in\mathcal{P}_{\mathbb{T}}^{*}$ the $\boxtimes$-infinitely divisible measure such that $\eta_{\nu_{2}}^{\langle-1\rangle}$ has the analytic continuation \[ \gamma z\exp\left[\beta\frac{1+z}{1-z}\right],\quad z\in\mathbb{D}. \] Then $\eta_{\mu_{1}\boxtimes\mu_{2}}=\eta_{\mu_{1}}\circ\eta_{\rho_{1}}$ and $\eta_{\nu_{1}\boxtimes\nu_{2}}=\eta_{\nu_{1}}\circ\eta_{\rho_{1}}$. \end{lem} In preparation for the final proof in this section, we recall some facts demonstrated in \cite[Theorem 4.5]{huang-wang}, whose proof is based on \cite[Proposition 4.5]{B-B-IMRN} and the chain rule for the Julia-Carath\'{e}odory derivative. Suppose that $\mu_{1},\mu_{2}$, and $\rho_{1}$ are as in the preceding lemma, and that the domain $\Omega_{\rho_{1}}$ is described as \[ \Omega_{\rho_{1}}=\{rt:t\in\mathbb{T},0\le r<R(t)\} \] for some continuous function $R:\mathbb{T}\to(0,1]$. The map $\eta_{\mu_{1}}$ extends continuously to the closure $\overline{\Omega_{\rho_{1}}}$, and the set \[ \partial\Omega_{\rho_{1}}\cap\mathbb{T}=\{t\in\mathbb{T}:R(t)=1\} \] can be partitioned into two subsets $A$ and $B$ described as follows. \begin{enumerate} \item $A$ consists of those points $t\in\mathbb{T}$ for which $\mu_{1}(\{\overline{t}\})>0$ and \[ \frac{\mu_{1}(\{\overline{t}\})}{2}\ge\int_{\mathbb{T}}\frac{d\sigma(\xi)}{|1-\xi|^{2}}.
\] \item $B$ consists of those $t\in\mathbb{T}$ for which $\eta_{\mu_{1}}(t)\in\mathbb{T}\backslash\{1\},$ \[ c=\liminf_{z\to t}\frac{1-|\eta_{\mu_{1}}(z)|}{1-|z|}\in (0,+\infty), \] and \[ c\int_{\mathbb{T}}\frac{d\sigma(\xi)}{|\eta_{\mu_{1}}(t)-\xi|^{2}}\le \frac{1}{2}. \] \end{enumerate} \begin{thm} \label{thm:cusp on T}Let $\mu_{1},\mu_{2}\in\mathcal{P}_{\mathbb{T}}^{*}$ be two nondegenerate measures such that $\mu_{2}$ is $\boxtimes$-infinitely divisible and satisfies the hypothesis of \textup{Lemma}\emph{ \ref{lem:trade a free convolution for another-1}}. Suppose that $\Gamma\subset\mathbb{T}$ is an open arc with an endpoint $\xi_{0}$, $p_{\mu_{1}\boxtimes\mu_{2}}(\xi_{0})=0<p_{\mu_{1}\boxtimes\mu_{2}}(\xi)$ for every $\xi\in\Gamma$, and---using the notation of \textup{Lemma}\emph{ \ref{lem:trade a free convolution for another-1}}---$1/\eta_{\rho_{1}}\left(\overline{\xi_{0}}\right)$ is not an atom of $\mu_{1}.$ Then $p_{\mu_{1}\boxtimes\mu_{2}}(\xi)/|\xi-\xi_{0}|^{1/3}$ is bounded for $\xi\in\Gamma$ close to $\xi_{0}$. \end{thm} \begin{proof} Using the notation of Lemma \ref{lem:trade a free convolution for another-1}, we observe that \begin{align*} \{\xi\in\mathbb{T}:p_{\mu_{1}\boxtimes\mu_{2}}(\xi)>0\} & =\{\overline{\Psi(R(t)t)}:R(t)<1\}\\ & =\{\xi\in\mathbb{T}:p_{\nu_{1}\boxtimes\nu_{2}}(\xi)>0\}. \end{align*} By Proposition \ref{prop:boxtimes convo with semisemi-1}, the conclusion of the theorem is true if $\mu_{1}$ and $\mu_{2}$ are replaced by $\nu_{1}$ and $\nu_{2}$, respectively. It will therefore suffice to prove that the ratio $p_{\nu_{1}\boxtimes\nu_{2}}(\xi)/p_{\mu_{1}\boxtimes\mu_{2}}(\xi)$ is bounded away from zero for $\xi$ close to $\xi_{0}$. The hypothesis implies that the number $\alpha=1/\eta_{\rho_{1}}(\overline{\xi_{0}})$ belongs to the set $B$ described before the statement of the theorem.
Using the usual parametrization $z=\eta_{\rho_{1}}(\overline{\xi})$, the relations $\eta_{\mu_{1}\boxtimes\mu_{2}}=\eta_{\mu_{1}}\circ\eta_{\rho_{1}}$ and $\eta_{\nu_{1}\boxtimes\nu_{2}}=\eta_{\nu_{1}}\circ\eta_{\rho_{1}}$ yield \[ \gamma z\exp\left[\int_{\mathbb{T}}\frac{t+\eta_{\mu_{1}}(z)}{t-\eta_{\mu_{1}}(z)}\,d\sigma(t)\right]=\Psi(z)=\gamma z\exp\left[\beta\frac{1+\eta_{\nu_{1}}(z)}{1-\eta_{\nu_{1}}(z)}\right]. \] Equating the absolute values of these quantities yields \[ \int_{\mathbb{T}}\frac{1-|\eta_{\mu_{1}}(z)|^{2}}{|t-\eta_{\mu_{1}}(z)|^{2}}\,d\sigma(t)=\beta\frac{1-|\eta_{\nu_{1}}(z)|^{2}}{|1-\eta_{\nu_{1}}(z)|^{2}}, \] or, equivalently \[ \left[\Re\frac{1+\eta_{\mu_{1}}(z)}{1-\eta_{\mu_{1}}(z)}\right]\int_{\mathbb{T}}\frac{|1-\eta_{\mu_{1}}(z)|^{2}}{|t-\eta_{\mu_{1}}(z)|^{2}}\,d\sigma(t)=\beta\Re\frac{1+\eta_{\nu_{1}}(z)}{1-\eta_{\nu_{1}}(z)}. \] Applying (\ref{eq:density boxtimes T}) we rewrite this as \[ \frac{p_{\nu_{1}\boxtimes\nu_{2}}(\xi)}{p_{\mu_{1}\boxtimes\mu_{2}}(\xi)}=\frac{|1-\eta_{\mu_{1}}(z)|^{2}}{\beta}\int_{\mathbb{T}}\frac{d\sigma(t)}{|t-\eta_{\mu_{1}}(z)|^{2}},\quad\xi\in\Gamma. \] The desired result follows now from the definition of the set $B$ and an application of Fatou's lemma. \end{proof} \begin{rem} With the notation of the preceding proof, we have $|\Psi(z)|=1$ for the relevant points $z,$ implying further that \[ \int_{\mathbb{T}}\frac{|1-\eta_{\mu_{1}}(z)|^{2}}{|t-\eta_{\mu_{1}}(z)|^{2}}\,d\sigma(t)=\frac{|1-\eta_{\mu_{1}}(z)|^{2}\log|z|}{|\eta_{\mu_{1}}(z)|^{2}-1}. \] It follows that \[ \frac{p_{\mu_{1}\boxtimes\mu_{2}}(\xi)}{p_{\nu_{1}\boxtimes\nu_{2}}(\xi)}=\beta\frac{|\eta_{\mu_{1}}(z)|^{2}-1}{|1-\eta_{\mu_{1}}(z)|^{2}\log|z|}, \] and it is easily seen that this ratio is also bounded away from zero near $\xi_{0}$. 
\end{rem} \section{Free additive convolution on $\mathcal{P}_{\mathbb{R}}$\label{sec:Free-additive-convolution}} The free additive convolution $\boxplus$ is a binary operation defined on $\mathcal{P}_{\mathbb{R}}$, the family of all probability measures on $\mathbb{R}.$ The Cauchy transform of a measure $\mu\in\mathcal{P}_{\mathbb{R}}$, already seen in Section \ref{sec:Free-multiplicative-convolution}, is defined by \[ G_{\mu}(z)=\int_{\mathbb{R}}\frac{d\mu(t)}{z-t},\quad z\in\mathbb{H}, \] and the density $d\mu/dt$ of $\mu$ is equal almost everywhere to $(-1/\pi)\Im G_{\mu}(x)$, where the boundary limit \[ G_{\mu}(x)=\lim_{y\downarrow0}G_{\mu}(x+iy),\quad x\in\mathbb{R}, \] exists almost everywhere on $\mathbb{R}$. The \emph{reciprocal} Cauchy transform \[ F_{\mu}(z)=\frac{1}{G_{\mu}(z)},\quad z\in\mathbb{H}, \] maps $\mathbb{H}$ to itself, and the collection $\{F_{\mu}:\mu\in\mathcal{P}_{\mathbb{R}}\}$ consists precisely of those analytic functions $F:\mathbb{H}\to\mathbb{H}$ with the property that \[ \lim_{y\uparrow\infty}\frac{F(iy)}{iy}=1. \] As seen, for instance, in \cite{Akhiezer}, these functions have a Nevanlinna representation of the form \[ F(z)=\gamma+z-N_{\sigma}(z),\quad z\in\mathbb{H}, \] where $\gamma\in\mathbb{R}$ and \[ N_{\sigma}(z)=\int_{\mathbb{R}}\frac{1+tz}{z-t}\,d\sigma(t) \] for some finite positive Borel measure $\sigma$ on $\mathbb{R}$. This integral representation implies that \[ \Im F(z)\geq\Im z,\quad z\in\mathbb{H}. \] Given a measure $\mu\in\mathcal{P}_{\mathbb{R}}$, the function $F_{\mu}$ is conformal in an open set $U$ containing $\{iy:y\in(\alpha,+\infty)\}$ for some $\alpha>0$, and the restriction $F_{\mu}|U$ has an inverse $F_{\mu}^{\langle-1\rangle}$ defined in an open set containing another set of the form $\{iy:y\in(\beta,+\infty)\}$ with $\beta>0$.
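The inequality $\Im F(z)\geq\Im z$ noted above can be verified directly from the Nevanlinna representation: for $t\in\mathbb{R}$ and $z\in\mathbb{H}$,
\[
\Im\frac{1+tz}{z-t}=\frac{\Im\left[(1+tz)(\overline{z}-t)\right]}{|z-t|^{2}}=-\frac{(1+t^{2})\,\Im z}{|z-t|^{2}},
\]
so that $\Im N_{\sigma}(z)=-\Im z\int_{\mathbb{R}}\frac{1+t^{2}}{|z-t|^{2}}\,d\sigma(t)\le0$, and therefore $\Im F(z)=\Im z-\Im N_{\sigma}(z)\ge\Im z$.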
The free additive convolution $\mu_{1}\boxplus\mu_{2}$ of two measures $\mu_{1},\mu_{2}\in\mathcal{P}_{\mathbb{R}}$ is the unique measure $\mu\in\mathcal{P}_{\mathbb{R}}$ that satisfies the identity \begin{equation} z+F_{\mu}^{\langle-1\rangle}(z)=F_{\mu_{1}}^{\langle-1\rangle}(z)+F_{\mu_{2}}^{\langle-1\rangle}(z)\label{eq:defining boxtimes-2} \end{equation} for $z$ in some open set containing $iy$ for $y$ large enough (see \cite{BV-unbounded}). The analog of Theorems \ref{thm:subordination on the line (mult)} and \ref{thm:subordination by Bi} is as follows. \begin{thm}\cite{Bi-free inc} \label{thm:subordination on the line (mult)-2} For every $\mu_{1},\mu_{2}\in\mathcal{P}_{\mathbb{R}},$ there exist unique $\rho_{1},\rho_{2}\in\mathcal{P}_{\mathbb{R}}$ such that \[ F_{\mu_{1}}(F_{\rho_{1}}(z))=F_{\mu_{2}}(F_{\rho_{2}}(z))=F_{\rho_{1}}(z)+F_{\rho_{2}}(z)-z,\quad z\in\mathbb{H}. \] Moreover, we have $F_{\mu_{1}\boxplus\mu_{2}}=F_{\mu_{1}}\circ F_{\rho_{1}}$. If $\mu_{1}$ and $\mu_{2}$ are nondegenerate, then so are $\rho_{1}$ and $\rho_{2}$. \end{thm} It was shown in \cite{vo-add,BV-unbounded} that a measure $\mu\in\mathcal{P}_{\mathbb{R}}$ is $\boxplus$-infinitely divisible precisely when the inverse $F_{\mu}^{\langle-1\rangle}$ continues analytically to $\mathbb{H}$ and this analytic continuation has the Nevanlinna form \begin{equation} \Phi(z)=\gamma+z+N_{\sigma}(z),\quad z\in\mathbb{H},\label{eq:extension of eta inverse (line)-2} \end{equation} for some $\gamma\in\mathbb{R}$ and some finite, positive Borel measure $\sigma$ on $\mathbb{R}$. The functions described by (\ref{eq:extension of eta inverse (line)-2}) can also be characterized by \[ \lim_{y\uparrow+\infty}\frac{\Phi(iy)}{iy}=1\text{ and }\Im\Phi(z)\le\Im z,\quad z\in\mathbb{H}. \] Suppose now that $\mu\in\mathcal{P}_{\mathbb{R}}$ is $\boxplus$-infinitely divisible and that $F_{\mu}^{\langle-1\rangle}$ has the analytic continuation given in (\ref{eq:extension of eta inverse (line)-2}).
The equation $\Phi(F_{\mu}(z))=z$ holds in some open set and therefore it holds on the entire $\mathbb{H}$ by analytic continuation. In particular, $F_{\mu}$ maps $\mathbb{H}$ conformally onto a domain $\Omega_{\mu}\subset\mathbb{H}$ that can be described as \[ \Omega_{\mu}=\{z\in\mathbb{H}:\Phi(z)\in\mathbb{H}\}. \] As in the multiplicative cases, this domain can also be identified with $\{x+iy:y>f(x)\}$ for some continuous function $f:\mathbb{R}\to[0,+\infty)$. The map $F_{\mu}$ extends continuously to the closure $\overline{\mathbb{H}}$, $\Phi$ extends continuously to $\overline{\Omega_{\mu}}$, and these two extensions are homeomorphisms, inverse to each other. (See Section 2 of \cite{BWZ-super+} for the details and \cite{huang} for similar results in the context of free semigroups.) \section{Cusp behavior in $\mathcal{P}_{\mathbb{R}}$ \label{sec:Cusp-behavior-in R}} We are now ready for the counterpart of Sections \ref{sec:cusps-in R_+} and \ref{sec:Cusp-behavior-in T} in the context of the free additive convolution. Thus, we study the density of a measure of the form $\mu_{1}\boxplus\mu_{2}$, where $\mu_{1},\mu_{2}\in\mathcal{P}_{\mathbb{R}}$ and $\mu_{2}$ is $\boxplus$-infinitely divisible. The following result is essentially contained in \cite{Bi-cusp} and the brief argument is included here to establish notation. \begin{lem} \label{lem:subordonation is infinitely div, R} Let $\mu_{1},\mu_{2}\in\mathcal{P}_{\mathbb{R}}$ be such that $\mu_{2}$ is $\boxplus$-infinitely divisible, and let $\rho_{1},\rho_{2}\in\mathcal{P}_{\mathbb{R}}$ be given by \textup{Theorem} \emph{\ref{thm:subordination on the line (mult)-2}. }Then $\rho_{1}$ is $\boxplus$-infinitely divisible.
\end{lem} \begin{proof} Let $\Phi(z)=\gamma+z+N_{\sigma}(z)$, given by (\ref{eq:extension of eta inverse (line)-2}), be the analytic continuation of $F_{\mu_{2}}^{\langle-1\rangle}$ to $\mathbb{H}.$ Then (\ref{eq:defining boxtimes-2}) can be rewritten as \[ F_{\mu_{1}}^{\langle-1\rangle}(z)+\gamma+N_{\sigma}(z)=F_{\mu_{1}\boxplus\mu_{2}}^{\langle-1\rangle}(z) \] in a neighborhood of infinity. Replacing $z$ by $F_{\mu_{1}}(z)$ yields \[ \gamma+z+N_{\sigma}(F_{\mu_{1}}(z))=F_{\mu_{1}\boxplus\mu_{2}}^{\langle-1\rangle}(F_{\mu_{1}}(z)), \] and therefore the function \[ \Psi(z)=\gamma+z+N_{\sigma}(F_{\mu_{1}}(z)),\quad z\in\mathbb{H}, \] is an analytic continuation of $F_{\rho_{1}}^{\langle-1\rangle}$. Since $\Im\Psi(z)\le\Im z$ for $z\in\mathbb{H}$ and $\lim_{y\uparrow+\infty}\Psi(iy)/iy=1$, this continuation is of the form (\ref{eq:extension of eta inverse (line)-2}), thus establishing the conclusion of the lemma. \end{proof} The density $p_{\mu_{1}\boxplus\mu_{2}}$ of $\mu_{1}\boxplus\mu_{2}$ relative to Lebesgue measure has already been studied in \cite{Bi-cusp} for the special case in which $\mu_{2}$ is a semicircular law, that is, the measure $\sigma$ is a point mass at $0$. The following result is \cite[Corollary 5]{Bi-cusp}. \begin{prop} \label{prop:Biane estimate} With the notation above, suppose that $\sigma=\beta\delta_{0}$ for some $\beta>0$. If $I\subset\mathbb{R}$ is an open interval with an endpoint $x_{0}$ such that $p_{\mu_{1}\boxplus\mu_{2}}(x_{0})=0<p_{\mu_{1}\boxplus\mu_{2}}(x)$ for every $x\in I$, then \[ p_{\mu_{1}\boxplus\mu_{2}}(x)\le\left[\frac{3}{4\pi^{3}\beta^{2}}|x-x_{0}|\right]^{1/3},\quad x\in I. \] \end{prop} In order to extend this result to general $\boxplus$-infinitely divisible measures $\mu_{2}$, we proceed as in the multiplicative cases. Thus, we construct another convolution, this time with a semicircular measure, with the property that the two convolutions share the same subordination function.
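The lemma below rests on an elementary decomposition of the Nevanlinna kernel: for $t\in\mathbb{R}$ and $w\in\mathbb{H}$,
\[
\frac{1+tw}{w-t}=t+\frac{1+t^{2}}{w-t},
\]
so that $N_{\sigma}(w)=\int_{\mathbb{R}}t\,d\sigma(t)+\int_{\mathbb{R}}\frac{1+t^{2}}{w-t}\,d\sigma(t)$ whenever these integrals converge. Substituting $w=F_{\mu_{1}}(z)$ turns $\gamma+N_{\sigma}(F_{\mu_{1}}(z))$ into $\gamma'+\beta G_{\nu_{1}}(z)$ in the notation of the lemma, which is why the two convolutions constructed there share the same subordination function.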
\begin{lem} \label{lem:trading convolutions, additive}Let $\mu_{1},\mu_{2}\in\mathcal{P}_{\mathbb{R}}$ be such that $\mu_{2}$ is $\boxplus$-infinitely divisible, and let \[ \Phi(z)=\gamma+z+N_{\sigma}(z),\quad z\in\mathbb{H}, \] be the analytic continuation of $F_{\mu_{2}}^{\langle-1\rangle}$. Denote by $\rho_{1}\in\mathcal{P}_{\mathbb{R}}$ the $\boxplus$-infinitely divisible measure such that $F_{\rho_{1}}^{\langle-1\rangle}$ has the analytic continuation \[ \Psi(z)=\gamma+z+N_{\sigma}(F_{\mu_{1}}(z)),\quad z\in\mathbb{H}. \] Suppose that $\beta=\int_{\mathbb{R}}(1+t^{2})\,d\sigma(t)$ is finite, and set $\gamma'=\gamma+\int_{\mathbb{R}}t\,d\sigma(t)$. Denote by $\nu_{1}\in\mathcal{P}_{\mathbb{R}}$ the probability measure satisfying \[ G_{\nu_{1}}(z)=\frac{1}{\beta}\int_{\mathbb{R}}\frac{1+t^{2}}{F_{\mu_{1}}(z)-t}\,d\sigma(t),\quad z\in\mathbb{H}, \] and let $\nu_{2}\in\mathcal{P}_{\mathbb{R}}$ be the semicircular measure such that \[ \gamma'+z+\frac{\beta}{z},\quad z\in\mathbb{H}, \] is an analytic continuation of $F_{\nu_{2}}^{\langle-1\rangle}$. Then $F_{\mu_{1}\boxplus\mu_{2}}=F_{\mu_{1}}\circ F_{\rho_{1}}$ and $F_{\nu_{1}\boxplus\nu_{2}}=F_{\nu_{1}}\circ F_{\rho_{1}}$. \end{lem} \begin{proof} The final assertion of the lemma follows from the easy verification that $F_{\mu_{1}\boxplus\mu_{2}}^{\langle-1\rangle}\circ F_{\mu_{1}}=F_{\nu_{1}\boxplus\nu_{2}}^{\langle-1\rangle}\circ F_{\nu_{1}}.$ One has to verify, however, that the measure $\nu_{1}$ actually exists; this amounts to showing that the reciprocal \[ F(z)=\frac{\beta}{\int_{\mathbb{R}}\frac{1+t^{2}}{F_{\mu_{1}}(z)-t}\,d\sigma(t)} \] maps $\mathbb{H}$ to itself and that \[ \lim_{y\uparrow+\infty}\frac{F(iy)}{iy}=1. \] These facts follow from the corresponding properties of the function $F_{\mu_{1}}$. \end{proof} We will use a decomposition, analogous to those for multiplicative convolutions on $\mathbb{R}_{+}$ and $\mathbb{T}$.
With the notation of the preceding lemma, represent the domain $\Omega_{\rho_{1}}=F_{\rho_{1}}(\mathbb{H})$ as \[ \Omega_{\rho_{1}}=\{x+iy:x\in\mathbb{R},y>f(x)\}, \] where $f:\mathbb{R}\to[0,+\infty)$ is a continuous function. We recall from \cite{huang-wang} that $G_{\mu_{1}}$ extends continuously to the closure $\overline{\Omega_{\rho_{1}}}$ provided that $\infty$ is allowed as a possible value. Based on \cite[Proposition 4.7]{B-B-IMRN}, it was shown in \cite[Theorem 3.6]{huang-wang} that the set $\partial\Omega_{\rho_{1}}\cap\mathbb{R}=\{\alpha\in\mathbb{R}:f(\alpha)=0\}$ can be partitioned into three sets $A,B,$ and $C$ described as follows. \begin{enumerate} \item $A$ consists of those points satisfying $\mu_{1}(\{\alpha\})>0$ and \[ \int_{\mathbb{R}}\left\{ 1+\frac{1}{t^{2}}\right\} \,d\sigma(t)\le\mu_{1}(\{\alpha\}). \] \item $B$ is characterized by the conditions $G_{\mu_{1}}(\alpha)\in\mathbb{R}\backslash\{0\}$ and \[ \left[\int_{\mathbb{R}}\frac{1+t^{2}}{(1-tG_{\mu_{1}}(\alpha))^{2}}\,d\sigma(t)\right]\left[\int_{\mathbb{R}}\frac{d\mu_{1}(t)}{(\alpha-t)^{2}}\right]\le1. \] \item $C$ consists of those $\alpha$ satisfying $G_{\mu_{1}}(\alpha)=0$ and \[ {\rm var}(\mu_{2})\int_{\mathbb{R}}\frac{d\mu_{1}(t)}{(\alpha-t)^{2}}\le1, \] where \[ {\rm var}(\mu_{2})=\int_{\mathbb{R}}t^{2}\,d\mu_{2}(t)-\left[\int_{\mathbb{R}}t\,d\mu_{2}(t)\right]^{2} \] denotes the variance of $\mu_{2}$. \end{enumerate} These inequalities provide a quantitative way to determine the zeros of the density $p_{\mu_{1}\boxplus \mu_{2}}$, because $p_{\mu_{1}\boxplus \mu_{2}}=-\pi^{-1}\Im (G_{\mu_1}\circ F_{\rho_1})$ on $\mathbb{R}$ and $F_{\rho_1}\left(\mathbb{R}\right)=\partial\Omega_{\rho_{1}}$. As in the multiplicative cases, they are derived from the chain rule for the Julia-Carath\'eodory derivative. In each of the preceding inequalities, the improper integrals converge.
Equality in each case is achieved precisely when $F_{\rho_{1}}$ has an infinite Julia-Carath\'eo\-dory derivative at the point $\Psi(\alpha)$. The set $A$ is always finite unless $\mu_{2}$ is a degenerate measure. Moreover, if $t$ is an atom of $\mu_{1}\boxplus\mu_{2}$, then $F_{\rho_{1}}(t)\in A$. We note for further use an alternative way to write the inequalities defining the sets $B$ and $C$ \cite[Remark 3.7]{huang-wang}. For this purpose, we use the Nevanlinna representation \[ F_{\mu_{1}}(z)=c+z-N_{\lambda}(z),\quad z\in\mathbb{H}, \] where $c\in\mathbb{R}$ and $\lambda$ is a finite Borel measure on $\mathbb{R}$. The inequality in the definition of $B$ can be replaced by \[ \left[\int_{\mathbb{R}}\frac{1+t^{2}}{(F_{\mu_{1}}(\alpha)-t)^{2}}\,d\sigma(t)\right]\left[1+\int_{\mathbb{R}}\frac{1+t^{2}}{(\alpha-t)^{2}}\,d\lambda(t)\right]\le1, \] and the inequality in the definition of $C$ can be replaced by \[ (1+\alpha^{2})\lambda(\{\alpha\})\ge{\rm var}(\mu_{2}). \] In particular, every point $\alpha\in B$ must satisfy \begin{equation} \int_{\mathbb{R}}\frac{1+t^{2}}{(F_{\mu_{1}}(\alpha)-t)^{2}}\,d\sigma(t)<1.\label{eq:property of points in B} \end{equation} It is also the case that $C$ is a discrete subset of $\mathbb{R}$. \begin{thm} \label{thm:cusps on R}Suppose that $\mu_{1},\mu_{2}\in\mathcal{P}_{\mathbb{R}}$ are nondegenerate measures such that $\mu_{2}$ is $\boxplus$-infinitely divisible, and let $\rho_{1}\in\mathcal{P}_{\mathbb{R}}$ satisfy $F_{\mu_{1}\boxplus\mu_{2}}=F_{\mu_{1}}\circ F_{\rho_{1}}.$ Let $I\subset\mathbb{R}$ be an open interval with an endpoint $x_{0}$ such that $F_{\rho_{1}}(x)\in\mathbb{H}$ for every $x\in I$ and $F_{\rho_{1}}(x_{0})$ is real but not an atom of $\mu_{1}$. Denote by $p_{\mu_{1}\boxplus\mu_{2}}$ the density of $\mu_{1}\boxplus\mu_{2}$ relative to Lebesgue measure. Then $p_{\mu_{1}\boxplus\mu_{2}}(x)/|x-x_{0}|^{1/3}$ is bounded for $x\in I$ close to $x_{0}$.
\end{thm} \begin{proof} Suppose that \[ c+z+N_{\sigma}(z),\quad z\in\mathbb{H}, \] is the analytic continuation of $F_{\mu_{2}}^{\langle-1\rangle}$ to $\mathbb{H}$, where $c\in\mathbb{R}$ and $\sigma$ is a nonzero (because $\mu_{2}$ is nondegenerate) finite measure on $\mathbb{R}$. As in the case of $\mathbb{R}_{+}$, we can always find finite measures $\sigma'$ and $\sigma''$ on $\mathbb{R}$ such that $\sigma''\ne0$ has compact support and $\sigma=\sigma'+\sigma''$. Define two $\boxplus$-infinitely divisible measures $\mu'_{2},\mu_{2}''\in\mathcal{P}_{\mathbb{R}}$ by specifying that $F_{\mu_{2}'}^{\langle-1\rangle}$ and $F_{\mu_{2}^{\prime\prime}}^{\langle-1\rangle}$ have analytic continuations \[ c+z+N_{\sigma^{\prime}}(z)\text{ and }z+N_{\sigma^{\prime\prime}}(z),\quad z\in\mathbb{H}, \] respectively. Since $\mu_{2}'\boxplus\mu_{2}''=\mu_{2}$, we get $\mu_{1}\boxplus\mu_{2}=\mu_{1}''\boxplus\mu_{2}''$, where $\mu_{1}''=\mu_{1}\boxplus\mu_{2}'$. There exist two $\boxplus$-infinitely divisible measures $\rho_{1}',\rho_{1}''\in\mathcal{P}_{\mathbb{R}}$ such that $F_{\mu_{1}''}=F_{\mu_{1}}\circ F_{\rho_{1}'}$ and $F_{\mu_{1}\boxplus\mu_{2}}=F_{\mu_{1}''}\circ F_{\rho_{1}''}$. Clearly, $F_{\rho_{1}}=F_{\rho_{1}'}\circ F_{\rho_{1}''}$, and we argue that $F_{\rho_{1}''}(x_{0})$ is a real number but not an atom of $\mu_{1}''$. Indeed, letting $z\rightarrow x_{0}$ in the inequality \[ \Im F_{\rho_{1}}(z)=\Im F_{\rho_{1}'}(F_{\rho_{1}''}(z))\ge\Im F_{\rho_{1}''}(z),\quad z\in\mathbb{H}, \] the hypothesis $F_{\rho_{1}}(x_{0})\in\mathbb{R}$ shows that $F_{\rho_{1}''}(x_{0})\in\mathbb{R}$. Suppose, to get a contradiction, that $F_{\rho_{1}''}(x_{0})$ is an atom of $\mu_{1}''$. Then, as seen in \cite{BV-reg}, $F_{\rho_{1}'}(F_{\rho_{1}''}(x_{0}))$ is necessarily an atom of $\mu_{1}$, contrary to the hypothesis.
The above construction shows that the hypothesis of the theorem also holds with $\mu_{1}'',\mu_{2}''$, and $\rho_{1}''$ in place of $\mu_{1},\mu_{2},$ and $\rho_{1}$, respectively. Moreover the measure $\sigma''$ has a finite second moment. Therefore it suffices to prove the theorem under the additional hypothesis that $\sigma$ has a finite second moment. Under this hypothesis, Lemma \ref{lem:trading convolutions, additive} applies and provides measures $\nu_{1}$ and $\nu_{2}$. Since the set $\{x\in\mathbb{R}:p_{\mu_{1}\boxplus\mu_{2}}(x)>0\}$ is described in terms of the measure $\rho_{1}$, namely, \[ \{x\in\mathbb{R}:p_{\mu_{1}\boxplus\mu_{2}}(x)>0\}=\{x:F_{\rho_{1}}(x)\in\mathbb{H}\}, \] we have \[ \{x\in\mathbb{R}:p_{\mu_{1}\boxplus\mu_{2}}(x)>0\}=\{x\in\mathbb{R}:p_{\nu_{1}\boxplus\nu_{2}}(x)>0\}. \] By Proposition \ref{prop:Biane estimate}, it suffices to show that the ratio $p_{\nu_{1}\boxplus\nu_{2}}(x)/p_{\mu_{1}\boxplus\mu_{2}}(x)$ is bounded away from zero for $x\in I$ close to $x_{0}$. The two densities are evaluated in terms of the values of $G_{\nu_{1}}$ and $G_{\mu_{1}}$ on $\partial\Omega_{\rho_{1}}$: \begin{align*} p_{\nu_{1}\boxplus\nu_{2}}(x) & =-\frac{1}{\pi}\Im G_{\nu_{1}}(F_{\rho_{1}}(x))\\ & =-\frac{1}{\pi\beta}\int_{\mathbb{R}}\Im\left[\frac{G_{\mu_{1}}(F_{\rho_{1}}(x))}{1-tG_{\mu_{1}}(F_{\rho_{1}}(x))}\right](1+t^{2})\,d\sigma(t)\\ & =-\frac{\Im G_{\mu_{1}}(F_{\rho_{1}}(x))}{\pi\beta}\int_{\mathbb{R}}\frac{1+t^{2}}{|1-tG_{\mu_{1}}(F_{\rho_{1}}(x))|^{2}}\,d\sigma(t)\\ & =\frac{p_{\mu_{1}\boxplus\mu_{2}}(x)}{\beta}\int_{\mathbb{R}}\frac{1+t^{2}}{|1-tG_{\mu_{1}}(F_{\rho_{1}}(x))|^{2}}\,d\sigma(t). \end{align*} The hypotheses that $p_{\mu_{1}\boxplus\mu_{2}}(x_{0})=0$ and $F_{\rho_{1}}(x_{0})$ is not an atom of $\mu_{1}$ imply $F_{\rho_{1}}(x_{0})\in B\cup C$. 
Using Fatou's lemma, we conclude that \begin{align*} \liminf_{x\to x_{0},x\in I}\frac{p_{\nu_{1}\boxplus\nu_{2}}(x)}{p_{\mu_{1}\boxplus\mu_{2}}(x)} & =\liminf_{x\to x_{0},x\in I}\frac{1}{\beta}\int_{\mathbb{R}}\frac{1+t^{2}}{|1-tG_{\mu_{1}}(F_{\rho_{1}}(x))|^{2}}\,d\sigma(t)\\ & \ge\frac{1}{\beta}\int_{\mathbb{R}}\frac{1+t^{2}}{|1-tG_{\mu_{1}}(F_{\rho_{1}}(x_{0}))|^{2}}\,d\sigma(t)>0, \end{align*} thus finishing the proof. \end{proof} \begin{rem} With the notation of the preceding proof, it is also true that \[ \liminf_{x\to x_{0},x\in I}\frac{p_{\mu_{1}\boxplus\mu_{2}}(x)}{p_{\nu_{1}\boxplus\nu_{2}}(x)}\ge\beta\int_{\mathbb{R}}\frac{d\mu_{1}(t)}{\left(F_{\rho_{1}}(x_{0})-t\right)^{2}}, \] in which the improper integral converges because $F_{\rho_{1}}(x_{0})\in B\cup C$. To verify this, we use the parametrization $\partial\Omega_{\rho_{1}}=\{s+if(s):s\in\mathbb{R}\}$ to write \[ \{F_{\rho_{1}}(x):x\in I\}=\{s+if(s):s\in J\}, \] where $J$ is an interval on which $f$ is positive and it has one endpoint $\alpha=F_{\rho_{1}}(x_{0})\in\mathbb{R}$ such that $f(\alpha)=0$. The fact that $\Im\Psi(s+if(s))=0$ for $s\in J$ yields the equation \[ f(s)+\int_{\mathbb{R}}\frac{\Im G_{\mu_{1}}(s+if(s))}{|1-tG_{\mu_{1}}(s+if(s))|^{2}}\,(1+t^{2})d\sigma(t)=0,\quad s\in J. \] Using this in the above formula for densities, we obtain \[ \frac{p_{\mu_{1}\boxplus\mu_{2}}(x)}{p_{\nu_{1}\boxplus\nu_{2}}(x)}=\beta\frac{-\Im G_{\mu_{1}}(s+if(s))}{f(s)}=\beta\int_{\mathbb{R}}\frac{d\mu_{1}(t)}{(t-s)^{2}+f(s)^{2}}. \] We can now apply Fatou's lemma as $s\to\alpha$. \end{rem} \begin{rem} The two limits inferior above are actual limits precisely when $F_{\rho_{1}}'(x_{0})=+\infty$. Indeed, as seen above, this condition is equivalent to \[ \left[\int_{\mathbb{R}}\frac{1+t^{2}}{|1-tG_{\mu_{1}}(F_{\rho_{1}}(x_{0}))|^{2}}\,d\sigma(t)\right]\left[\int_{\mathbb{R}}\frac{d\mu_{1}(t)}{(F_{\rho_{1}}(x_{0})-t)^{2}}\right]=1.
\] \end{rem} \begin{rem} \label{rem:no exceptions}When $x_{0}$ is assumed to be a zero of the density $p_{\mu_{1}\boxplus\mu_{2}}$, it is easy to see that $F_{\rho_{1}}(x_{0})$ is an atom of $\mu_{1}$ if and only if $F_{\rho_{1}}(x_{0})\in A$. In many cases, the collection $\{x_{0}:F_{\rho_{1}}(x_{0})\text{ is an atom of }\mu_{1}\}$ is empty. This happens, of course, when $\mu_{1}$ has no atoms. This also occurs when \[ \int_{\mathbb{R}}\left\{ 1+\frac{1}{t^{2}}\right\} \,d\sigma(t)\in[1,+\infty]. \] Indeed, in this case the set $A$ is empty (provided, of course, that $\mu_{1}$ is not degenerate and so its atoms cannot have measure $1$). \end{rem} \begin{example} Let $\mu_{1}$ be an arbitrary nondegenerate measure in $\mathcal{P}_{\mathbb{R}}$, and let $\mu_{2}$ be the standard $(0,1)$ normal distribution. It was shown in \cite{B-B-F-S} that $\mu_{2}$ is $\boxplus$-infinitely divisible. We denote by $\sigma$ the associated measure that provides the analytic continuation of $F_{\mu_{2}}^{\langle-1\rangle}$. Since \[ -\Im G_{\mu_{2}}(x)=\pi p_{\mu_{2}}(x)=\sqrt{\frac{\pi}{2}}e^{-x^{2}/2},\quad x\in\mathbb{R}, \] the continuous extension of $F_{\mu_{2}}$ to $\mathbb{R}$ has no zeros and (see \cite[Proposition 5.1]{BWZ-super+}) \[ \int_{\mathbb{R}}\frac{1+t^{2}}{(x-t)^{2}}\,d\sigma(t)>1,\quad x\in\mathbb{R}. \] This inequality, along with (\ref{eq:property of points in B}), implies that $A=B=\varnothing$ for the convolution $\mu_{1}\boxplus\mu_{2}$. Moreover, since $C$ is a discrete set, the measure $\mu_{1}\boxplus\mu_{2}$ is absolutely continuous with support equal to $\mathbb{R}$. If $C$ is not empty and $\alpha\in C$, there is an open interval $I$ centered at $x_{0}=F_{\mu_{2}}^{\langle-1\rangle}(\alpha)$ such that $p_{\mu_{1}\boxplus\mu_{2}}(x)/|x-x_{0}|^{1/3}$ is bounded for $x\in I\backslash\{x_{0}\}$. 
Suppose, for instance, that $\mu_{1}=\frac{1}{2}(\delta_{1}+\delta_{-1})$ or that $\mu_{1}$ is the absolutely continuous measure with density \[ \frac{15}{16}\left[t^{4}\chi_{[-1,1]}(t)+\frac{1}{t^{4}}\chi_{\mathbb{R}\setminus[-1,1]}(t)\right]. \] In both cases, $\alpha=0$ is the unique solution of the equation $G_{\mu_{1}}(\alpha)=0$ under the constraint \[ \int_{\mathbb{R}}\frac{d\mu_{1}(t)}{(\alpha-t)^{2}}\le1. \] Moreover, we have $F_{\rho_{1}}'(F_{\rho_{1}}^{\langle-1\rangle}(0))=+\infty$ because the equality in the above constraint is achieved, and thus, by Remark 9.6, $p_{\mu_{1}\boxplus\mu_{2}}$ is comparable to $p_{\nu_{1}\boxplus\nu_{2}}$ in $I$. To obtain an example in which $F_{\rho_{1}}'(F_{\rho_{1}}^{\langle-1\rangle}(0))$ is finite, one can take $\mu_{1}$ to be the absolutely continuous measure with density \[ \frac{3}{14}\left[t^{2}\chi_{[-1,1]}(t)+\left|t\right|^{-3/2}\chi_{\mathbb{R}\setminus[-1,1]}(t)\right]. \] \end{example}
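The normalization of the three measures $\mu_{1}$ in this example, and the corresponding values of $\int_{\mathbb{R}}d\mu_{1}(t)/t^{2}$ at $\alpha=0$, can be checked with exact rational arithmetic. The short script below (ours, not part of the paper) encodes the closed-form antiderivatives:

```python
from fractions import Fraction as F

# mu1 = (delta_1 + delta_{-1})/2: the integral of 1/t^2 is (1 + 1)/2 = 1.
atoms = F(1, 2) * (F(1, 1) + F(1, 1))

# Density (15/16)[t^4 on [-1,1], t^{-4} outside]; closed-form pieces:
# int_{-1}^{1} t^4 dt = 2/5 and int_{|t|>1} t^{-4} dt = 2/3.
mass1 = F(15, 16) * (F(2, 5) + F(2, 3))
# 1/t^2 moment: int_{-1}^{1} t^2 dt = 2/3 and int_{|t|>1} t^{-6} dt = 2/5.
mom1 = F(15, 16) * (F(2, 3) + F(2, 5))

# Density (3/14)[t^2 on [-1,1], |t|^{-3/2} outside]:
# int_{-1}^{1} t^2 dt = 2/3 and int_{|t|>1} |t|^{-3/2} dt = 4.
mass2 = F(3, 14) * (F(2, 3) + F(4, 1))
# 1/t^2 moment: int_{-1}^{1} 1 dt = 2 and int_{|t|>1} |t|^{-7/2} dt = 4/5.
mom2 = F(3, 14) * (F(2, 1) + F(4, 5))

print(atoms, mass1, mom1, mass2, mom2)  # 1 1 1 1 3/5
```

In the first two cases $\int d\mu_{1}(t)/t^{2}=1$, so equality in the constraint holds, while in the third case the value $3/5<1$ corresponds to a finite Julia-Carath\'eodory derivative, as asserted in the text.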
\section{Introduction} Kohn-Sham density functional theory~\cite{hist-kohnsham} (KS-DFT) has been widely used in electronic structure calculations. Efficient algorithms have been developed that allow KS-DFT methods to be applied to large molecules containing hundreds or even thousands of atoms~\cite{linsca-g}. However, a problem with KS-DFT is the self-interaction error (SIE). For example, it is known that SIE causes severe errors in computed polymer polarizabilities~\cite{SIE-polymer-polarizabilities-2003}, a problem that becomes increasingly severe with system size. Thus, application of KS-DFT to large systems is not always straightforward. One important application of KS-DFT for large molecules is the study of proteins, whose properties are of interest in biology. In this work, we study the applicability of standard self-consistency based KS-DFT methods for calculations on protein molecules. \section{Method \label{sec:method}} In KS-DFT methods, the electron density is expressed via a set of orbitals in much the same way as in the Hartree-Fock~\cite{book-szabo} (HF) method. We consider here non-periodic spin-restricted KS-DFT methods at zero electronic temperature. Then, the number of occupied orbitals is $n_{occ} = n/2$ where $n$ is the number of electrons in the system. The Kohn-Sham orbitals are determined by solving \begin{equation} \label{eq:ks} \mathbf{F} \mathbf{C} = \mathbf{S} \mathbf{C} \mathbf{\epsilon} \end{equation} where $\mathbf{F}$ is the Kohn-Sham matrix, $\mathbf{C}$ the matrix of orbital coefficients, $\mathbf{S}$ the overlap matrix and $\mathbf{\epsilon}$ the diagonal matrix of orbital energies. The matrices in \eqref{eq:ks} are $N \times N$ matrices, where $N$ is the number of basis functions. Given a set of $N$ orbitals that constitute a solution to \eqref{eq:ks}, a set of occupied orbitals is formed by including the $n_{occ}$ orbitals of lowest energy. 
The occupied orbitals determine the density matrix $\mathbf{D}$ as \begin{equation} \label{eq:densitymatrix} D_{ij} = 2 \sum_{k=1}^{n_{occ}} C_{ik} C_{jk} \end{equation} where the columns of $\mathbf{C}$ are taken to be ordered by the corresponding orbital energies. The Kohn-Sham matrix $\mathbf{F}$ is computed from $\mathbf{D}$ according to the chosen exchange-correlation functional. Since $\mathbf{F}$ depends on $\mathbf{D}$, an iterative procedure is used to find a self-consistent solution. Calculations where a new density matrix is computed by occupying the orbitals of lowest energy as described above are in this work referred to as \emph{self-consistency based} calculations, to clearly distinguish them from direct minimization approaches. In self-consistency based calculations, convergence schemes such as damping~\cite{damping} and DIIS~\cite{pulay82} are usually employed, where a new Kohn-Sham matrix is constructed by taking information from previous iterations into account. See the work of Kudin and Scuseria \cite{sc-ks} for an overview of such convergence schemes. The self-consistency based approach usually works well provided that there is a sizable gap between the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) energies. However, there is no guarantee that the HOMO-LUMO gap will be large; the gap depends on the studied system as well as on the basis set and the exchange-correlation functional used. If the gap is very small, the procedure of determining the occupied orbitals needed in~\eqref{eq:densitymatrix} becomes ill-defined, and a self-consistent solution may then be difficult or even impossible to find. KS-DFT exchange-correlation functionals can be divided into two main classes: pure and hybrid functionals. In hybrid functionals, some fraction of HF exchange is added to the Kohn-Sham matrix, often using empirically determined constants. 
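The self-consistency cycle described above can be sketched in a few lines. The following toy fragment (an illustrative sketch, not the implementation used in this work) solves the generalized eigenvalue problem, forms the density matrix by aufbau occupation, and mixes densities with a simple fixed damping factor in place of the adaptive damping and DIIS schemes discussed above; the commutator $\mathbf{F}\mathbf{D}\mathbf{S}-\mathbf{S}\mathbf{D}\mathbf{F}$, which vanishes at self-consistency, serves as the error measure.

```python
import numpy as np
from scipy.linalg import eigh

def scf_sketch(build_fock, S, n_occ, max_iter=100, damping=0.3, tol=5e-4):
    """Toy self-consistency cycle with aufbau occupation and fixed damping.
    build_fock(D) is a placeholder for assembling the Kohn-Sham matrix from
    the density matrix for the chosen functional."""
    N = S.shape[0]
    D = np.zeros((N, N))
    for _ in range(max_iter):
        F = build_fock(D)
        # Solve the generalized eigenproblem F C = S C eps.
        eps, C = eigh(F, S)
        # Aufbau occupation: double-occupy the n_occ orbitals of lowest energy.
        C_occ = C[:, :n_occ]
        D_new = 2.0 * C_occ @ C_occ.T
        gap = eps[n_occ] - eps[n_occ - 1]            # HOMO-LUMO gap
        # FDS - SDF vanishes at self-consistency; use its largest element
        # as the convergence measure.
        F_new = build_fock(D_new)
        err = np.max(np.abs(F_new @ D_new @ S - S @ D_new @ F_new))
        if err < tol:
            return D_new, gap
        # Mix old and new density matrices (fixed damping factor).
        D = (1.0 - damping) * D_new + damping * D
    raise RuntimeError("SCF did not converge (e.g. vanishing HOMO-LUMO gap)")
```

A vanishing gap makes the selection of the `n_occ` lowest orbitals ill-defined, which is precisely the failure mode discussed in the text.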
\section{Results \label{sec:results}} This section includes results of HF and KS-DFT calculations for various protein-like systems. The presented results were computed using the Ergo program~\cite{m-ergo}. The obtained results can however be readily reproduced using other KS-DFT codes employing Gaussian basis functions. The convergence scheme used in the reported calculations is a combination of damping and DIIS, as implemented in the Ergo program. This scheme essentially uses damping in early iterations, with a dynamically adapted damping factor such that the step size is decreased whenever the energy goes up, leading to very small steps in difficult cases. However, details of this scheme do not affect the reported results; the observed convergence problems are due to vanishing HOMO-LUMO gaps, a problem that neither damping nor DIIS-like schemes can resolve. In this work, calculations were considered ``converged'' when the largest absolute matrix element of the matrix commutator $\mathbf{F}\mathbf{D}\mathbf{S}-\mathbf{S}\mathbf{D}\mathbf{F}$ was smaller than $5 \times 10^{-4}$. This particular choice of convergence threshold is however not critical for the reported results, since the calculations that failed to converge due to vanishing gaps were typically very far from reaching this criterion. \subsection{Molecules from the protein data bank \label{sec:pdbdirect}} Table~\ref{tbl:results_631gss} shows computed HOMO-LUMO gaps for a set of protein-like molecular systems with geometries taken from the protein data bank (PDB)~\cite{pdb}. In cases where the PDB file contains more than one structure, the one labeled ``model 1'' was used. The 17 structures in Table~\ref{tbl:results_631gss} were selected in order to give examples of various types of protein-like systems, with the requirement that positions of hydrogen atoms should be included in the PDB file. 
The net charge of each molecule, shown in the fourth column in Table~\ref{tbl:results_631gss}, was chosen after performing a set of HF/3-21G calculations for different charges. For each system, the charge that gave the largest HOMO-LUMO gap was chosen. Calculations were performed using six different KS-DFT functionals as well as HF. The employed density functionals include the pure functionals LDA (SVWN5), BLYP, and PBE as well as the hybrid functionals B3LYP, PBE0, and BHandHLYP with HF exchange fractions of 20\%, 25\%, and 50\%, respectively. The Gaussian basis set 6-31G** was used. \begin{table} \begin{tabular}{l@{$\quad$}lrr@{$\qquad$}rrrrrrr} \hline \hline & & & & \multicolumn{7}{c}{Computed HOMO-LUMO gap [eV]} \\ \cline{5-11} PDB ID & Type & atoms & charge & HF & BHandHLYP & PBE0 & B3LYP & BLYP & PBE & LDA \\ \hline 2P7R & biosynthetic protein & 73 & 0 & 12.03 & 7.23 & 4.65 & 4.16 & 2.12 & 2.10 & 2.12 \\ 1BFZ & peptide & 87 & 0 & 11.96 & 8.13 & 6.18 & 5.77 & 3.97 & 3.93 & 3.79 \\ 2IGZ & antibiotic & 147 & 0 & 11.81 & 7.99 & 5.70 & 5.27 & 3.27 & 3.23 & 3.12 \\ 1D1E & neuropeptide & 243 & +3 & 10.14 & 6.05 & 3.96 & 3.47 & 1.56 & 1.58 & 1.54 \\ 1SP7 & structural protein & 352 & +3 & 9.13 & 4.06 & 1.65 & 0.87 & & & \\ 1N9U & signaling protein & 182 & 0 & 9.12 & 3.80 & 1.12 & 0.57 & & & \\ 1MZI & viral protein & 225 & -3 & 8.77 & 3.80 & 1.29 & 0.54 & & & \\ 1XT7 & antibiotic & 217 & +1 & 8.51 & 4.85 & 3.32 & 2.65 & 1.02 & 1.24 & 1.30 \\ 1PLW & neuropeptide & 75 & 0 & 7.25 & 2.31 & 0.36 & 0.29 & & & \\ 1FUL & peptide & 135 & -1 & 6.95 & 1.85 & 0.20 & 0.16 & & & \\ 1EDW & peptide & 399 & -1 & 6.89 & 2.02 & 0.26 & 0.21 & & & \\ 1EVC & bacterial toxin & 109 & -2 & 5.82 & 1.14 & 0.30 & 0.24 & & & \\ 1RVS & de novo protein & 172 & 0 & 5.60 & 0.60 & & & & & \\ 2FR9 & peptide toxin & 194 & -2 & 5.48 & 0.55 & 0.26 & 0.21 & & & \\ 2JSI & hormone & 198 & -1 & 5.26 & 0.66 & 0.24 & 0.19 & & & \\ 1LVZ & peptide-binding protein & 185 & 0 & 5.05 & 0.71 & 0.31 & 0.25 & & & \\ 1FDF 
& signaling protein & 416 & +1 & 3.64 & 0.25 & 0.13 & 0.11 & & & \\ \hline \hline \end{tabular} \caption{Results of HF and KS-DFT calculations using the Ergo program on a set of protein-like molecules. Basis set: 6-31G**. Blank space indicates that no converged result was obtained. \label{tbl:results_631gss}} \end{table} The most important conclusion from the results shown in Table~\ref{tbl:results_631gss} is that in many cases, calculations using pure functionals fail to converge for molecules larger than a few hundred atoms. Note that the blank spaces in the columns for BLYP, PBE, and LDA indicate not only that no gap value was obtained, but that those calculations did not give any meaningful results at all since they failed to converge. For the calculations that did converge, the computed HOMO-LUMO gaps are strongly correlated to the fraction of HF exchange included in the functional, with a large fraction of HF exchange giving a large gap. Thus, for each molecule, the HF method yields the largest HOMO-LUMO gap, while the BHandHLYP functional consistently gives a larger gap than PBE0 and B3LYP. The pure functionals give much smaller gaps; in many cases, no converged results were obtained for the pure functionals due to vanishing HOMO-LUMO gaps. For the 1RVS system, the B3LYP and PBE0 calculations also failed to converge. To check the basis set dependence, calculations with larger basis sets were also performed for the smaller systems. The larger basis set results indicate that the computed HOMO-LUMO gaps are not critically dependent on the basis set. For example, calculations using the cc-pVTZ basis set for the 1BFZ, 1EVC, 1PLW, and 2P7R molecules gave HOMO-LUMO gaps that differed by less than 15\% compared to the 6-31G** results. In some cases, a larger basis set gives a smaller gap. Calculations using the smaller basis set 3-21G were also performed. 
Those results indicate that the 3-21G basis set already gives similar gaps and convergence behavior for the different functionals. \subsection{Size dependence \label{sec:sizedependence}} The results in Table~\ref{tbl:results_631gss} indicate that the convergence problems due to small gaps are to a large extent system dependent. However, for even smaller protein-like fragments, consisting of only a few amino acids, calculations typically converge without problems even for pure functionals. Therefore there is reason to believe that the convergence problems increase with increasing molecular size. To further assess the size dependence, calculations were also performed for a sequence of polyproline I helix molecules of increasing length. The model helix geometries were generated using the Gabedit program~\cite{allouche_gabeditgraphical_2011}, applying the ``Build Polypeptide'' function with the ``Polyproline I'' conformation followed by the ``add hydrogens'' command. Computed HOMO-LUMO gaps for the polyproline I helix systems obtained using the KS-DFT functionals BLYP, B3LYP, and BHandHLYP as well as HF are shown in Figure~\ref{fig:polyproline_i_gaps}. The size dependence is clearly seen: for any given functional, the computed HOMO-LUMO gap decreases with increasing helix length, and because the computed gaps for pure functionals are so small, those calculations fail to converge for sizes larger than six proline units. \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{images/polyproline_i_gap_plot} \end{center} \caption{Computed HOMO-LUMO gaps for polyproline I helix molecules. Basis set: 6-31G**. The BLYP calculations for helices with 7-10 proline units failed to converge. \label{fig:polyproline_i_gaps} } \end{figure} As seen in Figure~\ref{fig:polyproline_i_gaps}, the problem of vanishing HOMO-LUMO gaps is in this case clearly related to the system size. However, the system size is not the only important factor. 
For example, performing the corresponding test calculations for helices in the polyproline II conformation gives sizable gaps even for very large systems. Apparently the problem of vanishing gaps is not seen for the fairly stretched out polyproline II helices, but the problem does appear for the more compact polyproline I conformation. \subsection{Including solvent water molecules \label{sec:includingsolvent}} The calculations in sections~\ref{sec:pdbdirect} and~\ref{sec:sizedependence} were done for isolated protein-like systems without any surrounding water molecules. This is not completely realistic, since in real biological systems protein molecules are typically dissolved in water, and the solvent water molecules can have a significant effect on both the molecular geometry and the electronic structure of the protein. In this section, we consider the effect of explicitly including solvent water molecules when performing KS-DFT calculations for protein-like systems. Since structures from the PDB in general do not include solvent molecules, model structures including solvent molecules were generated by molecular dynamics (MD) simulations at standard temperature and pressure using the Gromacs program~\cite{gromacs-jctc-2008}. The AMBER03 force field and the TIP3P water model were used. The MD simulations were done with the ``position restraints'' option in the Gromacs program, thus keeping the protein geometry reasonably close to the original geometry from the PDB, but allowing some motion and complete freedom of the surrounding solvent water molecules. MD simulations were done for four of the systems from Section~\ref{sec:pdbdirect}: 1FUL, 1LVZ, 1PLW, and 1RVS. For each of them, a number of MD runs were performed, generating ten uncorrelated MD snapshots. From each snapshot a model system with solvent was created by including all water molecules within 4~{\AA} from the solute. 
For comparison, corresponding model structures without solvent were also generated for the same set of MD snapshots. The structures without solvent differ slightly from the original PDB structures as the molecules moved during the MD simulations. Figure~\ref{fig:four_pdb_mols_gaps_with_and_without_h2o} shows computed HOMO-LUMO gaps for the model systems generated from MD simulations. To reduce the computational effort, these calculations were done using the 3-21G basis set. Comparisons to larger basis set calculations done for a few cases indicate that the effect of this limited basis set is not critical; qualitatively similar results would probably be obtained with a larger basis set. \begin{figure} \begin{center} \subfigure[$\, $ Without surrounding water molecules \label{fig:four_pdb_mols_gaps_without_h2o}]{ \includegraphics[width=0.49\textwidth]{images/gapplot_mdmod0_dist1_0_dist2_00}} \subfigure[$\, $ With surrounding water molecules \label{fig:four_pdb_mols_gaps_with_h2o}]{ \includegraphics[width=0.49\textwidth]{images/gapplot_mdmod0_dist1_4_dist2_04}} \end{center} \caption{Computed HOMO-LUMO gaps for protein-like systems without and with surrounding water molecules. Basis set: 3-21G. Several of the BLYP calculations failed to converge even with surrounding water molecules. \label{fig:four_pdb_mols_gaps_with_and_without_h2o} } \end{figure} Figure~\ref{fig:four_pdb_mols_gaps_without_h2o} shows computed gaps for structures without surrounding solvent molecules. As can be expected from the results of Section~\ref{sec:pdbdirect}, the BLYP calculations here give vanishing gaps and therefore fail to converge. Figure~\ref{fig:four_pdb_mols_gaps_with_h2o} shows that the inclusion of explicit solvent molecules in the calculation in general gives a larger gap. However, in several cases the BLYP calculations still fail to converge. 
There is some randomness; BLYP calculations may or may not converge depending on the positions of included solvent molecules in that particular MD snapshot. In the test calculations presented in Figure~\ref{fig:four_pdb_mols_gaps_with_h2o}, water molecules up to 4~{\AA} from the solute were included. One may of course include more solvent molecules, but doing so does not seem to solve the problem. In fact, vanishing gaps for pure functionals are a problem also for water clusters, as shown in Figure~\ref{fig:h2o_clusters_gaps}. The water cluster geometries were generated by including all water molecules within a certain radius from a snapshot from an MD simulation at standard temperature and pressure. The problem of pure functionals giving vanishing gaps for water clusters was reported previously \cite{sparsity2011}. \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{images/h2o_clusters_gap_plot} \end{center} \caption{Computed HOMO-LUMO gaps for water clusters. Basis set: 3-21G. The BLYP calculations for water clusters of radius 15-16~{\AA} failed to converge. \label{fig:h2o_clusters_gaps} } \end{figure} The computed gaps in Figure~\ref{fig:h2o_clusters_gaps} decrease rather drastically at 13-16~{\AA} radius, but this is a coincidence for the particular MD snapshot considered here; if continuing to larger clusters using HF or hybrid functionals, the gaps tend to stabilize \cite{linmemHF,linmemDFT}. However, pure functionals are not straightforwardly applicable for water clusters generated in this way. Therefore, embedding a protein-like molecule in water by including explicit water molecules up to some radius cannot be expected to solve the convergence problems due to vanishing gaps. In order to achieve converged results with pure functionals, the water molecules outside the domain of the electronic structure calculation must be modeled in some way, as will be seen in the next section. 
\subsection{Including point charges representing solvent water molecules outside computational domain \label{sec:withpointcharges}} Previous work by Cabral do Couto et al.~\cite{CabraldoCouto-gaps-brazilian-2004} has shown that for water clusters extracted from a larger simulation, orbital energies are strongly affected by the water molecules surrounding the clusters, and that such surface effects can to some extent be corrected for by including point charges representing the surrounding molecules. Cabral do Couto et al. found that HOMO-LUMO gaps are significantly increased when adding point charges representing surrounding water molecules. In this section, the approach of adding such point charges is applied to the case of protein molecules embedded in water. The test systems used in this section are the same as those in Section~\ref{sec:includingsolvent} except that now water molecules outside the electronic structure calculation domain are included via point charges. These ``outer'' water molecules are not explicitly included in the electronic structure calculation, but they are represented by point charges corresponding to their simple point charge (SPC) distribution. That is, oxygen and hydrogen atoms are represented by point charges of -0.82 and +0.41, respectively. Outer water molecules up to 10~{\AA} away from the studied system were included. This gives a large number of point charges (for 1RVS, around 4800 point charges were used) but the extra computational effort is anyway small since the point charges only affect the core Hamiltonian matrix. The expensive Coulomb, HF exchange, and exchange-correlation parts of the calculation are not affected by the added point charges. Figure~\ref{fig:four_pdb_mols_gaps_with_and_without_h2o_with_spc} shows computed HOMO-LUMO gaps for the same systems as in Figure~\ref{fig:four_pdb_mols_gaps_with_and_without_h2o}, but now including SPC point charges as described above. 
Note that in the calculations shown in Figure~\ref{fig:four_pdb_mols_gaps_with_h2o_with_spc}, water molecules are included in two ways: water molecules up to 4~{\AA} from the solute are explicitly included in the electronic structure calculation, and additional water molecules between 4 and 14~{\AA} away from the solute are represented by point charges. \begin{figure} \begin{center} \subfigure[$\, $ Without surrounding water molecules \label{fig:four_pdb_mols_gaps_without_h2o_with_spc}]{ \includegraphics[width=0.49\textwidth]{images/gapplot_mdmod0_dist1_0_dist2_10}} \subfigure[$\, $ With surrounding water molecules \label{fig:four_pdb_mols_gaps_with_h2o_with_spc}]{ \includegraphics[width=0.49\textwidth]{images/gapplot_mdmod0_dist1_4_dist2_14}} \end{center} \caption{Computed HOMO-LUMO gaps for protein-like systems without and with surrounding water molecules. Basis set: 3-21G. In both cases, water molecules outside the computational domain were represented by SPC point charges. \label{fig:four_pdb_mols_gaps_with_and_without_h2o_with_spc} } \end{figure} Judging from Figure~\ref{fig:four_pdb_mols_gaps_with_and_without_h2o_with_spc}, the approach of including point charges representing water molecules outside the electronic structure calculation domain appears to solve the convergence problems for pure functionals: when point charges are included in this way, BLYP calculations give HOMO-LUMO gaps of more than 0.9~eV in all studied cases. This approach also gives convergence for the polyproline I helix systems considered in Section~\ref{sec:sizedependence}. Thus, it appears that despite the discouraging results of sections~\ref{sec:pdbdirect} and~\ref{sec:sizedependence}, calculations using pure functionals can be done for protein-like systems provided that surrounding solvent water molecules are accounted for by somehow taking their charge distribution into account. 
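The partitioning of solvent molecules into an explicit shell and an outer point-charge shell can be sketched as follows. Only the SPC charge values (-0.82 for oxygen, +0.41 per hydrogen) and the 4 and 14~{\AA} shell radii are taken from the text; the coordinate handling and the minimum-distance assignment criterion are illustrative assumptions, not a description of the actual workflow.

```python
import numpy as np

# SPC charges from the text: oxygen -0.82, hydrogen +0.41.
SPC_CHARGE = {"O": -0.82, "H": +0.41}

def partition_waters(solute_xyz, waters, r_explicit=4.0, r_charges=14.0):
    """Split waters into an explicitly treated shell and an outer shell
    represented by SPC point charges.  Each water is (elements, coords)
    with coords as a (3, 3) array [O, H, H].  A water is assigned by the
    minimum distance from any of its atoms to any solute atom (an assumed
    criterion); waters beyond r_charges are discarded."""
    explicit, point_charges = [], []
    for elements, coords in waters:
        dmin = np.min(np.linalg.norm(
            coords[:, None, :] - solute_xyz[None, :, :], axis=-1))
        if dmin <= r_explicit:
            explicit.append((elements, coords))    # enters the QM region
        elif dmin <= r_charges:
            for el, xyz in zip(elements, coords):  # represented by charges
                point_charges.append((SPC_CHARGE[el], xyz))
    return explicit, point_charges
```

Since each water carries charges summing to zero, the point-charge shell is overall neutral; as noted above, these charges enter only the core Hamiltonian, so the extra cost is small.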
If solvent water molecules are explicitly included in the electronic structure calculation, surface effects must in any case be handled by including the charge distribution of water molecules further away. In this section, surface effects were handled using point charges in the same way as in the work of Cabral do Couto et al.~\cite{CabraldoCouto-gaps-brazilian-2004}. This is straightforward to implement and required only a minor modification to the Ergo program~\cite{m-ergo} that was used to perform the calculations. Another way of taking effects of the surrounding water into account would be to use a polarizable continuum model, although that possibility was not explored here. The point charge embedding approach was here considered as a tool to obtain a converged solution. Of course, such point charges are a very crude approximation of the solvent atoms they are supposed to represent. One should therefore be careful when interpreting results of such calculations, in particular regarding the electronic structure near the boundary where point charges were added. \section{Concluding remarks} All calculations reported in this work were performed using Gaussian basis sets, far from the basis set limit. To better assess the basis set dependence, it would be desirable to also perform calculations with other types of basis sets, e.g. plane waves. The results obtained here for protein fragments are in line with previous findings that pure KS-DFT functionals give vanishing gaps for large polypeptide and water cluster systems~\cite{linmemDFT}. Also, vanishing PBE gaps have been reported for plane-wave calculations on semiconductors~\cite{PhysRevB.81.153203.PBEnogap}. 
Although the problem of pure KS-DFT functionals underestimating HOMO-LUMO gaps is well known in the literature~\cite{PhysRevLett.51.1884.perdew.gapProblem,PhysRevB.37.10159.sham.gapProblem,PhysRevB.53.3764.levy.gapProblem,salzner-gap-problem-1997,PhysRevB.78.235104.gapProblem,PhysRevLett.105.266802.gapProblem}, to the author's best knowledge the resulting convergence problems in self-consistency based calculations for protein-like molecules have received little, if any, attention previously. It should be noted that the calculations reported in this work were done for finite model systems. That is, periodic boundary conditions were not used. When using a finite model system to describe a protein in water solution, the domain must be truncated somewhere, and it is then important to handle surface effects in some way, for example as described in Section~\ref{sec:withpointcharges}. In a periodic calculation there is no boundary and thus no surface effects to worry about. Periodic calculations using pure KS-DFT for proteins have been reported for example by Sulpizi et al.~\cite{Sulpizi-large-protein-dft-2007}. The calculations in this work were all performed using the self-consistency approach, as described in Section~\ref{sec:method}. Therefore, a non-vanishing HOMO-LUMO gap was here necessary to achieve convergence. It should be noted that other optimization schemes for KS-DFT calculations exist, where a parametrization is used that ensures that the density matrix stays idempotent (and has the correct number of electrons), but where there is no guarantee that the orbitals defining the density are the ones having the lowest orbital energies. Then, a converged solution could in principle be found even if the gap vanishes. However, such approaches were not used in the present work. In the calculations reported in this work, significant effort was made to reduce the risk that the reported results are dependent on any particular choice of starting guess density. 
For cases that turned out to be difficult to converge, repeated calculations using several different starting guesses were tried, including densities obtained with other functionals and other basis sets. In those cases where ``convergence failure'' is reported, this does not mean merely that one particular calculation failed to converge, but that all calculation attempts using various starting guesses failed. All results reported in Section~III are from spin-restricted (closed shell) calculations. Additional spin-unrestricted calculations with different alpha- and beta-spin densities as starting guesses were performed for many of the studied cases. In those cases, spin-unrestricted calculations did not resolve the convergence problems. Test calculations using level shifting~\cite{levelshift} were performed for a few of the difficult cases in sections~\ref{sec:pdbdirect} and~\ref{sec:sizedependence}. If a large enough shift is employed, a converged result can sometimes be obtained. However, if the resulting density is used as a starting guess for a calculation without any level shift, different orbitals are occupied and convergence is not obtained. Also, the calculations with level shifting are very sensitive to the starting guess. In cases where the usual self-consistency based approach (without level shifting) fails due to vanishing gaps, calculations employing level shifting may converge to any of many possible final results with small differences in energy, depending on the starting guess. Such solutions found using level shifting typically do not obey the \emph{aufbau} principle; that is, the occupied orbitals are not the ones having the lowest orbital energies. This suggests that proper \emph{aufbau} solutions to the standard Kohn-Sham model may not exist for these cases; compare for example to the case of chromium carbide considered by Kudin et al.~\cite{kud-scus-cances-scf-2002}. 
In any case, using level shifting does not seem to be a satisfactory solution to the convergence problems, since the final result then becomes heavily dependent on the starting guess. Another way to achieve convergence in difficult cases would be to employ fractional finite-temperature occupation numbers in the same way as in calculations for metals~\cite{PhysRevLett.79.1337.metals,PhysRevB.79.241103.metals}. Alternatively, instead of standard KS-DFT methods one may consider employing the \emph{extended} Kohn-Sham model~\cite{cances-extended-ks-2001}, or using GW theory~\cite{PhysRevLett.96.226402.GWtheory}. However, application of such methods goes beyond the scope of the present work. Application of self-consistency based pure KS-DFT methods to protein-like molecules without including solvent often leads to convergence problems due to vanishing HOMO-LUMO gaps. Although such problems can be alleviated by including solvent molecules, they indicate that the applicability of such pure KS-DFT methods may be limited: if a protein-like system surrounded by air or vacuum is to be studied, it is unclear to what extent self-consistency based pure KS-DFT methods can be applied. Further investigation of this issue remains a subject of future work. \section*{Acknowledgements} The author wishes to thank E.~R.~Davidson, E.~H.~Rubensson, and P.~Sa{\l}ek for helpful discussions. The computations were performed on resources provided by the Swedish National Infrastructure for Computing (SNIC) at High Performance Computing Center North (HPC2N) and Uppsala Multidisciplinary Center for Advanced Computational Science (UPPMAX).
\section{Introduction} The ATLAS detector at the Large Hadron Collider (LHC) \cite{Aad:2008zzm} is a complex general-purpose particle detector with approximately 100 million readout channels. In common with many modern physics experiments, it combines a large number of distinct subcomponents: it features nine major detection technologies and a number of special-purpose systems. The data from specific components may not be usable for physics studies for certain periods of time. For example, a component may be at a non-nominal voltage, readout electronics may need to be reset, or the data may be noisier than usual. These situations arise both from the standard operation procedure and from unexpected failures. Because not all physics studies rely on all components and these issues are often transient, it is desirable to continue data acquisition even in a degraded state. It is also possible for data to be badly calibrated or otherwise not handled properly in the offline reconstruction, although the data may be recoverable later using updated software or calibrations. The gain from ignoring unnecessary components is not trivial: of 1.25 fb$^{-1}$ of data recorded by ATLAS between March and June 2011 at a center-of-mass energy of 7 TeV, analyses used between 1.04 and 1.21 fb$^{-1}$ depending on which detector components were required. For physics analysis it is essential to know about these degraded conditions and to be able to exclude data from periods where detector problems would affect measurements. Therefore the state of the detector (or the ``data quality'') must be monitored, recorded, and propagated to analysts. This task involves both core data management issues and human interface concerns. The detection of many problems is not fully automated and manual input is required. The opportunity for incorrect data entry or wrong interpretation must be minimized. 
The final decisions about what data to reject are often made long after the data are recorded, once the impact of various problems is better understood, so maximum flexibility should be a goal. Analysts should be able to access the current best assessment of what data to use easily, while still being able to perform detailed queries on detector status when necessary. An implementation of a ``flag''-based data quality assessment chain \cite{Adelman:2010zza}, similar in concept to those used in previous and current experiments (for example CMS \cite{Tuura:2010zza}), was in place at the start of ATLAS physics data collection. The main information stored in this system was \textit{decisions} about whether the data recorded at a given time was usable for analysis. This framework was used to produce the physics results of the 2010 data period. However, it became apparent that this flag system was inflexible and hard to handle in practice. We therefore replaced this system during the winter 2010-2011 LHC shutdown with a new one where the stored information is the \textit{problems} that might go into making a decision, with the decisions on whether to use the data or not moved to overlying (stored) logic. This seemingly simple change has made the evaluation of data quality at ATLAS much smoother; by tracking issues at a lower level than before, the overall process has been simplified. In this paper we describe the features of the new ``defect''-based system and the improvements made over the flag system. \section{The Data Quality Assessment Infrastructure and Process} In this section we describe aspects of ATLAS experimental operation relevant to data quality monitoring, the basic database framework used for storing data quality information, and the final output of the data quality evaluation process. The fundamental time granularity unit of detector configuration and status accounting in ATLAS is the ``luminosity block'' (LB). 
These are sequential periods within a run assigned by the trigger hardware and embedded in the data stream for each recorded collision. Their length is flexible (typically one minute long for 2011 data) and certain actions, such as a trigger configuration change request, will cause the start of a new luminosity block. Time-dependent configuration, status, and calibration (``conditions'') information for ATLAS is stored in Oracle and SQLite databases using the COOL technology developed by the LCG project \cite{COOL,Verducci:2008zzb}. A COOL ``folder'' consists of a set of ``channels'' sharing a folder-specific ``payload'' data structure, adapted to the information being stored (such as voltages, beam position, trigger configuration, and so on). Channels have a numeric ID, name, and description associated with them. Payloads can be stored on a channel-by-channel basis for specified ``intervals of validity'' (IOVs). The start and end of an IOV are 63-bit integers, which in ATLAS are used to encode (run, LB) pairs or timestamps. The information stored in COOL databases may be versioned via the ``tag'' mechanism: each tag acts as an independent set of IOVs and payloads for the channels of a folder. Tags can be ``locked'' to prevent their data from being altered and guarantee reproducibility. Data quality information is entered first in the special \texttt{HEAD} tag before being copied to other tags. A typical ATLAS run \cite{Onyisi:2010jm} begins before protons are injected into the LHC and ends after the beams have been removed from the machine. Outside of the ``stable beam'' period, when it is considered safe to run sensitive detectors in data-taking mode, the sensitive detectors are operated in a standby mode with reduced voltages and different readout configurations. 
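The (run, LB)-to-integer IOV encoding described above can be sketched in a few lines. Note that the exact ATLAS bit layout is not given in the text; the run-in-upper-32-bits split used here is an assumption for illustration only.

```python
# Sketch of packing a (run, luminosity block) pair into a single wide
# integer IOV bound, as the text describes for COOL. The 32/32 bit split
# is an assumed layout, not necessarily the exact ATLAS convention.

def pack_iov(run: int, lb: int) -> int:
    """Encode a (run, LB) pair into one integer IOV bound."""
    assert 0 <= run < 2**31 and 0 <= lb < 2**32
    return (run << 32) | lb

def unpack_iov(iov: int) -> tuple:
    """Decode an IOV bound back into its (run, LB) pair."""
    return iov >> 32, iov & 0xFFFFFFFF

assert unpack_iov(pack_iov(182013, 37)) == (182013, 37)
```

Encoding both run and LB in one monotonically increasing integer lets interval comparisons in the database reduce to ordinary integer range queries.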
During data taking, a number of online applications record the status of the ATLAS detector in the conditions database, including the trigger and data acquisition system (TDAQ) \cite{L1,TDAQ}, the detector control system (DCS) \cite{BarriusoPoy:2008zz}, and the online data quality monitoring framework (DQMF) \cite{Kolos:2008zz,CuencaAlmenar:2011zz}. The events from a specific set of triggers that are useful for detector monitoring are fed into an ``express stream'' which is promptly reconstructed in the ATLAS Tier-0 farm \cite{Elsing:2010zz}. As part of the reconstruction, monitoring plots are produced and distributed, and automated checks are performed on these plots by the offline DQMF \cite{Adelman:2010zza}. Various detector experts and physicist ``shifters'' review the information available to them and provide data quality feedback. They also use information from the monitoring to improve the calibrations used for the reconstruction of events from all triggers that starts 36 hours after the end of a run. Runs sharing similar conditions are grouped into ATLAS run periods and subperiods. Subperiods may be as short as one run, if for example there is a rapid evolution of the LHC beam structure between runs. After a subperiod is closed, it is given an additional review by detector experts, who sign off on the data quality assessment, certifying that all the runs have been inspected and all problems identified. At this point the data are released for analysis. A similar process is used after a reprocessing of previously-taken data with updated software. The main end product of the ATLAS data quality infrastructure is a set of ``good run list'' (GRL) files which contain the list of luminosity blocks approved for analysis. Several GRLs are produced, with different subdetectors required to be good depending on the needs of the corresponding physics studies. 
These are the final products of the data quality assessment process that are delivered to users, who use the file recommended for their class of analysis. The files use a common ATLAS XML interchange format, which is also used for example by the file provenance metadata architecture and the event-level metadata database \cite{Gallas:2010zz}. \section{Data Quality Databases in 2010 Operation \label{sec:2010}} The data quality databases implemented for 2010 operation \cite{Waller:2010zz} used a flag concept, where several different flag colors were used to reflect detector subcomponent status: green (ok), yellow (caution), red (bad), black (disabled), and grey (undecided). There were $\mathcal{O}(100)$ components to be flagged for every run. As the flags corresponded to specific subcomponents, the list of flags had very few changes after its initial definition. Several COOL folders were used, each containing flags from different sources (online and offline DQMF monitoring, DCS monitoring \cite{Aad:2010zz}, online and offline physicist shifters). Information from the different folders was merged to form the final output, which was primarily based on the flags set by the offline physicist experts and shifters. Flags to be used for analysis were copied to dedicated COOL tags. Several chronic issues were encountered with this system in operation: \begin{enumerate} \item The set of problems that corresponded to each flag and color was not self-documenting. Analysis users were largely unaware of what conditions caused data to be included and excluded from the GRLs and this information was not easy to discover. As multiple problems could result in the same flag color, a lot of training was necessary to ensure that different shifters and experts applied uniform criteria; inevitable personnel change thus posed a long-term consistency concern. \item All issues needed to be reduced within days to a limited and unchanging set of possible flag and color combinations. 
This required immediate judgment of the likely impact of newly-found problems on physics analysis. Several times, further investigation revealed the initial decisions to be incorrect, requiring retroactive changes to the database. \item Only storing the flag colors meant that a lot of useful information was not preserved. Without resorting to looking at more basic sources (e.g.\ monitoring histograms), detailed information was at best provided in the free-form text comment field of the flag payload. The only way to try to obtain lists of LBs subject to specific issues was to perform a text search, with attendant complications. \item The yellow flag proved troublesome. Instead of only having to define the single green/red boundary, we instead had to define both green/yellow and yellow/red boundaries. In fact, for the COOL tags used to generate analysis GRLs, yellow flags were not permitted, in order to reduce confusion. All yellow flags were required to be ``resolved'' to green or red. The semantics of yellow in the \texttt{HEAD} tag shifted over time from ``caution'' to ``expected recoverable''. As a result the relationship between the flags in the \texttt{HEAD} and analysis COOL tags was often not obvious. \item There was no single authoritative list of data quality flags. Lists were hard coded in several locations and adding a channel required a new ATLAS software release (and caused forward compatibility problems with older releases). \end{enumerate} It was decided to develop and implement an alternative system to address these difficulties. \begin{figure} \includegraphics[width=\linewidth]{defect_behavior} \caption{\label{fig:virtualdefect}A demonstration of how information is propagated from primary to virtual defects. A simplified set of defects is shown, along with their states for various luminosity blocks during a run. 
Shaded boxes indicate luminosity blocks in which the primary or virtual defect is reported to be present and corresponding events are to be rejected. An analysis would depend only on the Electron virtual defect, only referring to ``deeper'' defects if it had unusual requirements.} \end{figure} \begin{figure*} \includegraphics[height=3.9cm]{atlas_flow_old}\hfill% \includegraphics[height=3.9cm]{atlas_flow_new} \caption{\label{fig:comparison}A comparison of the information flow from data taking to physics analysis for the flag system used in 2010 data (left) and the defect system of 2011 data (right). The final output used for constructing good run lists is in the bottom right in both cases. The defect system is less complex than the flag system. Some flags are still present in 2011 operation to ease the transition, but their use is deprecated.} \end{figure*} \section{Concepts of the Defect Database} A ``defect'' is a deviation from a nominal detector condition. A defect is either present or absent for a given luminosity block. An arbitrary number of defects may be defined. A defect may be explicitly stored in a database or be computed on retrieval. Defects whose values are stored in the database are referred to as ``primary defects'' to distinguish them from ``virtual defects,'' which are defined combinations of primary defects or other virtual defects and only computed on access. Primary defects are those that are input to the system on a day-to-day basis, while virtual defect definitions evolve much more slowly. A virtual defect is specified by the other defects (primary or virtual) that it depends on. If any of its dependencies are present, a virtual defect is present for a luminosity block (the presence of primary and virtual defects has the same semantics). Virtual defects are used to combine primary defects into higher level concepts; for example, all muon trigger defects that are serious enough to exclude data from use are combined in a single virtual defect. 
The main purpose of virtual defects is to simplify defect database queries and to encapsulate the current best understanding of which primary defects correspond to problems where the corresponding data should not be used in physics analyses. A demonstration of virtual defect logic is shown in Figure~\ref{fig:virtualdefect}. A similar ``virtual flag'' concept existed for the flag system, but the combination logic was more complicated as flags had more possible states. The values of the primary defects and the definitions of the virtual defects are stored and versioned with the COOL tag mechanism. This ensures the reproducibility of database queries, while allowing defect values and virtual defect definitions to evolve as necessary. Within a single COOL tag, a virtual defect has a constant definition for all runs. The virtual defect definitions can be updated independently of the primary defect information as the understanding of the effect of detector problems improves. Because of this both the relevant primary and virtual defect tags must be specified during a retrieval. The flag system had a number of different parallel COOL folders storing information from different sources, which were merged to determine the final flags. We considered this unnecessary for the defect database, as any given defect should either be reliably automatically detected, or require manual input. There is therefore only one production instance of the defect database, filled both by people and software, and no merging steps are required. We emphasize that a defect need not be so serious as to cause data not to be used in analysis; it may serve as an issue tracking mechanism, or be mainly of interest for checks of possible systematic effects. It is also possible to ignore specific primary defects during the virtual defect computation, again to facilitate studies of systematic uncertainties. 
The defects carry some metadata with every entry, including a comment, the username of the person or ID of the automated process that filled the entry, and whether the problem is likely to be recovered later. The defect database concept addresses the concerns of Section~\ref{sec:2010} as follows: \begin{enumerate} \item There is one defect for each class of problem. The meaning of the defect is explained in the description field of the defect; if this is done clearly enough there should be no ambiguity. \item A new type of problem immediately gets a new defect. Its effect on the GRLs is handled by the virtual defects, which can be updated when a fuller picture of the impact of the problem is obtained. It is also not necessary to anticipate all problems in advance, as defects can be added as problems occur. \item All the information that was used to make decisions with the flag system is now explicitly available and easy to query. In particular, it is simple to determine the set of all data in which a defect was present. \item The stored information is binary (a defect is either present or absent). The ``expected recoverable'' meaning of the yellow flag is provided by a Boolean field in the defect. As there is no longer a resolution process required, making a COOL tag of the defects to be used to generate good run lists is as simple as copying the \texttt{HEAD} information. \item The defect database is self-describing. It was an explicit design requirement that the access application programming interface (API) should not add additional information beyond that in the database. \end{enumerate} \section{Implementation of the Defect Database} The defect database is implemented with two COOL folders, one for the primary defect data and the other for the virtual defect definitions. 
These two folders are versioned independently but their COOL tags can be tied together with the ``hierarchical tag'' mechanism, meaning only a single tag needs to be presented to the analysis users. As an optimization to cope with the large number of expected defect channels, the absence of any data for a defect for an interval of validity is considered equivalent to an absent defect. This optimization means that not only is the database smaller, but the demands on the shifters are reduced as well since they do not have to explicitly mark good data. A single API, written in Python, has been created that covers the vast majority of defect database creation, filling, query, and manipulation needs. The Python library is implemented in 1.3 thousand lines of code (kloc). An extensive suite of tests using the \texttt{nose} package \cite{nose} is run nightly to ensure that the library conforms to specifications. As the specifications were clearly defined before the package was written, a test-driven process allowed rapid development over a few days with confidence in code correctness. The API enforces certain validity conditions for input (e.g.\ virtual defects should only reference existing primary and virtual defects) and is the only approved input method for the defect database. For use in event reconstruction, the standard ATLAS Athena \cite{Calafiura:2005zz} C++ interface library is used to directly access the database. As the user interface software needed to be rewritten to handle the new defect system, we decided to take advantage of new Web 2.0 technologies to provide a more intuitive and responsive web application than the one previously used for the flag database. The new shifter application consists of 0.4 kloc of backend Python code running in a CherryPy web application server and 1 kloc of client-side Javascript using the Google Closure framework, replacing the 5.3 kloc of PHP code comprising the old application. 
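The absence-means-absent storage optimization described earlier in this section can be sketched as follows; the defect names and IOV ranges are invented for illustration.

```python
# Sketch of the sparse-storage convention noted above: only the IOVs in
# which a primary defect is PRESENT are stored, and any luminosity block
# not covered by a stored IOV is implicitly defect-free. Names and
# ranges here are hypothetical.

stored_iovs = {                     # defect -> list of (first_lb, last_lb)
    "SCT_STANDBY": [(1, 3), (118, 120)],
}

def defect_present(defect: str, lb: int) -> bool:
    """Absence of any stored row means the defect is absent."""
    return any(lo <= lb <= hi for lo, hi in stored_iovs.get(defect, []))

assert defect_present("SCT_STANDBY", 2)
assert not defect_present("SCT_STANDBY", 50)    # no row stored -> absent
```

Because good data needs no rows at all, both the database volume and the shifters' bookkeeping scale with the number of problems rather than with the amount of data taken.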
The fact that the defect database is the authoritative source of all information concerning defects allows the creation of a single administrative web interface for defect management. This interface allows defect creation, virtual defect creation and definition editing, and tag creation and updating. This application, hosted in the same server process as the shifter application, consists of 0.4 kloc of backend Python code and 0.8 kloc of client-side Javascript. There was no similar interface for the flag system. \sloppy Several defects not corresponding to detector problems have been added for bookkeeping purposes. A \texttt{NOTCONSIDERED} defect was initially set present for all luminosity blocks, and is then set absent for the LBs comprising a run when that run is reviewed by the data quality group. Due to the convention that the absence of defects indicates that there is no problem, a guard defect like this is necessary to avoid including runs in GRLs that are not yet reviewed. In addition, a set of \texttt{UNCHECKED} defects were created that serve as workflow management markers. These defects are all automatically set present when a data-taking run completes, and are unset by the shifter signoff procedure. Virtual defects that depend on the \texttt{UNCHECKED} defects will therefore reject data until the shifters and experts have reviewed it. The administrative interface will not permit the generation of official good run lists for a run period if any \texttt{UNCHECKED} defects are present. \fussy When transitioning from the flag system, we wanted to ensure minimal disruption to downstream consumers of data quality information. The interface between the data quality database and the users lies primarily in the GRL generation mechanism. We created new virtual defects with the same names as the old flags and grouped the new primary defects under these virtual defects. The non-green flags from 2010 data were also imported as defects. 
(A full retroactive filling of 2011 defects for 2010 was considered impractical.) We were largely able to avoid changes to the GRL generation configurations and retain the ability to generate GRLs for 2010 data with the defect database. A comparison of the information flow in the flag and defect database systems is shown in Figure~\ref{fig:comparison}. Some of the flag system COOL folders are still being filled, but now have no direct impact on GRL creation. As more confidence is gained with automatic detection of various problems, the relevant information is written directly into the defect database as well (implemented so far for portions of the DCS and offline DQMF information). \section{Operation of the Defect Database} The defect database has been used for the 2011 running. Integration into the data quality assessment workflow was smooth and user feedback very positive. As anticipated, new detector problems are entered into the database immediately, allowing their physics impact to be studied at a more relaxed pace while maintaining clear documentation of the affected data. Anecdotal evidence suggests that the frequency of user input errors has been reduced substantially, and that the removal of the resolution phase when preparing COOL tags for analysis has reduced turnaround time allowing data analysis to begin sooner. Care must be taken to avoid creating duplicate defects; this is achieved by restricting defect creation to a small set of experts. As of the accumulation of 1.25 fb$^{-1}$ of data in June 2011, there were 619 defects and 172 virtual defects defined. Including all COOL tags, the database contains approximately 33 MB of data, which promises good scalability for the future. 
Figure~\ref{fig:iovs} shows the mean number of intervals of validity per run (of whatever length) defined for primary defects in runs available for physics analysis at 7 TeV center of mass energy between March and June 2011; this corresponds to the number of rows that are inserted into the database. Most defects are rare and occur much less often than once per run. The defects reflecting when various components are in a standby state create the peak at 2 IOVs per run. There are a few defects that occur quite often, which reflect frequent but short (i.e.\ single LB) detector problems. Querying the database is quite fast. For example, querying all defects and virtual defects for the 1.25 fb$^{-1}$ of data recorded through June 2011 using the Python API takes less than 40 seconds, including the virtual defect computation. A single virtual defect, such as the barrel electron quality, takes under five seconds. To retrieve the full set of primary defects takes under a second, including database connection setup time. \section{Conclusion} The ATLAS experiment requires stringent documentation and tracking of detector problems that affect the usability of data for analysis. We have implemented a ``defect database'' system that allows straightforward entry and retrieval of specific types of problems, as well as combinatoric logic to determine which data should not be used for analysis due to specified issues. We have demonstrated that such relatively low-level issue tracking is practical even for an experiment of the complexity of ATLAS, and in fact more successful than storing only coarse decisions on the usability of data. \begin{figure} \includegraphics[width=\linewidth]{meandefects} \caption{\label{fig:iovs}A histogram of the mean number of occurrences (IOVs) recorded for each defect in runs available for physics analysis at 7 TeV center of mass energy between March and June 2011. 
The peak near 2 occurrences per run is due to detector components being in standby at the start and end of runs. ``Intolerable'' defects are those which will cause at least one analysis to reject the affected data.} \end{figure} \begin{acknowledgement} We thank our colleagues in ATLAS for their suggestions, encouragement, and cooperation during the construction of the defect system. This work was supported by the U.S.\ National Science Foundation and the U.K.\ Science and Technology Facilities Council. P.U.E.O.\ was partly supported by a Fermi Fellowship from the University of Chicago. \end{acknowledgement} \bibliographystyle{apsrev}
\section{Results and Discussion} Similar to Bi$_2$Se$_3$ and Bi$_2$Te$_3$, Bi$_2$Te$_2$Se forms a rhombohedral crystal structure with the space group $D_{3d}^{5}$ ($R\bar3m$), built from quintuple-layer (QL) units of Te-Bi-Se-Bi-Te, as depicted in Fig.~1(a). Inside the QL the bonds are predominantly ionic-covalent, while adjacent QLs are bound by van der Waals forces. Figure~1(b) shows an atomic-resolution image ($-50$~mV, $0.12$~nA) of a 5~nm$\times$5~nm area of the Bi$_2$Te$_2$Se surface. The scanning tunneling spectrum gives a measure of the local density of states near the Fermi energy, as shown in Fig.~1(c); the STS data were averaged over 10 spectra to improve statistics. The dashed lines show the approximate energy locations of the top of the bulk valence band (BVB), the Dirac point (DP), and the bottom of the bulk conduction band (BCB) around the $\bar{\Gamma}$ point. Figure~1(d) depicts the surface-state energy dispersion of Bi$_2$Te$_2$Se measured by ARPES at a photon energy of $h\nu=30$~eV [open circles indicate the band dispersion from our {\it ab initio} calculation (see Fig.~3), shifted downward by 0.24~eV to match the measured Dirac point position]. The DP energies from the ARPES and STS spectra are equal~\cite{Jia12}. Figure~2(a) shows the differential conductance ($dI/dV$) map at a bias voltage of $V_{s}=+750$~mV. It exhibits a standing wave spreading anisotropically around point defects. (All the spectroscopic maps at bias voltages from $+50$ to $+1250$~mV were obtained for the same surface without changing any other experimental parameters.) In order to obtain the momentum-space information and the scattering wave vectors, we have performed a Fast Fourier Transform (FFT) of the $dI/dV$ maps; see Figs.~2(b)--2(l). These scattering images provide information on the bias-dependent quasiparticle interference. 
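The FFT analysis of the conductance maps can be illustrated with synthetic data: the 2D power spectrum of a real-space $dI/dV$ map exposes the standing-wave (quasiparticle-interference) vectors $\mathbf{q}$. The map size, pixel count, and wave vector below are invented, not the experimental values.

```python
# Sketch (synthetic data) of extracting a scattering vector q from the
# 2D FFT power spectrum of a dI/dV map, as done for Figs. 2(b)-2(l).
# All numbers here are illustrative, not the measured parameters.
import numpy as np

n, l = 256, 5.0                           # pixels and map size in nm
x = np.arange(n) * (l / n)
X, Y = np.meshgrid(x, x)
q0 = 2 * np.pi * 3.0                      # assumed standing-wave vector (rad/nm)
rng = np.random.default_rng(0)
didv = np.cos(q0 * X) + 0.1 * rng.normal(size=(n, n))   # wave + noise

power = np.abs(np.fft.fftshift(np.fft.fft2(didv))) ** 2
qx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, d=l / n))

# The power spectrum peaks at +-q0 along the qx axis (the qy = 0 row):
peak_q = abs(qx[np.argmax(power[n // 2])])
assert abs(peak_q - q0) < 2 * np.pi / l   # within one q-space pixel
```

The resolution in $q$ space is set by the real-space field of view ($2\pi/l$), which is why large maps are needed to resolve small scattering vectors.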
For bias voltages below $+300$~mV the interference effect around the point defects is weak, and the FFT image shows a circular pattern with small $\mathbf{q}$ vectors, which mainly come from statistical noise. At $V_{s}=+400$~mV, flower-shaped patterns emerge [Fig.~2(b)] with six broad petals along $\bar{\Gamma}\bar{M}$. Note that the pattern becomes sharp and intense at bias voltages between $+550$ and $+850$~mV. Starting with $V_{s}=+950$~mV, the spots gradually broaden with increasing bias voltage. The evolution of the scattering vectors with $V_{s}$ is visualized by the FFT power profiles in Figs.~2(m) and 2(n). In Fig.~2(m), in the $\bar{\Gamma}\bar{M}$ direction (rightwards) the scattering vectors become larger as $V_{s}$ increases, while there is practically no scattering along $\bar{\Gamma}\bar{K}$ (leftwards). In Fig.~2(n) we show the ratio of the intensity profiles along $\bar{\Gamma}\bar{M}$ and $\bar{\Gamma}\bar{K}$. The intensity ratio suppresses the background and makes the scattering features clearer: we distinctly see the dispersion of $q$ with the bias voltage. \begin{figure*} \includegraphics{FIG3_Munisa_low.eps} \caption{ (Color online) (a) First-principles electronic structure of a 7-formula-unit slab of Bi$_{2}$Te$_{2}$Se. (b)-(c) The depth-momentum distribution of the charge density for the topological surface state (TSS) and the inner surface state (ISS). In view of the finite thickness of the slab, the sum of the densities of the degenerate pair of surface states located on the opposite surfaces of the slab is shown. (d)-(f) Spatially resolved Fermi surfaces. The color scale shows the constant energy cuts of the surface spectral function $ N(z,k_{\parallel})$. (g)-(i) Distribution of the spin polarization perpendicular to the surface; red and blue circles denote positive and negative spin polarization, and their sizes represent the magnitudes of the spin polarization. The spin polarization values are shown at some specific points. 
} \end{figure*} In order to elucidate the origin of scattering pattern and the effect of the helical spin texture of the TSS, we have performed a first-principles calculation of the electronic structure of a 7 formula units slab of Bi$_{2}$Te$_{2}$Se~\cite{method}. Figure~3(a) shows the band structure along $\bar{\Gamma}\bar{K}$ (leftwards) and $\bar{\Gamma}\bar{M}$ (rightwards). The magenta arrows show the energy and momentum ranges of the TSS, and the green arrows indicate the range of the inner surface state (ISS), which splits off from the top of the conduction band. Here, the DP is localized 0.065~eV below the calculated Fermi energy, i.e., the experimental energy scale is shifted by 0.24~eV relative to the theoretical scale. Figures~3(b) and 3(c) show the depth-momentum distribution (in the $\bar{\Gamma}\bar{K}$ direction) of the charge density $\rho(z,k_{\parallel})$ for the upper-cone TSS [Fig.~3(b)] and for the ISS [Fig.~3(c)]. The upper-cone surface state exists up to $k_{\parallel}=0.22$~\AA$^{-1}$, and the ISS between 0.08 and 0.2~\AA$^{-1}$. Figures~3(d)--3(f) show calculated momentum distributions of the spatially-resolved spectral density $N(E,{\mathbf k_\parallel})$ at three constant energies $E$. The function is defined as a sum over all (discrete) states $\lambda$ with energy $E$ and Bloch vector $\mathbf k_\parallel$ weighted with the probability $Q_{\lambda{\mathbf k_\parallel}}$ of finding the electron in this state in the surface region: $N(E,{\mathbf k_\parallel})= \sum_{\lambda}Q_{\lambda {\mathbf k_\parallel}}\delta(E_{\lambda {\mathbf k_\parallel}}-E)$. (For the sake of presentation, the $\delta$ function is replaced by a Gaussian of 0.05~eV full width at half maximum.) The integral $Q_{\lambda \mathbf k_\parallel}= \int\! |\psi_{\lambda {\mathbf k_\parallel}}(\mathbf r)|^2d{\mathbf r}$ over the surface region comprises two outermost atomic layers and vacuum. 
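The Gaussian-broadened spectral density defined above, $N(E,\mathbf{k}_\parallel)=\sum_\lambda Q_{\lambda\mathbf{k}_\parallel}\,\delta(E_{\lambda\mathbf{k}_\parallel}-E)$ with the $\delta$ replaced by a 0.05~eV FWHM Gaussian, can be written out directly. The state energies and weights in the sanity check are toy inputs, not calculated slab eigenvalues.

```python
# Toy implementation of the Gaussian-broadened surface spectral density
# N(E, k) = sum_lambda Q_(lambda k) * delta(E_(lambda k) - E) defined in
# the text, with the delta replaced by a Gaussian of 0.05 eV FWHM.
import numpy as np

FWHM = 0.05                                       # eV, as in the text
SIGMA = FWHM / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def spectral_density(E, energies, weights):
    """N(E, k) at one k: weight-summed, Gaussian-broadened delta peaks."""
    gauss = np.exp(-0.5 * ((E - energies) / SIGMA) ** 2)
    return float(np.sum(weights * gauss) / (SIGMA * np.sqrt(2.0 * np.pi)))

# Sanity check: a single state with unit surface weight integrates to ~1.
E_grid = np.linspace(-1.0, 1.0, 4001)
vals = np.array([spectral_density(E, np.array([0.0]), np.array([1.0]))
                 for E in E_grid])
assert abs(np.sum(vals) * (E_grid[1] - E_grid[0]) - 1.0) < 1e-3
```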
The angular distribution of the spin polarization perpendicular to the surface for the two surface states, TSS and ISS, is shown in Figs.~3(g)--3(i). Here the net spin density is integrated over one half of the slab, and the net spin is normalized to the electron charge in the integration region. The TSS is somewhat more strongly localized than the ISS [cf. Figs.~3(b) and 3(c)], and it exhibits a higher out-of-plane spin polarization. Most interestingly, the magnitude of the out-of-plane spin polarization of the TSS may be as large as 55\%. The bias-dependent quasiparticle scattering is characterized by scattering vectors that connect the $\mathbf{k}$ vectors of the initial and final scattering states at the constant energy contour (CEC). Figure~4(a) shows schematic CECs of the TSS. Three characteristic scattering vectors, denoted $\mathbf{q}_{1}$, $\mathbf{q}_{2}$, and $\mathbf{q}_{3}$, explain the features in Fig.~4(b). The most intense croissant-shaped features can only be explained by $\mathbf{q}_{2}$ and $\mathbf{q}_{2}^{\prime}$, which connect two flat segments of the contour as shown in Fig.~4(a). The other scattering features along the $\bar{\Gamma}\bar{M}$ direction, characterized by $\mathbf{q}_{1}$ and $\mathbf{q}_{3}$, can also be explained as due to the warping of the TSS. Scattering originating from the ISS can be excluded because its CEC has no parallel fragments to cause a large joint density of states, and its convex shape does not lead to the croissant-shaped structures. To clarify the relation between the experimental and theoretical results, the intensity maxima of the FFT power profiles in the $\bar{\Gamma}\bar{M}$ direction in Fig.~2(m) are compared with the $q$ values extracted from the slab calculation; see Fig.~4(c). By shifting the calculated points upward by 0.1~eV we were able to reproduce all the experimentally observed scattering features. 
(A discrepancy of the same order between two-photon photoemission measurements of the unoccupied Dirac cone and calculations was reported in Ref.~[12].) \begin{figure} \includegraphics{FIG4_Munisa_low.eps} \caption{ (Color online) (a) Schematic CEC with possible scattering vectors and (b) experimental FFT image at a bias voltage of $+750$~mV. The $\bar{\Gamma}\bar{M}$ direction is along the $x$-axis. (c) Dispersion of three scattering vectors from STM (filled symbols) and the slab calculation (open symbols).} \end{figure} The presence of the FFT features in the $\bar{\Gamma}\bar{M}$ direction and their absence in the $\bar{\Gamma}\bar{K}$ direction tells us that the scattering is strongly spin selective. This scattering scenario holds over the whole energy interval from $+300$ to $+1000$~mV above the Fermi energy, and no significant surface-to-bulk scattering is observed, in contrast to Bi$_2$Se$_3$, for which bulk-related scattering has been reported~\cite{Kim11}. This indicates that the coupling of the TSS with the bulk continuum states is negligible even in the unoccupied region, which energetically overlaps with the bulk conduction band. In conclusion, our scanning tunneling microscopy/spectroscopy experiments and first-principles calculations for Bi$_{2}$Te$_{2}$Se reveal a scattering pattern that originates from the strongly warped constant energy contours of the topological surface state with substantial out-of-plane spin polarization. The topological surface state is thus found to survive up to energies far above the Dirac point. This finding provides a deeper understanding of optically excited spin and charge dynamics at the surface of topological insulators. STM and ARPES measurements were performed with the approval of the Proposal Assessing Committee of HSRC (Proposals No.~11-B-40 and No.~10-A-32). This work was financially supported by KAKENHI (Grants No.~20340092 and No.~23340105), a Grant-in-Aid for Scientific Research (B) of JSPS, and by RFBR, research project No. 
13-02-92105 a. The authors acknowledge partial support from the Spanish Ministerio de Ciencia e Innovaci\'on (Grant No. FIS2010-19609-C02-02).
\section{Introduction} In theoretical and experimental physics, it is common practice to make linear approximations in order to solve non-linear problems. These approximations have given satisfactory results: within a certain margin of error, theoretical analysis and linear numerical methods agree with experimental observations \cite{Mantegna2000}. However, measurements are subject to the same linear approximations as the numerical methods, which presupposes the validity of the current models. Experimental data that do not match known models are often attributed to noise and removed from the average trend of the data. This deletion of noisy data is done because of the complexity of noise and the limited theoretical tools available to analyze it. In addition, models and experimental devices are constrained by implicit linearizations.\\ \\ Real physical phenomena are intrinsically non-linear, and their dynamics are modulated by a noisy environment. An example is the nervous system, a highly non-linear system embedded in a noisy biological environment and dependent on multiple interactions; for instance, ion-channel opening is triggered only by precise stimuli such as voltage, pH, or the binding of a ligand \cite{Zhou2010}. The nervous system is also a detection device whose input signals are strongly immersed in noise coming from the external environment. It acts as a decoder of noisy input-output information, computing with small, fluctuating, noisy signals. The enhancement and amplification of those fluctuations is a relevant candidate mechanism to explain perception \cite{bulsara1996tuning}. Hence, it is interesting to study Stochastic Resonance (SR), a phenomenon characteristic of non-linear systems, which enables computation with small fluctuating signals. 
There is also evidence of the role of SR in the functioning of the brain for the detection of weak signals, synchronization and coherence in neural connections, synapses and behavior in general \cite{gammaitoni1998stochasticLUCA}.\\ \\ The SR model was first proposed and numerically simulated by Roberto Benzi \cite{benzi1981mechanism}. Later, Luca Gammaitoni studied the SR model and presented the detailed theory behind stochastic resonance \cite{gammaitoni1998stochasticLUCA}. A special case of SR, called \emph{Ghost Stochastic Resonance}, was discovered by Oscar Calvo \cite{calvo2006ghostcircuit}: the system resonates maximally at a frequency at which the input signal carries no energy. SR is completely different from the resonance observed in linear systems. The most significant difference is the resonance frequency: in SR models it is set by the periodic input signal, while in linear resonance it depends on the structural properties of the system.\\ \\ There are analytical limitations to understanding how stochastic resonance depends on the parameters of the periodic input perturbation, the noise and the physics of the system. The number and dimensionality of the underlying stochastic and non-linear differential equations make them intractable in closed form. However, the experimental technique proposed in this research is a powerful tool to better understand and implement SR.\\ \\ SR models can be exploited in many physical systems such as lasers, SQUIDs and neural networks. In SR models, adding noise to the input stimulus in the sub-threshold regime can enhance the information sensing process in sensory systems. This is a remarkable property of SR models \cite{moss2004SRreviewapplications}.
One important application of SR models based on this property is in electro-optical devices, to enhance data acquisition from small signals that are embedded in noise. An example of this application was given by Bruno Ando \cite{ando1998threshold}. Another application of SR models is the enhancement of neural signals. A pioneer in this endeavour is Frank Moss \cite{moss2004SRreviewapplications}, who showed that SR models agree with neuron models and their properties. He argued that sources of noise can be important for coherence in the brain.\\ \\ SR systems are composed of three main components: a threshold value, a weak input signal that stimulates the system in the sub-threshold regime, and a noise signal. These components are ubiquitous in nature and their interaction facilitates SR. According to the stochastic differential equations that model the phenomenon, SR depends on the sub-threshold periodic input signal, the noise signal and the system structure; however, there is no analytical solution for SR models, so numerical simulations are used instead. SR models have an intrinsic error. This error originates from selecting the threshold value and can cause serious complications during measurements; hence, it should be reduced by optimizing the SR model \cite{ando1998threshold}. The following methods are employed in order to optimize SR models: \begin{itemize} \item The Inter-Spike Interval Histogram (ISIH) analysis \item The Signal to Noise Ratio (SNR) analysis \end{itemize} Both yield the optimum operating points of an SR model for a given noise value. By using these optimum points, the input signal is preserved and amplified. On the other hand, different types of noise, i.e., white and colored noise, can affect non-linear phenomena differently.
Thus, it is important to find the type and amplitude of the noise signal that maximize the performance of the SR model in amplifying the input signal.\\ \\ According to biological studies, SR plays an important role in the functioning of the brain for detecting weak input signals and synchronizing neural connections \cite{gammaitoni1989stochastic}. It is possible to observe the phenomenon in neurons, given that their function in the brain is the integration and processing of electrochemical signals. This constant activity in the interior of the brain creates a background of neuronal pulses and signal transmission that causes fluctuations in the membrane potential, creating a real source of noise \cite{reinker2004stochasticphd}. This noise is responsible for signal amplification by SR.\\ \\ The purpose of this paper is to obtain an efficient SR model for neurons in the presence of different types of noise. Such a model can amplify the weak input signal efficiently for easier detection. Thus, the main contributions of this research are: \begin{itemize} \item Investigating whether the non-linear behavior of neurons can be modeled by SR models. An SR model for an artificial neuron is then employed in order to analyze how the information of the weak input signal is conveyed. Although artificial neurons are used here, our long-term objective is to analyze biological neurons. \item Parameter optimization of the neuron model in order to amplify the weak input signal in the presence of different types of noise, i.e., white noise and colored noise. \item Finally, it is shown that in SR models pink noise is twenty times more effective than white noise at amplifying the weak input signal. \end{itemize} The rest of this paper is organized as follows. In Section 2, the theoretical background of the SR model and the constraints required to detect the weak input signal are explained. Moreover, two methods for analyzing the output of the SR model are introduced.
In Section 3, the detailed methodology to develop the neural network architecture from electrical devices is presented, together with the structure of the pink noise generator. In Section 4, the results of the power spectrum analysis and the signal to noise ratio analysis are used to compare the efficiency of the different types of noise. Finally, Section 5 contains the concluding remarks. \section{Theoretical background} The SR model can be developed by adding a weak periodic force, a bistable potential function and a noise signal to the components of the Brownian motion model \cite{Allison2003}. Hence, the SR model can be described as \cite{gammaitoni1998stochasticLUCA} \begin{equation} m\frac{d^2x}{dt^2}=F(t)-b\frac{dx}{dt}-\frac{dV(x)}{dx}+\xi(t), \label{eqn:SRDiffEq} \end{equation} where $F(t)$ is the periodic force, $\xi(t)$ is the noise signal and $V(x)$ is the bistable potential, defined as \cite{ando1998threshold} \begin{equation} V(x)=-a\frac{x^2}{2}+b\frac{x^4}{4}, \label{eqn:Pot} \end{equation} where $a$ and $b$ are constants that modify the shape of the potential as shown in Fig. \ref{f:Potencial}. \begin{figure}[t!] \centering \includegraphics[width=.5\textwidth,height=.2\textheight]{biestable.pdf} \caption{Bistable potential used in the SR model.} \label{f:Potencial} \end{figure} The constants $a$ and $b$ are related to the characteristics of the potential through \begin{equation} x_m=\sqrt{\frac{a}{b}}, \,\,\,\,\,\,\,\,\, \Delta V=\frac{a^2}{4b}, \label{eqn:PotPar} \end{equation} where $x_m$ and $\Delta V$ are shown in Fig. \ref{f:Potencial}. A weak periodic signal is introduced as the input of the SR model. It is assumed that the amplitude of this signal is not sufficient to move the particle from one well of the bistable potential to the other, as shown in Fig. \ref{f:Sdebil}. \begin{figure}[t!]
\includegraphics[width=.5\textwidth,height=.4\textheight]{FeebEff} \caption{a) A weak periodic signal to move the particle. b) The amplitude of this signal is not sufficient to move the particle from one side of the bistable potential to the other.} \label{f:Sdebil} \end{figure} \begin{figure}[t!] \includegraphics[width=.5\textwidth,height=.4\textheight]{effNsJ} \caption{a) A noise signal is added to the weak signal. b) The particle can cross from one side of the bistable potential to the other.} \label{f:SdebilNs} \end{figure} In addition to the weak periodic signal, a noise signal is added to the system. The noise signal enables the particle to cross from one side of the potential to the other, as shown in Fig. \ref{f:SdebilNs}. \begin{figure}[t!] \includegraphics[width=.5\textwidth,height=.3\textheight]{Resps} \caption{Response of the system: (a) The noise amplitude is weak, (b) the number of crossings from one side of the potential to the other is low. (c) The noise amplitude is optimum, (d) the crossings from one side of the potential to the other are synchronized with the weak input signal. (e) The noise amplitude is too high, (f) the crossings are so frequent that the information of the input signal is lost.} \label{f:Resps} \end{figure} Hence, the output of the system depends on the noise intensity. When the amplitude of the noise signal is low, as in Fig. \ref{f:Resps} (a), the rate at which the particle crosses is low, as shown in Fig. \ref{f:Resps} (b). On the other hand, when the noise amplitude is high, as in Fig. \ref{f:Resps} (e), the particle crosses from one side to the other frequently. In both situations the information of the input signal is lost. However, when the noise intensity is optimal, as in Fig. \ref{f:Resps} (c), the crossings of the particle are synchronized with the weak periodic signal, as shown in Fig. \ref{f:Resps} (d). The SR model works perfectly in this scenario. \begin{figure}[t!]
\includegraphics[width=.5\textwidth,height=.25\textheight]{Spikes.pdf} \caption{The output of an artificial neuron. The inactivity intervals are measured and saved for analysis.} \label{f:Spikes} \end{figure} Variations in the number of received neurotransmitters cause fluctuations in the membrane potential and create a source of noise \cite{reinker2004stochasticphd}, which can be responsible for signal amplification in the SR model. The output signal of a neuron is a train of Dirac delta functions, as shown in Fig. \ref{f:Spikes}. One way to analyze the SR model in a neuron is to measure the inactivity intervals between the Dirac delta functions in the response of the neuron. The collected responses from different neurons are then saved for histogram generation \cite{reinker2004stochasticphd}. \begin{figure}[t!] \includegraphics[width=.5\textwidth,height=.23\textheight]{ISIHt.pdf} \caption{Histogram of the inactivity intervals in the output spike train, proportional to the input signal's period.} \label{f:ISIHt} \end{figure} Fig. \ref{f:ISIHt} shows the histogram of the time intervals. Variations of the noise signal modify the histogram. When the noise amplitude is low, the inactivity intervals are too long and the majority of intervals are longer than the signal's period. On the other hand, when the intensity of the noise signal is high, the inactivity intervals are too short and the majority of intervals are shorter than the signal's period. However, when the amplitude of the noise signal is optimum, the inactivity intervals cluster at multiples of the input signal's period, as shown in Fig. \ref{f:ISIHt}. Thus, the SR model is tuned based on the histogram maximum corresponding to the signal's period: the peak of the histogram is studied as a function of the noise amplitude, and thus the optimum intensity of the noise signal for the SR model is obtained.
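As a toy illustration (not part of the paper's experimental method), the overdamped limit of Eq.~(\ref{eqn:SRDiffEq}) and the interval histogram described above can be sketched numerically; all parameter values below are illustrative assumptions, not fitted to the hardware:

```python
import numpy as np

def simulate_sr(a=1.0, b=1.0, amp=0.3, omega=2 * np.pi * 0.05, D=0.25,
                dt=0.01, n_steps=200_000, seed=0):
    """Euler-Maruyama integration of the SR equation in the overdamped
    limit (inertia neglected):
        dx/dt = a*x - b*x**3 + amp*sin(omega*t) + sqrt(2*D)*eta(t).
    Parameter values are illustrative, not taken from the paper."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = np.sqrt(a / b)                     # start in the right-hand well
    kicks = np.sqrt(2.0 * D * dt) * rng.standard_normal(n_steps)
    t = dt * np.arange(n_steps)
    for k in range(n_steps - 1):
        drift = a * x[k] - b * x[k] ** 3 + amp * np.sin(omega * t[k])
        x[k + 1] = x[k] + drift * dt + kicks[k]
    return t, x

def isih(t, x, bin_width):
    """Inter-Spike Interval Histogram: treat each sign change of x
    (a well-to-well crossing) as a 'spike' and histogram the waiting
    times between consecutive crossings."""
    crossings = t[1:][np.sign(x[1:]) != np.sign(x[:-1])]
    intervals = np.diff(crossings)
    n_bins = max(1, int(np.ceil(intervals.max() / bin_width)))
    return np.histogram(intervals, bins=n_bins,
                        range=(0.0, n_bins * bin_width))
```

At a well-chosen noise intensity $D$, the histogram peak sits near the forcing period $2\pi/\omega$ and its multiples, qualitatively reproducing Fig.~\ref{f:ISIHt}.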
This scheme is called ISIH analysis.\\ \\ Another useful method to quantify the SR model is the SNR analysis, which employs the Fourier transform. The Fourier transform of the output signal shows multiple peaks at multiples of the input signal's frequency. In this method, the peak corresponding to the input frequency is integrated according to the expression in (\ref{eqn:SNRint}) \cite{gammaitoni1998stochasticLUCA} for different noise amplitudes. Then, the derived values are plotted with respect to the noise amplitude to obtain the optimum intensity of the noise signal for the SR model. \begin{equation} \text{SNR}=\left[\displaystyle\lim_{\Delta \omega \to{0}}{ \displaystyle\int_{\Omega-\Delta\omega}^{\Omega+\Delta\omega} S(\omega)\, d\omega}\right]. \label{eqn:SNRint} \end{equation} \section{Methodology} \subsection{Artificial Neuron Circuit} In this subsection, the artificial neuron model employed for the SR analysis is described. The circuit of the artificial neuron is shown in Fig. \ref{fig:CirDiagram}, as taken from O. Calvo and D. R. Chialvo \cite{calvo2006ghostcircuit}. It is based mainly on a monostable Schmitt trigger, a device with a non-dynamical threshold value that selects input signals through comparison with this threshold. When the input signal overcomes the threshold value, the device generates a spike at the output. \\ \\ The Schmitt trigger can simulate the way neurons convey information by generating action potentials. An action potential is a roughly $100 \,\text{mV}$ oscillation in the electrical potential across the cell membrane which lasts for about $1 \,\text{ms}$; Fig. \ref{fig:ActionPot} shows its waveform. For a few milliseconds just after an action potential, it may be virtually impossible to generate another spike; this is called the refractory period, as shown in Fig. \ref{fig:ActionPot}. \begin{figure}[t!]
\centering \includegraphics[width=.5\textwidth,height=.3\textheight]{f7} \caption{Artificial neuron circuit based on the monostable Schmitt trigger.} \label{fig:CirDiagram} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=.45\textwidth,height=.25\textheight]{ap.pdf} \caption{The waveform of an action potential. The neuron has a threshold; when the membrane potential overcomes this threshold value, the neuron generates an action potential \cite{crossman2005neuroanatomy}.} \label{fig:ActionPot} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=.5\textwidth,height=.25\textheight]{Circuito.pdf} \caption{Schematic of the assembled artificial neuron circuit.} \label{fig:Circuit} \end{figure} In addition to the Schmitt trigger, the circuit contains units U1, U2 and U3. Unit U1 is an operational amplifier which receives the input signal and amplifies it in order to feed the first monostable through pin 1B. Unit U2 generates a pulse with a duration of T1 when the input signal overcomes its intrinsic threshold; the value of T1 is determined by the resistance and capacitance values. The output of unit U2 through pin 1Q simulates the output of the neuron. When the pulse generated by U2 reaches its falling edge, the second monostable, i.e., unit U3, fires and generates a pulse of duration T2. The output of pin 2$\bar{Q}$ of U3 works as the clear signal of the first monostable, so U2 does not generate a new action potential on the falling edge of every spike. This part of the circuit simulates the refractory period of neurons. This cycle is repeated as long as the stimulation signal is present.\\ \\ The experimental setup used to analyze the SR model is shown in Fig. \ref{fig:Setting}. A coherent signal generator is used to generate the periodic input signal. This signal is added to the noise signal generated by the noise source. The sum of these signals enters the artificial neuron circuit.
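The threshold-plus-refractory behavior just described can be mimicked in software. The following sketch is a simplified stand-in for the circuit, not a model of its electronics; the threshold and refractory values are assumptions chosen for illustration:

```python
import numpy as np

def threshold_neuron(signal, dt, threshold, t_refractory):
    """Minimal software analogue of the Schmitt-trigger neuron: emit a
    spike time whenever the input crosses the threshold from below, then
    ignore further crossings for the refractory period (the role played
    by monostable U3 in the circuit)."""
    spike_times = []
    last_spike = -np.inf
    for k in range(1, len(signal)):
        t = k * dt
        crossed_up = signal[k - 1] < threshold <= signal[k]
        if crossed_up and (t - last_spike) >= t_refractory:
            spike_times.append(t)
            last_spike = t
    return np.array(spike_times)

# A sub-threshold sine alone produces no spikes; adding noise does.
rng = np.random.default_rng(1)
tt = np.arange(0.0, 10.0, 1e-3)
weak = 0.5 * np.sin(2 * np.pi * 2.0 * tt)       # amplitude below threshold
silent = threshold_neuron(weak, 1e-3, 1.0, 5e-3)
noisy = threshold_neuron(weak + 0.4 * rng.standard_normal(len(tt)),
                         1e-3, 1.0, 5e-3)
```

Here `silent` stays empty while `noisy` contains spikes, reproducing the qualitative behavior of Figs.~\ref{f:Sdebil} and \ref{f:SdebilNs}.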
This circuit generates an output signal when the input overcomes its intrinsic threshold. The output port is connected to an oscilloscope, which saves the signal for later processing. \\ \\ \begin{figure}[t!] \centering \includegraphics[width=.5\textwidth,height=.5\textheight]{Setting.pdf} \caption{Experimental setup for the SR analysis. The upper left device is a coherent signal generator, the upper right one is a white noise generator, the circuit in the center is the artificial neuron and the bottom device is an oscilloscope used for data acquisition.} \label{fig:Setting} \end{figure} \subsection{Pink Noise Generation} In this paper, the colored noise is produced with a pink noise generator. Pink noise is generated by coupling two filters, as shown in Fig. \ref{fig:filtropink}. White noise is used as the input of these filters, which attenuate its high-frequency components to produce pink noise. To verify the correct generation of pink noise, the power spectrum and its characteristic slope on a logarithmic scale are analyzed. \begin{figure}[t!] \centering \includegraphics[width=.44\textwidth,height=.32\textheight]{fp} \caption{Structure of the filter for producing pink noise.} \label{fig:filtropink} \end{figure} \section{Results and Discussion} In this section, the results of the SR model analysis are presented, and the results for pink noise and white noise are compared with each other. \subsection{Power Spectrum Analysis} The Fourier transform is applied to the output signal of the circuit to obtain its power spectrum. It is worth mentioning that the Fourier transforms computed in MATLAB are not shown here, since those numerical results are not as accurate as the ones obtained with the spectrum analyzer.
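As a digital counterpart to the analog filter cascade of Fig.~\ref{fig:filtropink} (a stand-in sketch, not a model of that specific circuit), pink noise can be obtained by reshaping the spectrum of white noise so that its power falls off as $1/f$; the log-log slope check mentioned above then applies directly:

```python
import numpy as np

def pink_noise(n, seed=0):
    """Generate n samples of pink noise by scaling the Fourier amplitudes
    of white Gaussian noise by 1/sqrt(f), so the power spectrum falls off
    as 1/f (slope -1 on a log-log plot)."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                  # avoid dividing the DC bin by zero
    pink = np.fft.irfft(spectrum / np.sqrt(f), n)
    return pink / pink.std()

def loglog_slope(x):
    """Least-squares slope of the periodogram on a log-log scale; it
    should be close to -1 for pink noise and to 0 for white noise."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(len(x))
    return np.polyfit(np.log(f[1:]), np.log(psd[1:]), 1)[0]
```

For a $2^{16}$-sample realization, `loglog_slope(pink_noise(2**16))` comes out near $-1$, which is the same verification criterion applied to the hardware generator.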
Hence, the spectra obtained with the spectrum analyzer are used in the following analysis.\\ \\ Fig. \ref{fig:FFTW} and Fig. \ref{fig:FFTP} show the power spectra when white and pink noise are used, respectively. In these figures, the X axis corresponds to the frequency and the Y axis corresponds to the power spectrum amplitude. When the peak of the spectrum matches the input signal frequency, it is possible to conclude that the input noise amplitude is optimal. Thus, the noise amplitude is varied to find the optimum values. It can be seen that the power spectrum contains more jumps in the pink noise case than in the white noise case. \begin{figure}[t!] \centering \includegraphics[width=.5\textwidth,height=.25\textheight]{1} \caption{Power spectrum of the output signal in the presence of white noise with $2500 \, \text{mV}$ amplitude.} \label{fig:FFTW} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=.5\textwidth,height=.25\textheight]{3} \caption{Power spectrum of the output signal in the presence of pink noise with $2900 \,\text{mV}$ amplitude.} \label{fig:FFTP} \end{figure} By studying different power spectra, it can be shown that the spectrum amplitude increases when the noise intensity increases. However, when the noise amplitude reaches a specific point, the power spectrum reaches a maximum value. Beyond this point, the peak of the power spectrum starts to descend no matter how much the noise amplitude increases. \subsection{Optimal SNR} The power spectrum analysis is repeated for different noise values, and it can be seen that a maximum peak is conserved. This peak is selected and integrated using the expression in (\ref{eqn:SNRint}) for the different values of the noise amplitude. The obtained values are plotted with respect to the noise amplitude in Fig. \ref{fig:SNRW} and Fig. \ref{fig:SNRP} for white and pink noise, respectively.
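The peak integration of Eq.~(\ref{eqn:SNRint}) underlying these curves can be sketched numerically as follows; this is a minimal discrete version for illustration, whereas the paper integrates spectra taken from the analyzer:

```python
import numpy as np

def band_power(signal, fs, f0, half_width):
    """Discrete version of the SNR integral: integrate the one-sided
    power spectrum S(f) over the narrow band
    [f0 - half_width, f0 + half_width] centred on the drive frequency f0.
    fs is the sampling rate."""
    n = len(signal)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * n)   # periodogram estimate
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mask = (freqs >= f0 - half_width) & (freqs <= f0 + half_width)
    return psd[mask].sum() * (freqs[1] - freqs[0])

# Sweeping the noise amplitude and plotting band_power at the drive
# frequency against it reproduces the resonance curves qualitatively.
```

For a pure tone, essentially all of the band power concentrates at the tone frequency, so the integral at the drive frequency dominates the one taken at any off-resonance band.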
These figures show the amplification factor for the input signal versus different values of white and pink noise, respectively. \begin{figure}[t!] \centering \includegraphics[width=.5\textwidth,height=.25\textheight]{4} \caption{Signal to Noise Ratio analysis for white noise when the amplitude of the input periodic signal is $50 \,\text{mV}$.} \label{fig:SNRW} \end{figure} There is an important difference between white and pink noise. Although the measurements are made for the same voltages and under the same parameters, the maximum performance of the SR model is achieved at different optimum noise values. Furthermore, the range of noise values that maximizes the SR performance is wider for pink noise than for white noise, for which the optimum range is sharper. \begin{figure}[t!] \centering \includegraphics[width=.5\textwidth,height=.25\textheight]{P50mv} \caption{Signal to Noise Ratio analysis for pink noise with an amplitude of $50 \,\text{mV}$ for the input periodic signal.} \label{fig:SNRP} \end{figure} \subsection{Comparison between White and Pink Noise} Fig. \ref{fig:transforms} (a) and (b) show the power spectra of white and pink noise, respectively. It can be seen that the contribution of pink noise at the input signal frequency is 100 times smaller than that of white noise. This makes the result in Fig. \ref{fig:comp} surprising: although the contribution of pink noise at the input signal frequency is 100 times smaller than that of white noise, the amplification of the weak input signal with pink noise is 20 times larger than with white noise. This result is beneficial, since numerous physical and biological phenomena are immersed in pink noise. Here, I show that their amplification factor is high due to their nature and intrinsic characteristics.
It is important to note that theoretical, numerical and computational models have used white noise, since there is a simple mathematical algorithm to generate it, while pink noise lacks an equally simple generation algorithm. In view of this finding, the importance of pink noise for improving the amplification factor is clear, and it should be implemented in physical models to achieve higher accuracy. To the best of my knowledge, this finding has not been reported in the literature before. \begin{figure}[t!] \centering \includegraphics[width=.5\textwidth,height=.5\textheight]{2} \caption{(a) White noise power spectrum, (b) pink noise power spectrum.} \label{fig:transforms} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=.5\textwidth,height=.25\textheight]{Comparison} \caption{The contribution of pink noise at the input signal frequency is 100 times smaller than that of white noise; however, pink noise amplifies the weak input signal 20 times more than white noise.} \label{fig:comp} \end{figure} \section{Conclusion} \label{sec5} In this paper, the SR model for an artificial neuron was implemented and two schemes for analyzing the output signal were used. The SNR analysis method, which requires integrating the maximum peaks, is easier than the ISIH method, whose results are not as precise. Moreover, it is not recommended to compute the Fourier transform of the output signal in software, since slight parameter variations change the results rapidly; using the spectrum analyser directly is faster and more reliable.\\ \\ It was shown that the SR model amplifies the weak periodic input signal through noise and thus conserves the information of the input signal.
The signal is embedded in a noisy environment containing all frequency components, with amplitudes higher than the desired signal; in the detection process, the frequency of the input signal and its multiples are retrieved. All measurements were replicated to validate the reliability of the results. The SR model can still detect the input signal when its amplitude is decreased from 800 mV to 40 mV, confirming that the SR model can conserve weak signals and enhance their amplitude. Moreover, the most important conclusion is that an artificial neuron can be modelled as an SR system in the presence of white or pink noise. Pink noise provides a higher performance in terms of amplification factor. Furthermore, pink noise has a wider range of optimum values, while white noise has a narrow optimum range. Hence, it is concluded that neurons are more sensitive to signals carrying pink noise than to signals with white noise or without noise. Beyond artificial neurons, these results can be applied to any system with a threshold and excitable behaviour in which the detection of weak signals embedded in high-amplitude noise is desirable. Since sensory detection is limited by external and intrinsic neural noise, it is possible to add pink noise to signals that would otherwise be undetectable so that neurons can retrieve them. Thus, neurons can detect weak signals, and an extra-sensory capability is created in a natural way. \section{Acknowledgements} I would like to give special acknowledgement to Keyvan Aghababaiyan, who motivated me to publish the results of this research and who helped me translate and edit it. Special gratitude to Professor Juan Gabriel Ramirez, who guided and advised me during the experimental development of this research.
Thank you so much, Marco Gonzalez, for opening the doors of your laboratory to me at all times and without reservations, and for teaching me data acquisition and analysis techniques. Special thanks to Professor Rodolfo Llinas Riascos for inspiring me to study the nervous system and for motivating me to pursue its unanswered questions. Special gratitude to Professor Andres Reyes Lega, the main advisor of this research; his comments and unconditional support were the theoretical foundation of this project. This research was not funded by any institution.
1810.06764
\section{Introduction} In the past decades researchers have focused on iterative strategies to synthesize control policies \cite{bristow2006survey, c33, c34, c35, c6, c4, liu2013nonlinear, LMPC, mania2018simple, schulman2015trust, wu2017scalable, mohajerin2018infinite}. The main idea is to execute the control task, or a part of it, repeatedly, and to use the closed-loop data to automatically update the control policy. Each task execution is often referred to as a ``trial'' or ``iteration'' and may be performed in simulation or experiment. It is generally required that at each update the control policy guarantees safety. Furthermore, it is desirable that the closed-loop performance improves at each policy update and that the iterative scheme converges to a (locally) optimal steady-state behavior. Algorithms that iteratively update the control policy and satisfy the above properties have been extensively studied in the literature. Iterative Learning Control (ILC) is a control strategy that allows learning from previous iterations to improve the closed-loop tracking performance \cite{bristow2006survey}. In ILC, at each iteration, the system starts from the same initial condition and the controller objective is to track a given reference, rejecting periodic disturbances. The main advantage of ILC is that information from previous iterations is incorporated into the problem formulation at the next iteration, in order to improve the control policy while guaranteeing safety. Furthermore, it is possible to show that as the number of iterations increases the control policy converges to a steady-state (locally) optimal behavior \cite{c33, c34, c35, c6, c4, liu2013nonlinear}. Recently, we proposed an ILC algorithm called Learning Model Predictive Control (LMPC), where the controller's goal is to minimize a generic positive definite cost function \cite{LMPC}.
At each time step, the LMPC solves a finite-time optimal control problem, where the data from previous iterations are used to update the terminal constraint and the terminal cost, which approximates the value function. In the above-mentioned ILC schemes, the data from each iteration are used to update the control policy while guaranteeing safety and performance improvement. However, the computational complexity of these algorithms does not decrease once the policy update has converged, even though the controller applies the same or similar control actions at each iteration. Indeed, evaluating the control policy involves the solution of a model-based optimization problem. In this work we propose a model-free data-based policy, which may be used to reduce the computational burden of ILC algorithms that have reached convergence. Model-free iterative algorithms, such as policy search and $Q$-learning, have recently gained popularity. In policy search, the control policy is updated using derivative-free optimization \cite{recht2018tour} or gradient estimation \cite{schulman2015trust}. These algorithms have been successfully tested in simulation scenarios to perform complex locomotion tasks. For more details we refer to \cite{recht2018tour, mania2018simple, schulman2015trust, wu2017scalable, mohajerin2018infinite}. $Q$-learning is an approximate dynamic programming strategy where an optimal cost function for a state-input pair is learned from data \cite{bertsekas2005dynamic, recht2018tour}. The optimal cost function is usually approximated using a linear mapping of a state-dependent feature vector. These features may be arbitrary nonlinear functions of the states, see \cite[Chapter VI]{bertsekas2005dynamic} for details. In $Q$-learning, the policy is evaluated by minimizing the approximated value function at the current state with respect to the control input \cite[Chapter VI]{bertsekas2005dynamic}, \cite{recht2018tour}.
In all the aforementioned literature, it is important to distinguish between the strategy used to update the control policy and the method used to evaluate the current policy. This paper focuses on the latter problem. We propose a simple, perhaps the simplest, value function approximation strategy, which may be used to compute a control law from historical state-input data, regardless of the techniques used to generate the data. We build on~\cite{linearLMPC}, where we exploit stored input and state trajectories along with a user-defined cost to construct a piecewise-affine approximation of the value function. The value function approximation is defined as a convex combination of the costs associated with the stored closed-loop trajectories. In the present work, we propose to exploit the multipliers from this convex combination to extract the control action from the stored inputs. The proposed strategy needs to store the input and state trajectories, and may not be applicable when limited memory storage is available. Furthermore, we propose a local approximation of the value function, which allows us to further reduce the computational burden of the proposed policy evaluation method. Finally, we show that for linear systems subject to convex cost and convex constraints, the data-based policy guarantees safety, stability and performance bounds. We evaluate the proposed strategy on the Berkeley Autonomous Race Car (BARC) platform, and demonstrate that the data-based policy is able to match the performance of our model-based ILC algorithm, while being almost $30$x faster at computing the control inputs. The paper is organized as follows: in Section II we introduce the problem formulation. In Section III we describe the proposed approach: first we show how to use data to construct the safe set and the value function approximation; afterwards, we introduce the control design. The properties of the proposed approach are discussed in Section IV.
Finally, in Section V we test the proposed data-based policy in simulation and experiment, the latter on the Berkeley Autonomous Race Car (BARC) platform. \section{Problem Formulation} Consider the unknown deterministic system \begin{equation} \label{eq:System} x_{t+1} = A x_t + B u_t \end{equation} where $x_t \in \mathbb{R}^n$ and $u_t \in \mathbb{R}^d$ are the system's state and input, respectively. Furthermore, the system is subject to the following state and input constraints, \begin{equation} \label{eq:stateInputConstr} x_t \in \mathcal{X} \text{ and } u_t \in \mathcal{U}, ~\forall t \in \{0,\ldots, T \} \end{equation} where $T$ is the time at which the control task is completed. In the following we assume that closed-loop state and input trajectories starting at different initial states $x_0$ are stored. In particular, for $j \in \{0, \ldots, M\}$ we are given the following input sequences \begin{equation}\label{eq:givenIinputs} \begin{aligned} {\bf{u}}^j &= [u_0^j , \ldots, u_{T_j}^j] \end{aligned} \end{equation} and the associated closed-loop trajectories \begin{equation}\label{eq:givenClosedLoop} \begin{aligned} {\bf{x}}^j &= [x_0^j , \ldots, x_{T_j}^j] \\ \end{aligned} \end{equation} where $x_{t+1}^j = A x_t^j + B u_t^j$ and $T_j$ is the time at which the task is completed. These trajectories will be used to design a data-based policy for the unknown system \eqref{eq:System}. Finally, we define the cost-to-go associated with the $j$th closed-loop trajectory \begin{equation} \label{eq:RelalizedCost} J^j\big(x_0^j\big)= \sum_{k = 0}^{T_j} h(x^j_k, u^j_k), \end{equation} where $x_k^j$ and $u_k^j$ are the stored state and applied input to system \eqref{eq:System} at time $k$ of the $j$th iteration. \begin{assumption}\label{ass:feasibility} All $M+1$ input and state sequences in \eqref{eq:givenIinputs}-\eqref{eq:givenClosedLoop} are feasible and known.
Furthermore, we assume that each state sequence in~\eqref{eq:givenClosedLoop} converges to the origin and that the terminal input $u_{T_j}^j=0$. \end{assumption} \begin{remark} We have decided to focus on the linear system \eqref{eq:System} as this allows us to rigorously characterize the properties of the proposed approach. However, we underline that the computational cost associated with the proposed strategy is independent of the linearity of the controlled system. Thus, the proposed strategy can also be applied to nonlinear systems, as shown in Section~\ref{sec:expResults}. \end{remark} \begin{remark} We have decided to consider a regulation problem to streamline the presentation of the paper. In the Appendix, we show that the proposed strategy can be used to steer system~\eqref{eq:System} to a terminal control invariant set $\mathcal{X}_F$, without losing guarantees on safety and performance. \end{remark} \section{Proposed Approach} In this section we describe the proposed approach. First, we introduce the sampled safe set and the value function approximation computed from data, which were first introduced in \cite{LMPC} and \cite{linearLMPC}. Afterward, we show how these quantities are used to evaluate the data-based policy. \subsection{Safe Set} We define the collection of the $M+1$ closed-loop trajectories in \eqref{eq:givenClosedLoop} as the sampled \textit{Safe Set}, \begin{equation} \mathcal{SS} = \bigcup_{j = 0}^M \bigcup_{t = 0}^{T_j} x_t^j. \notag \end{equation} Notice that for all $x \in \mathcal{SS}$, there exists a sequence of control actions that steers the system to the origin \cite{LMPC}. Finally, we define the \textit{convex safe set} $\mathcal{CS}$ as \begin{equation}\label{eq:CS} \mathcal{CS} = \text{Conv}\big( \mathcal{SS} \big). \end{equation} $\mathcal{CS}$ will be used in the next section to define the domain of the approximation to the value function.
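As an illustration (not part of the original implementation), membership in $\mathcal{CS}$ can be checked with a small feasibility LP: $x \in \mathcal{CS}$ if and only if $x$ is a convex combination of the stored states. The sketch below assumes SciPy's \texttt{linprog}; the helper name and example data are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_safe_set(x, stored_states):
    """Check x in CS = Conv(SS) via an LP feasibility problem.

    stored_states: (n, K) array whose columns are all stored states x_k^j.
    """
    n, K = stored_states.shape
    # Find lambda >= 0 with sum(lambda) = 1 and stored_states @ lambda = x.
    A_eq = np.vstack([np.ones((1, K)), stored_states])
    b_eq = np.concatenate([[1.0], x])
    res = linprog(c=np.zeros(K), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * K, method="highs")
    return res.status == 0  # 0 = optimal (feasible), 2 = infeasible

# Example: three stored states spanning a triangle in R^2.
X = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
inside = in_convex_safe_set(np.array([0.25, 0.25]), X)   # interior point
outside = in_convex_safe_set(np.array([1.0, 1.0]), X)    # outside the hull
```

The same feasibility test is what makes the optimization problems below well-posed: they are feasible exactly when the query state lies in $\mathcal{CS}$.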
\subsection{Q-function} In this section we show how the stored data in~\eqref{eq:givenIinputs} and \eqref{eq:givenClosedLoop} are used to approximate the value function. First, given the stored states ${\bf{x}}^j$ and inputs ${\bf{u}}^j$ for $j \in \{0, \ldots, M\}$, we define the cost-to-go associated with each stored state $x_k^j$, \begin{equation} J^j_k(x_k^j) = \sum_{i=k}^{T_j} h(x_i^j, u_i^j). \notag \end{equation} The realized cost-to-go $J^j_k(x_k^j)$ is used to compute the \textit{Q-function} defined as \begin{equation}\label{eq:valueFunc} \begin{aligned} Q(x) = \min_{ \bm\lambda \geq 0} \quad & \sum_{j=0}^{M}\sum_{k=0}^{T_j} \lambda_k^{j} J^j_k(x_k^j) \\ \text{s.t.}\quad &\sum_{j=0}^{M}\sum_{k=0}^{T_j} \lambda_k^{j} = 1,\\ &\sum_{j=0}^{M}\sum_{k=0}^{T_j} \lambda_k^{j} x_k^j = x, \end{aligned} \end{equation} where $\bm\lambda = [\lambda_{0}^0, \ldots,\lambda_{T_0}^0, \ldots, \lambda_{0}^M, \ldots,\lambda_{T_M}^M]$. The Q-function $Q(\cdot)$ interpolates the realized cost-to-go over the convex safe set. Moreover, we underline that Problem~\eqref{eq:valueFunc} is a parametric LP and therefore $Q(x)$ is a piecewise-affine function of $x$ \cite{borrelli2017predictive}. Finally, we notice that the domain of $Q(\cdot)$ is the convex safe set $\mathcal{CS}$; indeed, for all $x \notin \mathcal{CS}$ the optimization problem \eqref{eq:valueFunc} is infeasible. \subsection{Data-Based Policy} We are finally ready to introduce the data-based policy. At each time $t$, we evaluate the approximation to the value function \eqref{eq:valueFunc} at the current state $x_t$ by solving the following optimization problem, \begin{equation}\label{eq:valueFuncEval} \begin{aligned} Q(x_t) = \min_{\bm\lambda_t \geq 0} \quad & \sum_{j=0}^{M}\sum_{k=0}^{T_j} \lambda_{k|t}^{j} J_k^j(x_k^j) \\ \text{s.t.}\quad &\sum_{j=0}^{M}\sum_{k=0}^{T_j} \lambda_{k|t}^{j} = 1,\\ &\sum_{j=0}^{M}\sum_{k=0}^{T_j} \lambda_{k|t}^{j} x_k^j = x_t,
\end{aligned} \end{equation} where $\bm\lambda_t = [\lambda_{0|t}^0, \ldots,\lambda_{T_0|t}^0,\ldots, \lambda_{0|t}^M, \ldots, \lambda_{T_M|t}^M]$.\\ Let \begin{equation}\label{eq:optimalSol} \bm\lambda_t^* = [\lambda_{0|t}^{0,*}, \ldots, \lambda_{k|t}^{j,*}, \ldots, \lambda_{T_M|t}^{M,*}] \end{equation} be the optimal solution at time $t$ to \eqref{eq:valueFuncEval}; then we apply to system \eqref{eq:System} the following input \begin{equation}\label{eq:policy} u_t = \pi(x_t) = \sum_{j=0}^{M}\sum_{k=0}^{T_j} \lambda_{k|t}^{j,*} u_k^j. \end{equation} In summary, the data-based policy \eqref{eq:valueFuncEval} and \eqref{eq:policy} computes the control input $u_t$ as a weighted sum of stored inputs, where the weights are given by the solution to the minimization problem \eqref{eq:valueFuncEval}. \subsection{Local Data-Based Policy} In this section we propose a local data-based policy, which can be used to limit the computational burden of Problem \eqref{eq:valueFuncEval} when a considerable amount of data is available. First, we define the local $Q$-function $Q_L(\cdot)$ as \begin{equation}\label{eq:localValueFunc} \begin{aligned} Q_L(x_t) = \min_{ \bm \lambda_t \geq 0 } \quad & \sum_{j=0}^{M}\sum_{k \in \mathcal{K}^j(x)} \lambda_{k|t}^{j} J_k^j(x_k^j) \\ \text{s.t.}\quad & \sum_{j=0}^{M}\sum_{k \in \mathcal{K}^j(x)} \lambda_{k|t}^{j} = 1,\\ &\sum_{j=0}^{M}\sum_{k \in \mathcal{K}^j(x)} \lambda_{k|t}^{j} x_k^j = x_t \end{aligned} \end{equation} where $\bm \lambda_t = [\lambda_{t^{0,*}_1|t}^0, \ldots,\lambda_{t^{0,*}_N|t}^0, \ldots, \lambda_{t^{M,*}_1|t}^M, \ldots, \lambda_{t^{M,*}_N|t}^M]$. The elements of the set $\mathcal{K}^j(x) = \{t^{j,*}_1, \ldots, t^{j,*}_N\}$ are defined as \begin{equation*} \begin{aligned} [t^{j,*}_1, \ldots, t^{j,*}_N] = \arg \min_{\mathbf t} & \quad \sum_{l = 1}^N ||x^j_{t_l} - x||_2 \\ \text{s.t.} & \quad t_i \neq t_j,~ \forall i \neq j \\ & \quad t_i \in \{0, \ldots, T_j \}, \forall i \in \{1, \ldots,N \}.
\end{aligned} \end{equation*} For the $j$-th trajectory, the set $\mathcal{K}^j(x)$ collects the indices of the $N$ stored points closest to the state $x$. Notice that $N \leq \min_{i\in\{0,\ldots,M\}} T_i$ is a user-defined parameter. Finally, we define the local data-based policy as follows: at each time $t$ we solve $Q_L(x_t)$ in~\eqref{eq:localValueFunc}. Then, given the optimal solution $ \bm \lambda_t^*$ to Problem~\eqref{eq:localValueFunc}, we apply the following input \begin{equation}\label{eq:localPolicy} u_t = \pi(x_t) = \sum_{j=0}^{M}\sum_{k \in \mathcal{K}^j(x_t)} \lambda_{k|t}^{j,*} u_k^j \end{equation} to system \eqref{eq:System}. \section{Properties} In this section we analyze the properties of the proposed data-based policy \eqref{eq:valueFuncEval} and \eqref{eq:policy}. We show that the proposed strategy guarantees safety, closed-loop stability and performance bounds. \begin{prop} \textit{(Feasibility)} Consider the closed-loop system~\eqref{eq:System} and \eqref{eq:policy}. Let Assumption~\ref{ass:feasibility} hold and let $\mathcal{CS}$ be the convex safe set defined in \eqref{eq:CS}. If the initial state $x_0 \in \mathcal{CS}$, then the data-based policy \eqref{eq:valueFuncEval} and \eqref{eq:policy} is feasible for all times $t\geq0$. \end{prop} \begin{proof} The proof follows from linearity of the system. \\ We assume that at time $t$ the system state $x_t \in \mathcal{CS}$; therefore the optimization problem \eqref{eq:valueFuncEval} is feasible. Let \eqref{eq:optimalSol} be the optimal solution to \eqref{eq:valueFuncEval}; then at the next time step $t+1$ we have \begin{equation*} \begin{aligned} x_{t+1} &= A x_t + B \sum_{j=0}^M\sum_{k=0}^{T_j} \lambda^{j,*}_{k|t} u^j_k \\ & = A \sum_{j=0}^M\sum_{k=0}^{T_j} \lambda^{j,*}_{k|t} x^j_k + B \sum_{j=0}^M\sum_{k=0}^{T_j} \lambda^{j,*}_{k|t} u^j_k \\ & = \sum_{j=0}^M\sum_{k=0}^{T_j} \lambda^{j,*}_{k|t} (A x^j_k + B u_k^j) \in \mathcal{CS}.
\end{aligned} \end{equation*} By Assumption~\ref{ass:feasibility} we have that \begin{equation*} \sum_{j=0}^M \lambda^{j,*}_{T_j|t} (A x^j_{T_j} + B u_{T_j}^j)=0 \end{equation*} and therefore \begin{equation*} x_{t+1} = \sum_{j=0}^M\sum_{k=0}^{T_j} \lambda^{j,*}_{k|t} (A x^j_k + B u_k^j) = \sum_{j=0}^M\sum_{k=0}^{T_j} \bar \lambda^{j}_k x^j_k \end{equation*} where $\forall j\in \{0, \ldots, M\}$ \begin{equation} \begin{aligned}\label{eq:feasibleSol} &\bar \lambda_0^j = 0, \\ &\bar \lambda_{k_j}^j = \lambda_{k_j-1|t}^{j,*}, \quad \quad \quad \quad \quad \forall k_j \in \{ 1, \ldots, T_j-1 \} \\ &\bar \lambda_{T_j}^j = \lambda_{T_j-1|t}^{j,*} + \lambda_{T_j|t}^{j,*} \end{aligned} \end{equation} is a feasible solution to the optimization problem \eqref{eq:valueFuncEval} at time $t+1$.\\ By assumption, at time $t=0$ the state $x_0 \in \mathcal{CS}$. Furthermore, we have shown that if at time $t$ the state $x_t \in \mathcal{CS}$, then at time $t+1$ the state $x_{t+1} \in \mathcal{CS}$ and the optimization problem \eqref{eq:valueFuncEval} is feasible. Therefore, by induction we conclude that $x_t \in \mathcal{CS} \subseteq \mathcal{X}, ~\forall t \in \mathbb{Z}_{0+}$ and that the optimization problem \eqref{eq:valueFuncEval} is feasible $\forall t \in \mathbb{Z}_{0+}$. \end{proof} The above \textit{Proposition 1} implies that the data-based policy \eqref{eq:valueFuncEval} and \eqref{eq:policy} satisfies the input constraints, and the closed-loop system \eqref{eq:System} and \eqref{eq:policy} satisfies the state constraints at all time instants, i.e., $u_t \in \mathcal{U}$ and $x_t \in \mathcal{X}, ~\forall t \in \mathbb{Z}_{0+}$. \begin{assumption}\label{ass:cost} The stage cost $h(\cdot, \cdot)$ is a continuous convex function and $\forall u \in \mathcal{U}$ it satisfies \begin{equation} \begin{aligned} h(0,u) = 0,\textrm{ and}~ h(x,u) \succ 0 ~ \forall ~ x \in&~{\mathbb R}^n \setminus \{0\}.
\notag \end{aligned} \end{equation} \end{assumption} \begin{prop} \textit{(Convergence)} Consider the closed-loop system~\eqref{eq:System} and \eqref{eq:policy}. Let Assumptions~\ref{ass:feasibility}-\ref{ass:cost} hold and let $\mathcal{CS}$ be the convex safe set defined in \eqref{eq:CS}. If the initial state $x_0 \in \mathcal{CS}$, then the origin of the closed-loop system \eqref{eq:System} and \eqref{eq:policy} is asymptotically stable. \end{prop} \begin{proof}In the following we show that the approximated value function $Q(\cdot)$ from~\eqref{eq:valueFuncEval} is a Lyapunov function for the origin of the closed-loop system \eqref{eq:System} and \eqref{eq:policy}. Continuity of $Q(\cdot)$ can be shown as in \cite[Chapter 7]{borrelli2017predictive}. Moreover, from \eqref{eq:RelalizedCost} and Assumption~2 we have that $Q(x) \succ 0 ~ \forall ~ x \in \mathcal{CS} \setminus \{0\}$ and $Q(0)=0$. Thus, we need to show that $Q(\cdot)$ is decreasing along the closed-loop trajectory.\\ By feasibility of Problem~\eqref{eq:valueFuncEval} from Proposition~1, we have that at time $t$ \begin{equation}\label{eq:lyapunovPart1} \begin{aligned} Q(x_t) &= \sum_{j=0}^M \sum_{k=0}^{T_j} \lambda_{k|t}^{j,*} {J}_k^j(x_k^j) = \sum_{j=0}^M \sum_{k=0}^{T_j} \lambda_{k|t}^{j,*} \sum_{i=k}^{T_j} h(x_i^j, u_i^j) \\ & = \sum_{j=0}^M \sum_{k=0}^{T_j} \lambda_{k|t}^{j,*} h(x_k^j, u_k^j) + \sum_{j=0}^M \sum_{k=0}^{T_j-1} \lambda_{k|t}^{j,*} {J}_{k+1}^j(x_{k+1}^j). \end{aligned} \end{equation} We notice that the summation of the cost-to-go in the above expression can be rewritten as \begin{equation}\label{eq:lyapunovPart2} \sum_{j=0}^M \sum_{k=0}^{T_j-1} \lambda_{k|t}^{j,*} {J}_{k+1}^j(x_{k+1}^j) = \sum_{j=0}^M \sum_{k=0}^{T_j} \bar \lambda_{k|t}^{j} {J}_k^j(x_k^j) \geq Q(x_{t+1}), \end{equation} where $\bar \lambda_{k|t}^{j}$ is the candidate solution defined in \eqref{eq:feasibleSol}.
Finally, from equations \eqref{eq:lyapunovPart1} and \eqref{eq:lyapunovPart2} we conclude that the optimal cost is a decreasing Lyapunov function along the closed-loop trajectory, \begin{equation}\label{eq:LyapProof2} \begin{aligned} Q(x_{t+1})-Q(x_{t}) \leq - & \sum_{j=0}^M \sum_{k=0}^{T_j} \lambda_{k|t}^{j,*} h(x_k^j, u_k^j) < 0, \\ &~~~~~~~~~~~~\forall~x_t \in \mathbb{R}^n \setminus \{0\}. \end{aligned} \end{equation} Equation (\ref{eq:LyapProof2}), the positive definiteness of $h(\cdot, \cdot)$ and the continuity of $Q(\cdot)$ imply that the origin of the closed-loop system~\eqref{eq:System} and \eqref{eq:policy} is asymptotically stable. \end{proof} \begin{prop} \textit{(Cost)} Consider the closed-loop system~\eqref{eq:System} and \eqref{eq:policy}. Let Assumptions~\ref{ass:feasibility}-\ref{ass:cost} hold and let $\mathcal{CS}$ be the convex safe set defined in \eqref{eq:CS}. If the initial state $x_0 \in \mathcal{CS}$, then the $Q$-function at $x_0$, $Q(x_0)$, upper bounds the cost associated with the trajectory of the closed-loop system~\eqref{eq:System} and \eqref{eq:policy}, \begin{equation}\label{eq:closedLoopCost} J\big(x_0\big) = \sum_{k=0}^{\infty} h(x_k, u_k) \leq Q\big(x_0\big) \end{equation} where $\{x_0 , \ldots, x_{t}, \ldots\}$ and $ \{u_0 , \ldots, u_{t}, \ldots\}$ are the closed-loop trajectory and associated input sequence, respectively. \end{prop} \begin{proof} From \eqref{eq:LyapProof2} and convexity of $h(\cdot, \cdot)$, we have that \begin{equation} Q(x_t) \geq h(x_t, u_t) + Q(x_{t+1}). \notag \end{equation} Applying the above inequality recursively, and exploiting asymptotic convergence to the origin, we have \begin{equation} \begin{aligned} Q(x_0) & \geq h(x_0, u_0) + Q(x_{1}) \\ & \geq \sum_{k=0}^{\infty} h(x_k, u_k) + \lim_{k \rightarrow \infty} Q(x_{k})= \sum_{k=0}^{\infty} h(x_k, u_k).
\notag \end{aligned} \end{equation} \end{proof} Note that, if the optimal closed-loop trajectory from $x_0=x_s$ is given, then the approximated value function $Q(x_s)$ equals the optimal cost-to-go from $x_s$. Consequently, \textit{Proposition~3} implies that the proposed data-based policy behaves optimally for $x_0=x_s$, if the optimal behavior from $x_0=x_s$ has been observed. \section{Examples} In this section we first test the data-based policy \eqref{eq:valueFuncEval} and \eqref{eq:policy} on a double integrator system. Afterwards, we test the local data-based policy \eqref{eq:localValueFunc} and \eqref{eq:localPolicy} on the Berkeley Autonomous Race Car (BARC) platform. \subsection{Example I: Double Integrator} Consider the following discrete-time Constrained Linear Quadratic Regulator (CLQR) problem \begin{equation}\label{eq:CLQR} \begin{aligned} J^*\big(x_0\big) =\min_{\bar u_0, \bar u_1,\ldots} & \quad \sum\limits_{k=0}^{\infty} \Big[ ||\bar x_k||_2^2 + ||\bar u_k||_2^2 \Big] \\ \textrm{s.t.}&\\ &\quad \bar x_{k+1}= \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \bar x_k + \begin{bmatrix} 0 \\ 1 \end{bmatrix} \bar u_k,~\forall k\geq 0 \\ &\quad \begin{bmatrix} -10 \\ -10 \end{bmatrix} \leq \bar x_k \leq \begin{bmatrix} 10 \\ 10 \end{bmatrix} ~ \forall k\geq 0 \\ &\quad -1 \leq \bar u_k \leq 1 ~~\forall k\geq 0, \\ &\quad \bar x_0=x_0=[-1, 3]^\top. \end{aligned} \end{equation} First, we construct the convex safe set using one solution to the above CLQR problem and we empirically validate \textit{Propositions}~1-3. Afterwards, we analyze the effect of the amount of data on the value function approximation and on the data-based policy \eqref{eq:valueFuncEval} and \eqref{eq:policy}.
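Before presenting the numerical results, the whole pipeline can be sketched in a few lines of code. The snippet below is illustrative only: it uses two small hand-crafted feasible trajectories of the double integrator above (not the trajectories from the experiments), builds the stored cost-to-go $J_k^j$, solves the LP \eqref{eq:valueFuncEval} with SciPy's \texttt{linprog}, and rolls out the policy \eqref{eq:policy}.

```python
import numpy as np
from scipy.optimize import linprog

# Double-integrator dynamics from the CLQR example.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])

def stage_cost(x, u):
    # h(x, u) = ||x||^2 + ||u||^2, as in the CLQR objective.
    return float(x @ x + u * u)

# Two hand-crafted feasible trajectories ending at the origin (terminal input 0):
# states are stored as columns, inputs as a 1-D array.
trajs = [
    (np.array([[1.0, 0.0], [-1.0, 0.0]]), np.array([1.0, 0.0])),
    (np.array([[2.0, 1.0, 1.0, 0.0], [-1.0, 0.0, -1.0, 0.0]]),
     np.array([1.0, -1.0, 1.0, 0.0])),
]

# Stack all stored states/inputs and the realized cost-to-go J_k^j.
X_cols, U_cols, J_cols = [], [], []
for X, U in trajs:
    h = [stage_cost(X[:, k], U[k]) for k in range(X.shape[1])]
    J_cols.append(np.flip(np.cumsum(np.flip(h))))  # tail sums = cost-to-go
    X_cols.append(X); U_cols.append(U)
X_all = np.hstack(X_cols)
U_all = np.concatenate(U_cols)
J_all = np.concatenate(J_cols)

def q_and_policy(x):
    """Solve the Q-function LP at x and return (Q(x), policy input u)."""
    K = X_all.shape[1]
    A_eq = np.vstack([np.ones((1, K)), X_all])
    b_eq = np.concatenate([[1.0], x])
    res = linprog(c=J_all, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * K, method="highs")
    assert res.status == 0, "x outside the convex safe set"
    return res.fun, float(res.x @ U_all)  # u = weighted sum of stored inputs

# Closed-loop rollout under the data-based policy.
x = np.array([1.5, -1.0])
Q0, _ = q_and_policy(x)
cost = 0.0
for _ in range(10):
    _, u = q_and_policy(x)
    cost += stage_cost(x, u)
    x = A @ x + B.flatten() * u
# By Propositions 1-3: the rollout stays feasible, reaches the origin,
# and its realized cost is bounded above by Q(x0).
```

With these toy trajectories the rollout reaches the origin in a few steps and the realized cost stays below $Q(x_0)$, mirroring the verification carried out next with the stored CLQR solution.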
\subsubsection{Properties verification} First, we compute and store the optimal solution to the CLQR problem \eqref{eq:CLQR}, \begin{equation}\label{eq:closedLoopLQR} \begin{aligned} [\bar x_0^*, \bar x_1^*, \ldots, \bar x_T^*] \\ [\bar u_0^*, \bar u_1^*, \ldots, \bar u_T^*] \end{aligned} \end{equation} where $T$ is the time index at which $||\bar x_T^*||_2^2 \leq \epsilon = 10^{-10}$. \begin{figure}[h!] \centering \includegraphics[width= \columnwidth]{closedLoopLQR_pdf} \caption{Closed-loop trajectories performed by the data-based policy.} \label{fig:closedLoopLQR} \end{figure} The stored optimal trajectory in \eqref{eq:closedLoopLQR} is used to build the convex safe set $\mathcal{CS}$ in \eqref{eq:CS} and the approximation to the value function $Q(\cdot)$ in \eqref{eq:valueFunc}. We tested the data-based policy for $x_0 = \bar x_0^*$ and for $10$ other randomly picked initial conditions inside $\mathcal{CS}$. We denote the resulting closed-loop trajectories and associated input sequences for $j \in \{0, \ldots, 9 \}$ as \begin{equation}\label{eq:givenStateAndInputResults} \begin{aligned} {\bf{x}}^j &= [x_0^j , \ldots, x_{T_j}^j]\\ {\bf{u}}^j &= [u_0^j , \ldots, u_{T_j}^j] \\ \end{aligned}. \end{equation} Figure~\ref{fig:closedLoopLQR} shows the closed-loop trajectories; we confirm that state and input constraints are satisfied, in accordance with \textit{Proposition~1}. Furthermore, we notice that the closed-loop trajectories converge to the origin, as expected from \textit{Proposition~2}. It is interesting to notice that for $x_0 = \bar x_0^*$ the closed-loop trajectory performed by the data-based policy overlaps with the optimal one. Moreover, we analyze the cost associated with the closed-loop trajectories~\eqref{eq:closedLoopCost}. Table~\ref{table:comparisonLQR} shows the realized cost~\eqref{eq:closedLoopCost} and the approximated value function $Q(\cdot)$ evaluated at different initial conditions.
We confirm that $Q\big(x_0\big)$ upper bounds the performance of the closed-loop trajectory, as shown in \textit{Proposition 3}. \begin{table}[h!] \centering\caption{Comparison of the realized cost and value function for different initial conditions}\label{table:comparisonLQR} \begin{tabular}{l|l|l}\toprule $~~~~~~~~~~x_0$ & $J\big(x_0\big)$ & $Q\big(x_0\big)$ \\ \midrule $[-1, 3]^\top$& $112.53$ & $112.53$ \\ $[2.9033, 1.2959]^\top$ & $78.60$ & $89.60$ \\ $[3.9495, 0.3921]^\top$& $62.00$ & $73.97$ \\ $[3.3673, 0.8315]^\top$& $66.45$ & $79.23$ \\ $[3.4349, 0.7243]^\top$& $62.96$ & $76.79$ \\ $[3.9253, 0.0874]^\top$& $50.37$ & $63.69$ \\ $[3.1189, 0.9013]^\top$& $63.11$ & $78.18$ \\ $[3.8963, 0.1645]^\top$& $52.12$ & $65.74$ \\ $[2.5449, 1.0898]^\top$& $58.04$ & $76.85$ \\ $[3.4751, 0.6212]^\top$& $59.22$ & $74.06$ \\ $[2.5770, 1.1763]^\top$& $63.34$ & $80.50$ \\ \bottomrule \end{tabular} \end{table} \subsubsection{The effect of data} Finally, we empirically analyze the effect of data on the $Q$-function and the data-based policy. First, we construct two approximations to the value function: $Q^{1}(\cdot)$ using \eqref{eq:closedLoopLQR} and the $10$ stored state and input trajectories computed in the previous subsection \eqref{eq:givenStateAndInputResults}, and $Q^{2}(\cdot)$ using \eqref{eq:closedLoopLQR} and the optimal solution to the CLQR problem for $\bar x_0 = [2.9033, 1.2959]^\top$. Afterwards, we run the data-based policy using $Q^1(\cdot)$ and $Q^2(\cdot)$. Table~II shows the cost associated with the closed-loop trajectories $J^i(\cdot)$ and the value function approximation $Q^i(\cdot)$, for $i \in \{1,2\}$. We notice that $Q^1(x_0)$ lower bounds $Q(x_0)$ from Table~I and, therefore, better approximates the value function. However, the realized cost $J^1(x_0)$ does not improve with respect to $J(x_0)$ from Table~I. On the other hand, we notice that the data-based policy constructed using $Q^2(\cdot)$ is able to improve the closed-loop performance $J^2(x_0)$.
It is interesting to notice that $Q^1(x_0)$ is constructed using one optimal trajectory and $10$ feasible trajectories, whereas $Q^2(x_0)$ is constructed using just two optimal trajectories. This result suggests that not all data points are equally valuable. \begin{table}[h!] \centering\caption{Comparison of the realized cost and value function for different initial conditions}\label{table:comparisonData} \begin{tabular}{l | ll | ll} \toprule $~~~~~~~~~~x_0$ & $J^{1}\big(x_0\big)$ & $Q^{1}\big(x_0\big)$ & $J^{2}\big(x_0\big)$ & $Q^{2}\big(x_0\big)$ \\ \midrule $[-1, 3]^\top$ & $112.53$ & $112.53$ & $112.53$ & $112.53$ \\ $[2.9033, 1.2959]^\top$& $78.60$ & $78.60$ & $72.89$ & $72.89$ \\ $[3.9495, 0.3921]^\top$& $62.00$ & $62.00$ & $59.43$ & $62.12$ \\ $[3.3673, 0.8315]^\top$& $66.45$ & $66.45$ & $61.86$ & $66.39$ \\ $[3.4349, 0.7243]^\top$& $62.96$ & $62.96$ & $58.97$ & $64.38$ \\ $[3.9253, 0.0874]^\top$& $50.37$ & $50.37$ & $49.24$ & $54.57$ \\ $[3.1189, 0.9013]^\top$& $63.11$ & $63.11$ & $58.76$ & $65.04$ \\ $[3.8963, 0.1645]^\top$& $52.12$ & $52.12$ & $50.73$ & $55.86$ \\ $[2.5449, 1.0898]^\top$& $58.04$ & $58.04$ & $53.85$ & $62.65$ \\ $[3.4751, 0.6212]^\top$& $59.22$ & $59.22$ & $55.81$ & $62.12$ \\ $[2.5770, 1.1763]^\top$& $63.34$ & $63.34$ & $58.63$ & $65.74$ \\ \bottomrule \end{tabular} \end{table} \subsection{Example II: Autonomous Racing}\label{sec:expResults} In this section, we test the proposed control strategy on a 1/10-scale open-source vehicle platform called the Berkeley Autonomous Race Car (BARC)\footnote{More information at the project site \href{http://www.barc-project.com/}{barc-project.com}}. The BARC is equipped with an inertial measurement unit, encoders, and an ultrasound-based indoor GPS system. The vehicle carries an Odroid XU4, which is used for collecting data and running the state estimator. Finally, the computations are performed on an MSI laptop with an Intel Core i7.
A video of the experiments can be found here: {\footnotesize{ \url{https://youtu.be/pB2pTedXLpI}}}. The control task is to drive the vehicle continuously around the track, minimizing the lap time while remaining within the track boundaries. The state vector is \begin{equation} x = [v_{x}, v_y, w_z, e_{\psi}, s, e_y]^\top \notag \end{equation} where $v_{x}, v_y$ and $w_z$ represent the vehicle's longitudinal, lateral and angular velocities in the body-fixed frame. The position of the system is measured with respect to the curvilinear reference frame \cite{micaelli}, where $s$ represents the progress of the vehicle along the centerline of the track, and $e_{\psi}$ and $e_y$ represent the heading angle and lateral distance errors between the vehicle and the path. It is important to underline that, given the lane boundaries $e_{y_{min}}$ and $e_{y_{max}}$, the feasible region $\mathcal{X} = \{ x \in \mathbb{R}^n : e_{y_{min}} \leq e_6^\top x \leq e_{y_{max}} \}$ for $e_6=[0,0,0,0,0,1]^\top$ is a convex set. The control input vector is $u=[\delta, a]$ where $\delta$ and $a$ are the steering angle and acceleration, respectively. The input constraints are \begin{equation} \begin{aligned} -0.25 [\text{rad}] \leq &\delta \leq 0.25 [\text{rad}]\\ -0.7 [\text{m/s}^2] \leq& a \leq 2 [\text{m/s}^2]. \notag \end{aligned} \end{equation} Finally, we underline that the autonomous racing problem is a repetitive task whose goal is not to steer the system to the origin. Therefore, we use the method from~\cite{cdcLMPC} to apply the proposed strategy to the autonomous racing repetitive control problem. In particular, we define the set of states beyond the finish line of the track of length $L$, $\mathcal{X}_F = \{ x \in \mathbb{R}^6 : e_5^\top x \geq L \}$, and we use the set $\mathcal{X}_F$ to compute the cost associated with the stored trajectories \begin{equation*} h(x,u) = \begin{cases} 1 & \mbox{if } x \notin \mathcal{X}_F \\ 0 & \mbox{if } x\in \mathcal{X}_F \end{cases}.
\end{equation*} \begin{figure}[h!] \centering \includegraphics[width= \columnwidth]{OvalClosedLoop_pdf} \caption{The red squares show the closed-loop trajectories performed by the data-based policy on the oval-shaped track. The blue circles report three trajectories in the sampled safe set. Finally, the green dashed line marks the centerline of the track.} \label{fig:closedLoopOval} \end{figure} For the first $29$ laps of the experiment, we run the Learning Model Predictive Controller (LMPC) from \cite{cdcLMPC} to learn a fast trajectory which drives the vehicle around the track. From the $30$th lap, we run the local data-based policy \eqref{eq:localValueFunc} and \eqref{eq:localPolicy} using the latest $M = 8$ laps and $N = 10$ stored data points for each lap. Therefore, the control action is computed upon solving the small optimization problem \eqref{eq:localValueFunc} where $[\lambda_{0|t}^0, \ldots,\lambda_{k|t}^j,\ldots, \lambda_{T_M|t}^M] \in \mathbb{R}^{M|\mathcal{K}^{j}(x)|}$ with $M|\mathcal{K}^{j}(x)| = 80$. \begin{figure}[h!] \centering \includegraphics[width= \columnwidth]{L_shapeClosedLoop_pdf} \caption{The red squares show the closed-loop trajectories performed by the data-based policy on the L-shaped track. The blue circles report three trajectories in the sampled safe set. Finally, the green dashed line marks the centerline of the track.} \label{fig:closedLoopLshape} \end{figure} \begin{figure}[h!] \centering \includegraphics[width= \columnwidth]{OvalStatesInputs_pdf} \caption{Closed-loop trajectory and associated inputs of the data-based policy and the LMPC on the oval-shaped track.} \label{fig:stateInputOval} \end{figure} We tested the controller on an oval-shaped and an L-shaped track. Figures~\ref{fig:closedLoopOval}-\ref{fig:stateInputLshape} show that the local data-based policy \eqref{eq:localValueFunc} and \eqref{eq:localPolicy} is able to drive the vehicle around the track while satisfying input and state constraints.
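The local policy used here admits a compact implementation. The sketch below is illustrative (a single stored trajectory, toy double-integrator data, and assumed helper names): it selects the $N$ stored points nearest to the current state and solves the reduced LP, as in \eqref{eq:localValueFunc} and \eqref{eq:localPolicy}.

```python
import numpy as np
from scipy.optimize import linprog

def local_policy(x, X, U, J, N=3):
    """Local data-based policy for one stored trajectory: restrict the
    Q-function LP to the N stored points nearest to x, then return the
    multiplier-weighted stored input."""
    # Indices of the N nearest stored states (columns of X).
    idx = np.argsort(np.linalg.norm(X - x[:, None], axis=0))[:N]
    X_loc, U_loc, J_loc = X[:, idx], U[idx], J[idx]
    K = X_loc.shape[1]
    res = linprog(c=J_loc,
                  A_eq=np.vstack([np.ones((1, K)), X_loc]),
                  b_eq=np.concatenate([[1.0], x]),
                  bounds=[(0, None)] * K, method="highs")
    if res.status != 0:
        return None  # x lies outside the hull of the selected local points
    return float(res.x @ U_loc)

# Stored double-integrator trajectory: states (columns), inputs, cost-to-go.
X = np.array([[2.0, 1.0, 1.0, 0.0],
              [-1.0, 0.0, -1.0, 0.0]])
U = np.array([1.0, -1.0, 1.0, 0.0])
J = np.array([11.0, 5.0, 3.0, 0.0])
u = local_policy(np.array([1.5, -1.0]), X, U, J)
```

With multiple stored laps, one such nearest-point block is selected per trajectory and the blocks are stacked into a single LP; in the experiments this yields the $M|\mathcal{K}^{j}(x)| = 80$ decision variables mentioned above.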
Furthermore, we notice that the closed-loop trajectories generated with the local data-based policy lie in the convex hull of the sampled safe set $\mathcal{SS}$, which is constructed from the last $8$ trajectories performed by the LMPC. It is interesting to notice that the real system is nonlinear but smooth and, for this reason, the system dynamics can be locally linearized. Intuitively, the existence of a local linear model allows us to use the local data-based policy to safely drive the vehicle. Indeed, at each time $t$ the controller uses only the historical data close to the system's state $x_t$. \begin{figure}[h!] \centering \includegraphics[width= \columnwidth]{L_shapeStatesInputs_pdf} \caption{Closed-loop trajectory and associated inputs of the data-based policy and the LMPC on the L-shaped track.} \label{fig:stateInputLshape} \end{figure} Figures~\ref{fig:lapTimeOval}-\ref{fig:lapTimeLshape} report the lap time as a function of the lap number. We notice that the data-based policy is able to safely drive the vehicle around the track, without degrading the closed-loop performance. In particular, the data-based policy is able to replicate the best lap times achieved by the LMPC controller on both tracks. Finally, we analyze the computational time, comparing the computational cost of the proposed data-based policy with that of the LMPC. Table~\ref{table:computationalTime} shows that on average it took $\sim1.3$ms to evaluate the proposed data-based policy and $\sim29.5$ms to evaluate the LMPC policy. \begin{table}[h!] \centering\caption{Comparison of computational time}\label{table:computationalTime} \begin{tabular}{lrrrr} \toprule $ ~$ & \text{Average} & \text{Min} & \text{Max} & Std Deviation\\ \midrule $\text{LMPC}$ & $29.5$ms & $21.8$ms & $50.0$ms & $6.1$ms \\ $\text{Data-Based Policy}$ & $1.3$ms & $1.1$ms & $2.3$ms & $0.2$ms \\ \bottomrule \end{tabular} \end{table} \begin{figure}[h!]
\centering \includegraphics[width= \columnwidth]{OvalLapTime_pdf} \caption{Lap time on the oval-shaped track as a function of the lap number. From the $30$th lap, the data-based policy drives the vehicle around the track without degrading the closed-loop performance.}\label{fig:lapTimeOval} \end{figure} \begin{figure}[h!] \centering \includegraphics[width= \columnwidth]{L_shapeLapTime_pdf} \caption{Lap time on the L-shaped track as a function of the lap number. From the $30$th lap, the data-based policy drives the vehicle around the track without degrading the closed-loop performance.}\label{fig:lapTimeLshape} \end{figure} \section{Conclusions}\label{sec:conclusions} In this work we have proposed a simple strategy to construct a data-based policy. First, we used historical data to construct a global and a local $Q$-function, which approximate the value function. Afterwards, we presented a data-based policy that evaluates the $Q$-function and computes the control action from the stored input sequences. We showed that the proposed strategy guarantees safety, stability and performance bounds. Finally, we tested the proposed data-based policy on an autonomous racing example, showing that the proposed strategy matches the performance of our ILC controller while being $30$x faster at computing the control input. \section{Acknowledgment} Some of the research described in this paper was funded by the Hyundai Center of Excellence at the University of California, Berkeley. This work was also sponsored by the Office of Naval Research. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Office of Naval Research or the US government. \section{Appendix} In this Appendix, we show that the proposed data-based policy may be used to steer a linear time-invariant system to a terminal invariant set $\mathcal{X}_F$.
In order to prove that the properties from Propositions~1-3 hold also in this setting, the following assumptions are needed. \begin{assumption}\label{assAPP:vertices} The terminal set $\mathcal{X}_F$ is defined as the convex hull of the terminal states of the stored trajectories~\eqref{eq:givenClosedLoop}, i.e. $\mathcal{X}_F = \text{Conv}\big( \cup_{j=0}^M x_{T_j}^j \big)$. \end{assumption} \begin{assumption}\label{assAPP:feasibility} All $M+1$ input and state sequences in \eqref{eq:givenIinputs}-\eqref{eq:givenClosedLoop} are feasible and known. Furthermore, we assume that each state sequence in~\eqref{eq:givenClosedLoop} converges to the terminal set~$\mathcal{X}_F$ and that the terminal input $u_{T_j}^j$ keeps the evolution of system~\eqref{eq:System} within $\mathcal{X}_F$. More formally, we assume that $x_{T_j}^j \in \mathcal{X}_F$ and $A x_{T_j}^j + B u_{T_j}^j \in \mathcal{X}_F, ~\forall j \in \{0,\ldots,M\}$. \end{assumption} \begin{prop} \textit{(Feasibility)} Consider the closed-loop system~\eqref{eq:System} and \eqref{eq:policy}. Let Assumptions~\ref{assAPP:vertices}-\ref{assAPP:feasibility} hold and let $\mathcal{CS}$ be the convex safe set defined in \eqref{eq:CS}. If the initial state $x_0 \in \mathcal{CS}$, then the data-based policy \eqref{eq:valueFuncEval} and \eqref{eq:policy} is feasible for all times $t\geq0$. \end{prop} \begin{proof} The proof follows from linearity of the system. \\ We assume that at time $t$ the system state $x_t \in \mathcal{CS}$; therefore the optimization problem \eqref{eq:valueFuncEval} is feasible. Let \eqref{eq:optimalSol} be the optimal solution to \eqref{eq:valueFuncEval}; then at the next time step $t+1$ we have \begin{equation*} \begin{aligned} x_{t+1} &= A x_t + B \sum_{j=0}^M\sum_{k=0}^{T_j} \lambda^{j,*}_{k|t} u^j_k \\ & = A \sum_{j=0}^M\sum_{k=0}^{T_j} \lambda^{j,*}_{k|t} x^j_k + B \sum_{j=0}^M\sum_{k=0}^{T_j} \lambda^{j,*}_{k|t} u^j_k \\ & = \sum_{j=0}^M\sum_{k=0}^{T_j} \lambda^{j,*}_{k|t} (A x^j_k + B u_k^j) \in \mathcal{CS}.
\end{aligned} \end{equation*} By Assumption~\ref{assAPP:feasibility} we have that for all $j\in \{0, \ldots, M\}$ there exist $\lambda^j_k \geq 0$ such that $\sum_{k=0}^M \lambda_k^j=1$ and \begin{equation*} \begin{aligned} \sum_{j=0}^M \lambda^{j,*}_{T_j|t} (A x^j_{T_j} + B u_{T_j}^j)&= \sum_{j=0}^M \lambda^{j,*}_{T_j|t} \sum_{k=0}^M \lambda_k^j x_{T_k}^k \\ &= \sum_{k=0}^M \sum_{j=0}^M \lambda^{j,*}_{T_j|t} \lambda_k^j x_{T_k}^k = \sum_{k=0}^M \tilde \lambda_k x_{T_k}^k \end{aligned} \end{equation*} where for all $k\in \{0, \ldots, M\}$ we defined $\tilde \lambda_k = \sum_{i=0}^M \lambda^{i,*}_{T_i|t} \lambda_k^i$. It follows that \begin{equation*} x_{t+1} = \sum_{j=0}^M\sum_{k=0}^{T_j} \lambda^{j,*}_{k|t} (A x^j_k + B u_k^j) = \sum_{j=0}^M\sum_{k=0}^{T_j} \bar \lambda^{j}_k x^j_k \end{equation*} where $\forall j\in \{0, \ldots, M\}$ \begin{equation} \begin{aligned}\label{eq:APPfeasibleSol} &\bar \lambda_0^j = 0, \\ &\bar \lambda_{k_j}^j = \lambda_{k_j-1|t}^{j,*}, \quad \quad \quad \quad \quad \forall k_j \in \{ 1, \ldots, T_j-1 \} \\ &\bar \lambda_{T_j}^j = \lambda_{T_j-1|t}^{j,*} + \tilde \lambda_j \end{aligned} \end{equation} is a feasible solution to the optimization problem \eqref{eq:valueFuncEval} at time $t+1$.\\ By assumption, at time $t=0$ the state $x_0 \in \mathcal{CS}$. Furthermore, we have shown that if at time $t$ the state $x_t \in \mathcal{CS}$, then at time $t+1$ the state $x_{t+1} \in \mathcal{CS}$ and the optimization problem \eqref{eq:valueFuncEval} is feasible. Therefore, by induction we conclude that $x_t \in \mathcal{CS} \subseteq \mathcal{X}, ~\forall t \in \mathbb{Z}_{0+}$ and that the optimization problem \eqref{eq:valueFuncEval} is feasible $\forall t \in \mathbb{Z}_{0+}$. \end{proof} In order to prove convergence, we make the following assumption on the stage cost.
\begin{assumption}\label{assAPP:cost} The stage cost $h(\cdot, \cdot)$ is a continuous convex function and $\forall u \in \mathcal{U}$ it satisfies \begin{equation} h(x,u) = 0, ~\forall x \in \mathcal{X}_F, \quad \textrm{and} \quad h(x,u) > 0, ~\forall x \in {\mathbb R}^n \setminus \mathcal{X}_F. \notag \end{equation} \end{assumption} \begin{prop} \textit{(Convergence)} Consider the closed-loop system~\eqref{eq:System} and \eqref{eq:policy}. Let Assumptions~\ref{assAPP:vertices}-\ref{assAPP:cost} hold and let $\mathcal{CS}$ be the convex safe set defined in \eqref{eq:CS}. If the initial state $x_0 \in \mathcal{CS}$, then the origin of the closed-loop system \eqref{eq:System} and \eqref{eq:policy} is asymptotically stable. \end{prop} \begin{proof} The proof follows from the proof of Proposition~2. In particular, the candidate solution~\eqref{eq:APPfeasibleSol} may be exploited to show that $Q(\cdot)$ is a Lyapunov function along the closed-loop trajectory. \end{proof} \begin{prop} \textit{(Cost)} Consider the closed-loop system~\eqref{eq:System} and \eqref{eq:policy}. Let Assumptions~\ref{assAPP:vertices}-\ref{assAPP:cost} hold and let $\mathcal{CS}$ be the convex safe set defined in \eqref{eq:CS}. If the initial state $x_0 \in \mathcal{CS}$, then the $Q$-function at $x_0$, $Q(x_0)$, upper bounds the cost associated with the trajectory of the closed-loop system~\eqref{eq:System} and \eqref{eq:policy}, \begin{equation*} J\big(x_0\big) = \sum_{k=0}^{\infty} h(x_k, u_k) \leq Q\big(x_0\big) \end{equation*} where $\{x_0 , \ldots, x_{t}, \ldots\}$ and $ \{u_0 , \ldots, u_{t}, \ldots\}$ are the closed-loop trajectory and the associated input sequence, respectively. \end{prop} \begin{proof} The proof follows as in Proposition~3. \end{proof} \bibliographystyle{IEEEtran}
\section{Conclusion and Discussion} Understanding the mechanisms of the human brain and adapting them to different disciplines has long been of interest, for instance in deep learning~\cite{hinton2007learning,hinton2006fast}. Ballard defined deictic computation~\cite{ballard1997deictic}, which enables computational methods to relate body and real-world movements to cognitive tasks. Roger B. Nelsen's ``Proofs without Words''~\cite{mackenzie1993proofs} and Martin Gardner's ``aha! Solutions''~\cite{gardner1978aha} encouraged the present research to focus on a cognitive-oriented approach to determining the linear mapping transformation between two shapes. The proposed method iterates over different abstractions of the image frames, from the most abstracted to the most detailed, while tuning the top transformations of the previous iteration to obtain the best mapping linear transformations. The proposed method is implemented and tested over a variety of inputs, and is assessed for its accuracy in determining linear mapping transformations. The experiments showed that the output of the method is reliable even under challenging conditions such as deformed and noisy image frames. Additionally, the size of the abstraction matrix ($\Gamma$) is independent of the size of the input image frames; accordingly, the computational cost of the proposed method is independent of the resolution of the input image frames. \section{Introduction} \IEEEPARstart{V}{ision} is studied in orthogonal disciplines spanning from neurophysiology and psychophysics to computer science, all with a uniform objective: to understand the vision system and develop an integrated theory of vision. In general, vision, or visual perception, is the ability to acquire information from the environment and to interpret it. According to Gestalt theory, visual elements are perceived as patterns of wholes rather than as the sum of constituent parts~\cite{koffka2013principles}.
Through its \textit{emergence}, \textit{invariance}, \textit{multistability}, and \textit{reification} properties (aka the Gestalt principles), Gestalt theory describes how vision recognizes an object as a \textit{whole} from its constituent parts. There is increasing interest in modeling the cognitive aptitude of visual perception; however, the process is challenging. In the following, one example challenge each for object perception and motion perception is discussed. \subsection{Why do things look as they do?} In addition to the Gestalt principles, an object is characterized by its spatial parameters and material properties. Despite the novel approaches proposed for material recognition (e.g.,~\cite{sharan2013recognizing}), objects tend to get the attention. Owing to an object's spatial properties, material, illumination, and background, the mapping from real-world 3D patterns (distal stimulus) to 2D patterns on the retina (proximal stimulus) is a many-to-one, non-uniquely-invertible mapping~\cite{dicarlo2007untangling,horn1986robot}. There have been novel biology-driven studies constructing computational models that emulate the anatomy and physiology of the brain for real-world object recognition (e.g.,~\cite{lowe2004distinctive,serre2007robust,zhang2006svm}), and some studies led to impressive accuracy. For instance, when tested on gold-standard controlled shape sets such as Caltech101 and Caltech256, some methods yielded fewer than 60\% true positives~\cite{zhang2006svm,lazebnik2006beyond,mutch2006multiclass,wang2006using}. However, Pinto et al.~\cite{pinto2008real} raised a caution against the pervasiveness of such shape sets by highlighting the unsystematic variations in object features such as spatial aspects, both between and within object categories.
For instance, using a V1-like model (a neuroscientist's null model) with two categories of systematically varied objects, a rapid degradation of performance to 50\% (chance level) is observed~\cite{zhang2006svm}. This observation accentuates the challenges that the infinite number of 2D shapes cast on the retina by 3D objects introduces to object recognition. Material recognition of an object requires in-depth features to be determined. A mineralogist may describe the luster (i.e., the optical quality of the surface) with a vocabulary like greasy, pearly, vitreous, resinous, or submetallic; he may describe rocks and minerals by their typical forms, such as acicular, dendritic, porous, nodular, or oolitic. We perceive materials from an early age even though many of us lack a visual vocabulary as rich and formalized as the mineralogists'~\cite{adelson2001seeing}. However, methodizing material perception can be far from trivial. For instance, consider a chrome sphere with every pixel having a correspondence in the environment; the material of the sphere is hidden and must be inferred implicitly~\cite{shafer2000color,adelson2001seeing}. Therefore, considering object material, object recognition requires surface reflectance, the various light sources, and the observer's point of view to be taken into consideration. \subsection{What went where?} Motion is an important aspect of interpreting interactions with subjects, making the visual perception of movement a critical cognitive ability that helps us with complex tasks such as discriminating moving objects from the background, or depth perception by motion parallax. Cognitive susceptibility enables the inference of 2D/3D motion from a sequence of 2D shapes (e.g., movies~\cite{niyogi1994analyzing,little1998recognizing,hayfron2003automatic}), or from a single image frame (e.g., the pose of a running athlete~\cite{wang2013learning,ramanan2006learning}).
However, it is challenging to model this susceptibility because of the many-to-one relation between distal and proximal stimuli, which makes local measurements of the proximal stimulus inadequate for reasoning about the proper global interpretation. One of the various challenges is the \textit{motion correspondence problem}~\cite{attneave1974apparent,ullman1979interpretation,ramachandran1986perception,dawson1991and}, which refers to recognizing an individual component of the proximal stimulus in frame-1 and another component in frame-2 as constituting different glimpses of the same moving component. If a one-to-one mapping is intended, $n!$ correspondence matches between the $n$ components of two frames exist, which grows to $2^n$ for one-to-any mappings. To address the challenge, Ullman~\cite{ullman1979interpretation} proposed a method based on the nearest neighbor principle, and Dawson~\cite{dawson1991and} introduced an auto-associative network model. Dawson's network model~\cite{dawson1991and} iteratively modifies the activation pattern of local measurements to achieve a stable global interpretation. In general, his model applies three constraints, as follows: \begin{inlinelist} \item the \textit{nearest neighbor principle} (shorter motion correspondence matches are assigned lower costs) \item the \textit{relative velocity principle} (differences between two motion correspondence matches) \item the \textit{element integrity principle} (physical coherence of surfaces) \end{inlinelist}. According to experimental evaluations (e.g.,~\cite{ullman1979interpretation,ramachandran1986perception,cutting1982minimum}), these three constraints reflect how the human visual system solves the motion correspondence problem. Eom et al.~\cite{eom2012heuristic} tackled the motion correspondence problem by considering the relative velocity and element integrity principles. They studied one-to-any mappings between the elements of corresponding fuzzy clusters of two consecutive frames.
They obtained a ranked list of all possible mappings by performing a state-space search. \subsection{How is a stimulus recognized in the environment?} Human subjects are often able to recognize a 3D object from its 2D projections in different orientations~\cite{bartoshuk1960mental}. A common hypothesis for this \textit{spatial ability} is that an object is represented in memory in its canonical orientation; a \textit{mental rotation} transformation is applied to the input image, and the transformed image is compared with the object in its canonical orientation~\cite{bartoshuk1960mental}. The time to determine whether two projections portray the same 3D object \begin{inlinelist} \item increases linearly with the angular disparity~\cite{bartoshuk1960mental,cooperau1973time,cooper1976demonstration} \item is independent of the complexity of the 3D object~\cite{cooper1973chronometric} \end{inlinelist}. Shepard and Metzler~\cite{shepard1971mental} interpreted this finding as follows: \textit{human subjects mentally rotate one portrayal at a constant speed until it is aligned with the other portrayal.} \subsection{State of the Art} Determining the linear mapping transformation between two objects generalizes to determining the optimal linear transformation matrix for a set of observed vectors, first posed by Grace Wahba in 1965~\cite{wahba1965least} as follows. \textit{Given two sets of $n$ points $\{v_1, v_2, \dots v_n\}$, and $\{v_1^*, v_2^* \dots v_n^*\}$, where $n \geq 2$, find the rotation matrix $M$ (i.e., the orthogonal matrix with determinant +1) which brings the first set into the best least squares coincidence with the second. That is, find the matrix $M$ which minimizes} \begin{equation} \sum_{j=1}^{n} \vert v_j^* - Mv_j \vert^2 \end{equation} Multiple solutions to \textit{Wahba's problem} have been published, such as Paul Davenport's q-method.
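A compact way to see how such solutions work is the SVD construction (a sketch of the SVD family of solutions, not of Davenport's q-method itself; the toy vectors and the known rotation below are illustrative assumptions):

```python
import numpy as np

def wahba_svd(v, v_star):
    """Rotation M (det +1) minimizing sum_j ||v*_j - M v_j||^2.

    v, v_star: arrays of shape (n, 3) holding the two vector sets
    (unit weights assumed). Equivalent to maximizing tr(M B^T) with
    the attitude profile matrix B = sum_j v*_j v_j^T.
    """
    Bmat = v_star.T @ v                       # B = sum_j v*_j v_j^T
    U, _, Vt = np.linalg.svd(Bmat)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    # Force det(M) = +1 (a proper rotation, not a reflection).
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# Sanity check: recover a known rotation from noise-free observations.
theta = 0.7
M_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
v = np.random.default_rng(1).standard_normal((5, 3))
v_star = v @ M_true.T                         # each row: v*_j = M_true v_j
M_est = wahba_svd(v, v_star)
print(np.allclose(M_est, M_true))
```

With noisy observations the same construction returns the least-squares optimal rotation rather than an exact recovery; the determinant correction is what distinguishes it from an unconstrained orthogonal Procrustes solution.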
Notable algorithms published after Davenport's q-method include the QUaternion ESTimator (QUEST)~\cite{shuster2012three}, the Fast Optimal Attitude Matrix (FOAM)~\cite{markley1993attitude}, the Slower Optimal Matrix Algorithm (SOMA)~\cite{markley1993attitude}, and singular value decomposition (SVD) based algorithms, such as Markley's SVD method~\cite{markley1988attitude}. In statistical shape analysis, the determination of a linear mapping transformation is studied as the Procrustes problem. Procrustes analysis finds a transformation matrix that maps two input shapes as closely as possible onto each other. Solutions to the Procrustes problem are reviewed in~\cite{gower2004procrustes,viklands2006algorithms}. For the orthogonal Procrustes problem, Wolfgang Kabsch proposed an SVD-based method~\cite{kabsch1976solution} that minimizes the root mean squared deviation of the two input sets subject to the determinant of the rotation matrix being $1$. In addition to Kabsch's partial Procrustes superimposition (covering translation and rotation), full Procrustes superimpositions (covering translation, uniform scaling, and rotation/reflection) have been proposed~\cite{gower2004procrustes,viklands2006algorithms}. The determination of the optimal linear mapping transformation matrix using different approaches to Procrustes analysis has a wide range of applications, spanning from forging human hand mimics in anthropomorphic robotic hands~\cite{xu2012design}, to the assessment of two-dimensional perimeter spread models such as fire~\cite{duff2012procrustes}, and the analysis of MRI scans in brain morphology studies~\cite{martin2013correlation}. \subsection{Our Contribution} The present study methodizes the aforementioned cognitive susceptibilities into a cognitive-driven linear mapping transformation determination algorithm.
The method leverages the mental rotation cognitive stages~\cite{johnson1990speed}, which are defined as follows: \begin{inlinelist} \item a mental image of the object is created \item the object is mentally rotated until a comparison can be made \item the objects are assessed as to whether they are the same \item the decision is reported \end{inlinelist}. Accordingly, the proposed method creates hierarchical abstractions of shapes~\cite{greene2009briefest} with increasing levels of detail~\cite{konkle2010scene}. The abstractions are represented in a vector space. A graph of linear transformations is created by circular-shift permutations (i.e., rotation superimposition) of the vectors. The graph is then hierarchically traversed to determine the closest mapping linear transformation. Despite the numerous algorithms for calculating linear mapping transformations, such as those proposed for Procrustes analysis, the novelty of the presented method is its cognitive-driven approach. This method augments promising discoveries on motion/object perception into a linear mapping transformation determination algorithm. \section{Method} \paragraph*{Basic manipulations vs. complex calculations} An infant has an intuitive understanding of numbers and shapes, and can distinguish the numerical and identity invariance of objects~\cite{izard2008distinct} regardless of object domain~\cite{wynn2002enumeration}; an ability that surprisingly extends to non-object entities (e.g., actions~\cite{sharon1998individuation}). This lets us argue that an infant has a basic understanding of transformations through the primary perception of numbers and shapes. This intuitive ability is based on an early development of the \textit{approximate number system}, and it encourages the present study to concentrate on basic operations and visual properties of shapes for the linear mapping transformation determination task. \paragraph*{Abstract vs.
detailed representations} Of the entire environment within our visual range, only the information essential for the action in progress is prominent; the rest of the details are ignored~\cite{intraub1997representation} (aka cognitive inhibition~\cite{macleod2007concept}). For instance, while crossing a street, only information about the direction and speed of the cars on the street is required; details such as the plate numbers of the cars or the clothes the drivers wore are generally not consciously registered in visual perception. This highlights the significant role that abstraction plays in reducing the amount of information to be considered. Additionally, Ballard~\cite{ballard1997deictic} and Agre~\cite{agre1987pengi} further explained this ability as deictic strategies, where the eye fixation point is used to guide body movement (modeling behavior at the embodiment level) while the fixation point can rapidly change to a different location~\cite{ballard1991animate}. In the following, we discuss how the proposed method abstracts images and determines linear mapping transformations between them. \subsection{Shape representation} \label{section: Shapre Representation} The overall procedure of the presented method is independent of the color model of the input shape (i.e., RGB, Cyan Magenta Yellow Key (CMYK), Hue Saturation Value (HSV), B\&W, binary, etc.). The present study manipulates the binary representation of shapes; the extension to other color models is straightforward and requires modifying the segment aggregation function (discussed in Section~\ref{section: Shape Segmentation}). The motivations for binarizing shapes are threefold. First, simple aggregation functions such as count can be applied to binary shapes. This improves the readability of the presented method and avoids the various color-model-based aggregation functions, which are beyond the scope of this manuscript.
Second, real-world objects incorporate spatial parameters and materials, introducing distal-to-proximal stimulus mapping challenges and the motion correspondence problem. The objective of the present study is to methodize principal cognitive susceptibilities for transformation determination, and the fact that binary shapes are not as sensitive to the aforementioned challenges as colorful shapes makes the binary model suitable for the present study. The binary color model is a common model among motion correspondence problem studies (e.g.,~\cite{girod2013principles,hirschmuller2009evaluation}). Third, \textit{distance transform} and topological skeleton extraction from binary shapes are straightforward, as opposed to colorful shapes. These transformations are applicable alternatives to the segment aggregation functions discussed in Section~\ref{section: Shape Segmentation}. Despite the promising methods that exist in the literature (e.g., topological volume skeletonization~\cite{takahashi2004topological}, or the various distance transform algorithms~\cite{fabbri20082d}), to the best of our knowledge, none of the proposed methods comprehensively consider fully explanatory real-world object characteristics such as material, illumination, and surface reflectance. For instance, the pattern in a chromium-plated sphere in an image frame in fact reflects the surrounding environment, and the sphere itself is determined implicitly~\cite{adelson2001seeing}. Binary shapes, however, mask such properties, ensuring the least ambiguity. The present study manipulates binary shapes. In this regard, a colored shape is first converted to its corresponding gray-scale B\&W frame. The procedure estimates the luminance for every pixel $x$, $y$ of an image frame in the RGB color model (e.g., panel A in Fig.~\ref{Figure: ShapeRepresentation}) as $L_{xy} = 0.2126R + 0.7152G + 0.0722B$ (see panel B in Fig.~\ref{Figure: ShapeRepresentation}).
The resulting B\&W image frame is then binarized by normalizing $L_{xy}$ as $B_{xy} = \lfloor L_{xy} / 128 \rfloor$ for the binary pixel $B_{xy}$ (see panel C in Fig.~\ref{Figure: ShapeRepresentation}). Note that $L_{xy} \in \{0, 1, \dots 255\}$; therefore, $B_{xy} \in \{0, 1\}$. \begin{figure}[!t] \centering \includegraphics[width=0.95\columnwidth]{Figures/ShapeRepresentation.pdf} \caption { \textbf{A:} input shape in the RGB color model; \textbf{B:} intermediate gray-scale representation of the input; \textbf{C:} binary representation of the input. The proposed method manipulates the binary representation. } \label{Figure: ShapeRepresentation} \end{figure} \subsection{Shape segmentation} \label{section: Shape Segmentation} Shape segmentation is a well-studied subject in the field of image processing (e.g.,~\cite{pal1993review}). Segmentation commonly precedes shape semantic analysis, highlighting the necessity of adapting the segmentation method to the objectives of the study. Accordingly, the segmentation procedure is defined as follows; it emphasizes the relative locations of pixels to facilitate linear mapping transformation. In the present study, shapes are considered in two-dimensional (2D) Euclidean space. An image frame is segmented into $N$ \textit{sectors} and $M$ \textit{segments} (see Fig.~\ref{Figure: ShapeSegmentation}). Sectors are divisions of the image frame in the \textit{angle} ($\varphi$) direction of the polar coordinate system and are denoted by the Euclidean unit vector $\vec{V}_n$ for $n\in\{1,2, \dots N\}$. Segments are isometric divisions of sectors in the \textit{radius} ($r$) direction of the polar coordinate system, denoted by the Euclidean unit vector $\vec{V}_{nm}$ for $m \in \{1,2, \dots M\}$. Accordingly, all sectors have an equal number of segments, and segments are the smallest segmentation units. Let $I$ denote the \textit{segmentation matrix}, defined as follows.
\[ \mathbf{I} = \bordermatrix{ & \text{Segment} \; 1 & \dots & \text{Segment} \; M \cr \text{Sector} \; 1 & \vec{V}_{11} & \dots & \vec{V}_{1M} \cr \dots & \vdots & \ddots & \vdots \cr \text{Sector} \; N & \vec{V}_{N1} & \dots & \vec{V}_{NM}} \qquad \] \noindent Each element $\vec{V}_{nm}$ is a tuple $\langle x, y, \gamma \rangle$, where $\gamma$ is an aggregated value of the portion of the image frame represented by $\vec{V}_{nm}$. Note that the dimension of the segmentation matrix is independent of the resolution of the input image frame. The area represented by a segment $\vec{V}_{nm}$ is characterized by the two boundaries environing it, defined in the polar coordinate system (with the radius normalized to the unit disk) as follows. \begin{align} r \in & \left] \frac{m-1}{M} , \frac{m}{M} \right] \\ \varphi \in & \left] \frac{360 (n-1)}{N} , \frac{360n}{N} \right] \end{align} A pixel at polar coordinates $r'$, $\varphi'$ is a member of segment $\vec{V}_{nm}$ if and only if the coordinates of the pixel fall within the boundaries of the segment. Accordingly, the membership of a pixel at Cartesian coordinates $x$, $y$ in segment $\vec{V}_{nm}$ depends on the following conditions. \begin{align} \tan^{-1}\left(\frac{y}{x}\right) \in & \left] \frac{360(n-1)}{N}, \frac{360n}{N} \right] \\ \sqrt{x^2 + y^2} \in & \left] \frac{m-1}{M}, \frac{m}{M} \right] \end{align} \begin{figure}[!t] \centering \includegraphics[width=0.7\columnwidth]{Figures/ShapeSegmentation.pdf} \caption { This shape illustrates segmentation with $N=4$ sectors and $M=3$ segments. A segment represents an area of the frame. For instance, $\vec{V}_{1,1}$, $\vec{V}_{1,2}$, and $\vec{V}_{1,3}$ respectively represent the areas shaded in yellow, blue, and cyan. Each area is composed of a set of pixels aggregated in $\gamma_{nm}$. For instance, $\gamma_{1,1}$, $\gamma_{1,2}$, and $\gamma_{1,3}$ denote the aggregation of the pixels located in the yellow, green, and blue shaded areas, respectively.
} \label{Figure: ShapeSegmentation} \end{figure} \subsection{Shape abstraction} A shape is abstracted by aggregating the pixels in the area of each segment. The binary representation of a shape enables the use of the simple \textit{count} aggregate function, i.e., the number of pixels with value $1$ represented by the segment. Let $\gamma_{nm}$ denote the aggregated value of segment $\vec{V}_{nm}$, defined as $\gamma_{nm} = \vert \{B_{xy} \mid B_{xy} = 1 \} \vert$ over all pixels $x$, $y$ belonging to segment $\vec{V}_{nm}$. The proposed method operates upon $\gamma_{nm}$ only and is independent of the $x$ and $y$ components. Therefore, the segmentation matrix $I$ is modified as follows and is called the \textit{abstraction matrix} ($\Gamma$). \[ \Gamma = \bordermatrix{ & \text{Segment} \; 1 & \dots & \text{Segment} \; M \cr \text{Sector} \; 1 & \gamma_{11} & \dots & \gamma_{1M} \cr \dots & \vdots & \ddots & \vdots \cr \text{Sector} \; N & \gamma_{N1} & \dots & \gamma_{NM}} \qquad \] The matrix is independent of the coordinates of each vector in 2D Euclidean space. However, the coordinates are implicitly approximated by the order of the vectors. For instance, $\gamma_{2m}$ refers to the $m$-th segment of the sector at $2 \times (360/N)$ degrees. Finally, the $\Gamma$ matrix is normalized using the \textit{coefficient of variation} method. \subsection{Translation} \label{section: Translation} In most previous works, such as the Kabsch algorithm~\cite{kabsch1976solution}, translating the input shapes so that their centroids coincide with the center of the coordinate system, or with any specific coordinate, is a mandatory preprocessing step. In general, translation superimposition is inevitable for both partial and full Procrustes superimpositions. The abstraction vectors of the $\Gamma$ matrix are independent of the $x$ and $y$ parameters; hence, translation is not essential for the proposed method.
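The pipeline described so far (luminance estimation, binarization, polar segmentation, and count aggregation into $\Gamma$) can be sketched as follows. The synthetic frame, the choice of the frame center as the segmentation center, and the half-diagonal radius normalization are illustrative assumptions, not details fixed by the text:

```python
import numpy as np

def abstraction_matrix(rgb, N, M):
    """Build the N x M abstraction matrix Gamma from an RGB frame.

    Each entry gamma[n, m] counts the 1-pixels whose polar coordinates
    (about the frame center, radius normalized by the half-diagonal)
    fall in sector n, segment m.
    """
    h, w, _ = rgb.shape
    # Luminance L in [0, 255], then binarization B = floor(L / 128).
    L = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    Bimg = np.floor(L / 128).astype(int)

    # Polar coordinates of every pixel about the frame center.
    ys, xs = np.indices((h, w))
    dx, dy = xs - w / 2.0, ys - h / 2.0
    r = np.hypot(dx, dy) / np.hypot(w / 2.0, h / 2.0)  # normalized radius
    phi = np.degrees(np.arctan2(dy, dx)) % 360.0       # angle in [0, 360)

    # Sector and segment indices, clamped to the valid range.
    n_idx = np.minimum((phi / (360.0 / N)).astype(int), N - 1)
    m_idx = np.minimum((r * M).astype(int), M - 1)

    # Count aggregation over the 1-pixels of each segment.
    gamma = np.zeros((N, M), dtype=int)
    np.add.at(gamma, (n_idx[Bimg == 1], m_idx[Bimg == 1]), 1)
    return gamma

# Synthetic frame: a bright square patch on a dark background.
rgb = np.zeros((64, 64, 3))
rgb[8:24, 40:56] = 255.0
gamma = abstraction_matrix(rgb, N=8, M=8)
print(gamma.shape, gamma.sum())   # (8, 8) and the number of bright pixels
```

As the text notes, the size of $\Gamma$ is fixed by $N$ and $M$, so the same $8\times8$ matrix is produced regardless of the input resolution; the coefficient-of-variation normalization is omitted here for brevity.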
However, an alternative application of translation is defined, which enables partial match determination between shapes. For this application, the translation process between the two input shapes can be interpreted as moving the segmentation center of one shape to the coordinates pointed out by the segmentation vectors of the other shape (a process similar to translation superimposition in Procrustes analysis). Let $T_x$ and $T_y$ denote translations in the $x$ and $y$ directions, respectively, and let $T=T_x \times T_y$ (the Cartesian product of translations on the $x$ and $y$ coordinates) be the set of all possible translations. The partial match between two shapes is determined by a state-space search performed on the set $T$: all the transformations of $T$ are applied to the second shape, and the similarity between the first and second shapes is assessed (see Section~\ref{section: Similarity Measurement}). \subsection{Rotation} \label{section: Rotation} Rotation is a rigid-body motion of a space that maintains at least one point at its original location; here we fix the segmentation center and move the segmentation vectors. In other words, given that a frame is partitioned into $N$ equal sectors of $360/N$ degrees each, any rotation of $(360/N)j$ degrees for $j \in \{0, 1, \dots N-1\}$ is implemented as $j$ units of circular shift on $\Gamma$ (see Fig.\ref{Figure: ShapeRotation}). Following the aforementioned objective of using basic operations, rotation is implemented using the circular shift operation on $\Gamma$. Accordingly, given $N$ sectors (since rotation is a rigid-body transformation, this operation is independent of $M$), the set of rotation angles implemented using circular shifts on $\Gamma$ is defined as follows.
\begin{equation} R = \left\{ \frac{360}{N}i \,\middle|\, i= 0, 1, \dots N-1 \right\} \end{equation} The set $R$ defines a discrete set of rotation angles, which is hierarchically extended to a continuous domain using the iterative procedure discussed in Section~\ref{section: Iteration}. \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{Figures/ShapeRotation.pdf} \caption { Abstractions of two inputs, \textit{Shape A} and \textit{Shape B}, are given with partitioning parameters $N=6$ and $M=2$. There is a $120^\circ$ rotation difference between the two shapes. Given $N$ and $M$, the set of rotation angles based on $(360/6)j$ is $R = \{0^\circ, 60^\circ, 120^\circ, 180^\circ, 240^\circ, 300^\circ \}$. These rotations are implemented using circular shifts on $\Gamma$. Accordingly, if the transformation between the inputs is $(360/6)j$ degrees of rotation, it is superimposable by $j$ circular shifts of $\Gamma$. Therefore, shifting $\Gamma_B$ circularly four times results in the highest similarity value (e.g., $J=1$), which yields $120^\circ$ rotation as the best linear mapping transformation between \textit{Shape A} and \textit{Shape B}. } \label{Figure: ShapeRotation} \end{figure} \subsection{Similarity measurement} \label{section: Similarity Measurement} To determine the best linear mapping transformation, the presented method runs a state-space search on transformations, transforming the second shape and assessing its similarity with the first shape. In general, let $\Delta = T \times R$ be the Cartesian product of the translation set $T$ and the rotation set $R$. Note that if the alternative application of translation introduced in Section~\ref{section: Translation} is of no interest, then $\Delta = R$. The linear mapping transformations between \textit{Shape A} and \textit{Shape B} are ranked based on the similarity coefficients $J(\Gamma_A, \delta\Gamma_B)$ for $\delta \in \Delta$.
The similarity between any two elements $\gamma_{nm}^A \in \Gamma_A$ and $\gamma_{nm}^B \in \Gamma_B$ is measured using the \textit{Jaccard similarity coefficient}, denoted $j(\gamma_{nm}^A, \gamma_{nm}^B)$, and is calculated as follows. \begin{equation} j(\gamma_{nm}^A, \gamma_{nm}^B) = \frac{\vert \gamma_{nm}^A - \gamma_{nm}^B \vert}{\gamma_{nm}^A+\gamma_{nm}^B} \end{equation} The similarity between two abstracted shapes, denoted $J(\Gamma_A, \Gamma_B)$, is calculated as the sum of all pairwise Jaccard similarity indexes, as follows. \begin{equation} J(\Gamma_A, \Gamma_B) = \sum_{nm} j(\gamma_{nm}^A, \gamma_{nm}^B) \end{equation} The Tanimoto similarity coefficient~\cite{rogers1960computer} is an alternative to the Jaccard index; however, since both methods yield similar results, the Jaccard index is chosen. Additional alternatives to the Jaccard index are the S{\o}rensen similarity index~\cite{sorensen1948method}, the Bray–Curtis dissimilarity~\cite{bray1957ordination} (also known as the Czekanowski similarity index), the Pearson product-moment correlation, and the earth mover's distance~\cite{rubner2000earth}. The similarity assessment between any two $\gamma_{nm}^A$ and $\gamma_{nm}^B$ can optionally be extended by a neighborhood operation. Let $j(N_k(nm, i))$ denote the Jaccard index of neighbor $N_k$ of element $n$, $m$ at the $i$-th distance. The extended similarity coefficient $j'$ is calculated as follows for $d$ neighbors: \begin{equation} \begin{split} j'(\gamma_{nm}^A, \gamma_{nm}^B) =& j(\gamma_{nm}^A, \gamma_{nm}^B)\\ +& \sum_{i=1}^{d} \left[ \log_{d+2} (d+2-i) \sum_{K_i} j(N_k(nm, i)) \right] \end{split} \end{equation} \noindent This is an adaptive logarithmic neighborhood operation that assigns heavier weights to closer elements than to remote ones. \subsection{Iteration} \label{section: Iteration} How long does it take us to understand the gist of a shape?
Henderson et al.~\cite{henderson1998eye} obtained a typical scene fixation of $304$ms at 100\% luminance, and Rayner~\cite{rayner1998eye} estimated a $233$ms fixation time for an adult reading normal text (Kowler et al.~\cite{kowler1987reading} measured fixation patterns for reading reversed letters). The conceptual and perceptual information understood from a glance at an image frame is a function of the glance duration. Fei-Fei et al.~\cite{fei2007we} studied perception depth over time. They found that we perceive sensory information (e.g., dark and light) in roughly $50$ms; at $107$ms we determine more semantic aspects (e.g., people, room, urban, and water) with considerable accuracy; it takes $150$ms to determine an object (e.g., a dog); and at $500$ms we achieve maximum perception (e.g., identifying the dog as a German shepherd). Greene et al.~\cite{greene2009briefest} conducted a similar study and established a perceptual benchmark for the types of information we perceive during early perceptual processing; they inferred that it takes $63$ms to determine the naturalness of an image frame and $78$ms to understand whether it depicts a forest or not. Such global-to-local cognitive abilities inspired the present study to treat transformation determination as a multi-step procedure, as opposed to single-step methods. The longer we are exposed to a shape, the more we understand of it; in other words, the amount of information we perceive from a shape is a function of the number of image processing iterations performed on the shape. Accordingly, the present study defines a converging heuristic iterative method that determines the gist of the shapes at an initial approximation (the highest abstraction, corresponding to sensory information such as light/dark classification~\cite{fei2007we}); through translation superimposition followed by the similarity assessment procedure, the most abstracted transformation is determined. Then, the segmentation parameters are iteratively incremented.
At each iteration, a new abstraction with more detail than its predecessor is made. Also, at each iteration, the approximated transformations of the preceding iteration are tuned using the more detailed abstraction. This process is analogous to: from light, through animal and dog, to German shepherd~\cite{fei2007we,greene2009briefest}. In general, the proposed method determines an initial approximation of the best mapping transformations, and tunes it through successive iterations. The permutations of transformations at each iteration form a state-space that is traversed in best-first search fashion. This approach follows the traits of a \textit{greedy algorithm}~\cite{cormen2009introduction}, which makes the locally optimal choice. To the best of our knowledge, descriptions in the cognitive-science literature of how segmentations are processed overlap well with locally optimal search. However, one may consider updating the procedure to follow a globally optimal search method, to best suit the application requirements. Let $l \in \mathbb{N}$ denote an iteration coefficient, initialized with a user-defined parameter $\omega$. Let $\Gamma_A^l$ be the abstraction of shape $A$ at iteration $l$, with $N_l = 2^l$ sectors and $M_l=2^l$ segments. For readability of the method, the numbers of sectors and segments are chosen to be identical; however, extending the method to support diverse parameters is straightforward. Under this generalization, the amount of detail represented by each abstraction grows exponentially through the iterations. The growth rate can also be adapted to the application requirements by changing either the growth function or by considering $l\in \mathbb{R}$. Let $\Delta_l$ be the set of all transformations to be applied on $\Gamma_A^l$ at iteration $l$, with $\Delta_\omega = T \times R$.
Let $\Upsilon_l = \{\delta_1 \dots \delta_i \dots \delta_\epsilon \}$ be the set of top-$\epsilon$ transformations (i.e., those with the highest similarity) at iteration $l$, for a user-defined parameter $\epsilon$. Iteration $l$ tunes the best transformations of iteration $l-1$. Accordingly, $\Delta_l$ consists of all tunes of $\Upsilon_{l-1}$, formally defined as follows for the user-defined parameter $\lambda$ that specifies the tuning range. \begin{equation} \forall j \in \{0, 1, \dots \lambda\} \colon \Delta_l = \{(2^{l-\omega} \delta_{(l-1)i} ) \pm j \} \end{equation} For instance, suppose $\omega=3$; then $N=8$ and $M=8$, and assuming only rotation superimposition, we obtain $\Delta_3 = \{0,1,2, \dots 7\}$, the numbers of circular shifts on $\Gamma_A$ corresponding to $\{0^\circ, 45^\circ, 90^\circ, \dots 315^\circ\}$. Suppose $\epsilon=1$, $\Upsilon_3=\{2\}$, and $\lambda=1$; accordingly, $\Delta_4$ is calculated as follows. $\Delta_4=\{(2^{4-3} \times 2) \pm \{0, 1\}\}$ \noindent which corresponds to $\{67.5^\circ, 90^\circ, 112.5^\circ\}$. The pseudo code of the iteration procedure is given in Algorithm~\ref{Algo: IterationAlgorithm}.
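The candidate refinement just illustrated can also be sketched as a toy program. This is a hypothetical stand-in of ours, not the authors' implementation: the similarity assessment on the $\Gamma$ matrices is replaced by a surrogate score (closeness to a hidden true rotation), so only the bookkeeping that builds $\Delta_l$ from the survivors $\Upsilon_{l-1}$ is shown; \texttt{omega}, \texttt{eps}, and \texttt{lam} mirror the user-defined parameters $\omega$, $\epsilon$, and $\lambda$.

```python
# Toy sketch of the coarse-to-fine rotation refinement.  The Gamma-based
# similarity assessment is replaced by a stand-in score (closeness to a
# hidden "true" rotation), so only the candidate bookkeeping is illustrated.

def refine(true_deg, omega=3, eps=2, lam=1, max_l=8):
    l = omega
    cands = list(range(2 ** l))          # all circular shifts at level omega
    while True:
        step = 360.0 / 2 ** l            # degrees per circular shift

        def score(c):                    # surrogate for J(Gamma_A, d Gamma_B)
            d = abs(c * step - true_deg) % 360.0
            return -min(d, 360.0 - d)

        top = sorted(cands, key=score, reverse=True)[:eps]   # Upsilon_l
        if l == max_l:
            return top[0] * step         # best tuned rotation, in degrees
        l += 1
        # each survivor is rescaled to the finer grid and tuned by +/- j
        cands = sorted({2 * c + s * jj for c in top
                        for jj in range(lam + 1) for s in (-1, 1)})

print(refine(234.0))   # converges to within one shift (1.40625 deg) of 234
```

With these toy parameters the estimate after six levels ($l=3$ to $l=8$) is within $360/2^8 \approx 1.4^\circ$ of the true rotation, mirroring the accuracy bound derived in the validation subsection.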
\begin{algorithm} \caption{Iteration Algorithm}\label{Algo: IterationAlgorithm} \begin{algorithmic}[1] \Procedure{Iterate}{} \State $l \gets \omega$ \State $\Delta_l \gets T \times R$ \State $N_l \gets 2^l$ \State $M_l \gets 2^l$ \State \textbf{Build} $\; \Gamma_A^l \;$ and $\; \Gamma_B^l \;$ \State $\Upsilon_l \gets \text{apply} \; \Delta_l \; \text{on} \; \Gamma_A^l \; \text{and get top-}\epsilon \; \text{transformations}$ \If {$l < \max l$} \State $l \gets l+1$ \State $\Delta_l \gets \text{all tunes of} \; \Upsilon_{l-1}$ \State \textit{Goto} 4 \Else \State report $\;\Upsilon_l\;$ as the best mapping transformations \EndIf \EndProcedure \end{algorithmic} \end{algorithm} \subsection{Validation and verification} \label{section: Validation} If the difference between two input image frames is $\theta^\circ$ and $\theta \in \Delta_\omega$ then, according to Section~\ref{section: Rotation}, $\theta$ is determined using circular shifts on $\Gamma$. However, if $\theta \notin \Delta_\omega$, then $\theta$ is determined using the iterative process. To see this, consider the following hypothesis: \begin{equation} \exists n \in \{1, 2, \dots N \} \colon \frac{360}{N}(n-1) < \theta \leq \frac{360}{N}n \end{equation} \noindent By the definition of segmentation, these are the boundaries of the $n$-th sector, which is divided into $M$ equal segments. According to the hypothesis, $\theta$ belongs to one and only one sector; therefore, as the range is narrowed down, we get closer and closer to $\theta$ (the motivation for the iteration procedure). At each iteration, the results of the former iteration are tuned until a result with a user-defined accuracy ($\rho$) is obtained. The maximum number of iterations the algorithm performs is calculated as follows.
\begin{itemize} \item The segmentation area should be as narrow as $\rho^\circ$, therefore: \begin{equation} \rho = \frac{360}{N}n- \frac{360}{N}(n-1) \rightarrow \rho = \frac{360}{N} \end{equation} \item Segmentation grows exponentially through the iterations, hence: \begin{equation} N_l = 2^l \rightarrow \frac{360}{\rho} = 2^l \rightarrow l = \log_2 \frac{360}{\rho} \rightarrow l = \lceil \log_2 \frac{360}{\rho} \rceil \end{equation} \end{itemize} \section{Results} The accuracy of the proposed method is assessed using $100$+ pairs of image frames with diverse resolutions spanning from $50 \times 50$ to $1000 \times 1000$ pixels and covering different categories (e.g., animals, cars, airplanes, people, and abstract images). Additionally, the impact of noisy image frames on the accuracy of the proposed method is assessed using image frames with up to $70$\% random noise. The evaluations are designed as follows: \begin{inlinelist} \item \textit{Shape A} is a BMP image, or an abstract shape composed of lines, circles, and random noise drawn using features integrated in the implemented tool \item \textit{Shape B} is obtained by a $\theta$-degree rotation of \textit{Shape A} plus a random percentage of noise \item \textit{Shape A} and \textit{Shape B} are used as inputs to the proposed method \item the central tendency of the determined rotations is measured as the weighted arithmetic mean among the top-3 (WM3) rotations, and it is compared with $\theta$. \end{inlinelist} The assessments are performed with the default parameters $\omega = 3$, $\lambda=10$, and $\epsilon=10$, and the similarity between two segments is calculated excluding neighbor segments. The results of the experiments are discussed as follows.
\subsection{Top transformations converge rapidly} The fundamental purpose of the iterations is to progressively increase the level of detail of the image frame abstraction and thereby iteratively improve the accuracy of the approximated transformations, until a user-defined precision criterion is met. The weighted sample variance among the top-3 (WV3) approximated transformations provides a measure of dispersion over the top approximations. The WV3 reflects the variability in the top-3 approximated transformations: a small WV3 suggests a very reliable WM3, while a large WV3 reflects uncertainty about the ``best'' linear mapping transformation. According to the experiments, WV3 approaches $1$ within a few iterations, which indicates (a) rapid convergence among the top approximated transformations (confirming the validity of the iteration procedure discussed in Section~\ref{section: Validation}), and (b) that $\textit{WV3} \approx 1$ after a few ($>6$) iterations confirms the accuracy of the rapidly converged approximated transformations. \begin{figure*}[!ht] \centering \includegraphics[width=0.9\textwidth]{Figures/RotationExample.pdf} \caption { \textit{Shape A} is loaded from a BMP image, and \textit{Shape B} is obtained by a $270^\circ$ rotation of \textit{Shape A}. The $\Gamma$ matrices of both shapes at different iterations are presented by circular heatmaps. T: determined transformation, S: standard deviation among the top-3 determined transformations, D: difference between the actual and determined transformations. The normalized similarity index $J(\Gamma_A, \delta \Gamma_B), \forall\delta \in \Delta$ is plotted using a circular heatmap for all the iterations, see panels A2 and B2. } \label{Figure: 270} \end{figure*} \subsection{Tuning out the cognitive noise} Selective visual attention filters out stimuli irrelevant to the subject's task through mechanisms such as habituation and cognitive inhibition.
There have been promising efforts to model this ability (e.g.,~\cite{tsotsos1995modeling}) since the \textit{spotlight}~\cite{eriksen1972temporal} and \textit{zoom lens}~\cite{eriksen1986visual} models. Additionally, the perceived visual information is a function of an observer's distance from an object. This aspect has a variety of applications; notably, Oliva et al.~\cite{oliva2006hybrid} incorporate it into hybrid images. A hybrid image is composed of two image frames with low and high spatial frequencies, such that either is perceived as noise depending on the observer's distance from the hybrid image frame. In other words, the image of high spatial frequency is dominant at close distance, while the image of low frequency is perceived at far distance. Whether the noise is a masked image or an irrelevant stimulus, it does not prevent an observer from extracting the relevant information from an image frame. Therefore, the performance of the proposed method in approximating the linear mapping transformation from noisy image frames is assessed by experiments in which a percentage of \textit{Shape B} is covered with random noise. To this end, an experiment of four tests, $T1$, $T2$, $T3$, and $T4$, is conducted (see Fig.~\ref{Figure: NoiseImpact}). The tests have \textit{Shape A} in common, which is a BMP image of a bee. \textit{Shape B} is created by a $234^\circ$ rotation of \textit{Shape A}, and differs among the tests in the amount of incorporated random noise. The subject in \textit{Shape A} (i.e., the bee) is represented by $\approx230$K pixels (of the $584$K pixels of the image frame). A portion of $120$K pixels (out of the $\approx230$K pixels) is subject to random noise. This portion is intentionally chosen to cover the body of the bee, which presents the majority of the perceptible features of the subject.
Given that the pixels are binary and the figure is represented by pixels of value 1 (see Section~\ref{section: Shapre Representation}), the random noise is created by setting the value of a random pixel to $1$ in the subject-to-noise portion of \textit{Shape B}. The random noise is added through an iteration of $0$, $5$K, $50$K, and $500$K random pixel selections (a pixel can be selected multiple times) for $T1$, $T2$, $T3$, and $T4$, respectively (see Fig.~\ref{Figure: NoiseImpact}); as a result, the majority of the perceptible features of \textit{Shape B} are covered with random noise in $T4$. The initial segmentation parameter ($\omega = 3$) provides a limited number of variant initial approximations (see Section~\ref{section: Iteration}). Therefore, the WV3 at the first iteration (i.e., $l=3$) of $T1$, $T2$, $T3$, and $T4$ shows relatively high dispersion, which indicates an inconsistent WM3 (see Fig.~\ref{Figure: NoiseImpact}). The initial approximations are tuned at the second iteration (i.e., $l=4$), which improves WV3 by more than a factor of six (from $118$ to $18$) for $T1$, $T2$, and $T3$. Despite a minor discrepancy, the WM3 of tests $T1$, $T2$, and $T3$ is relatively close to the actual transformation (i.e., $234^\circ$). However, the considerable noise of $T4$ prevents its WV3 from converging at the same rate as those of $T1$, $T2$, and $T3$ (see Fig.~\ref{Figure: NoiseImpact}). The third iteration (i.e., $l=5$) improves the approximations and brings the WV3 of all the tests to the same scale, and accordingly provides a reliable WM3. Further iterations squeeze the approximations and reach $\text{WV3}=1.1$ for all tests at the sixth iteration (i.e., $l=8$), which indicates a considerable consistency of WM3. Therefore, the method determines WV3 and WM3 for all tests at the same scale, despite the considerable amount of noise (especially in $T4$). This confirms that even a small number of perceptible features of the figures is adequate to tune the initial approximations into reliable approximations.
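The noise-injection protocol described at the start of this experiment, repeated random pixel selections with replacement inside a fixed region, can be sketched as follows. This is a toy stand-in of ours: a synthetic $100\times100$ region rather than the bee image, and the fixed seed is an added assumption for reproducibility.

```python
import random

def add_noise(frame, region, k, seed=0):
    """Perform k random pixel selections inside `region` (a list of (x, y)
    coordinates), setting each selected pixel to 1 as described in the text;
    a pixel may be drawn multiple times.  `frame` is a dict mapping (x, y)
    to 0/1, a stand-in for the binary frame representation."""
    rng = random.Random(seed)      # seeded for reproducibility (assumption)
    for _ in range(k):
        frame[rng.choice(region)] = 1
    return frame

# toy frame: a 100x100 subject-to-noise region, as in tests T1..T4
region = [(x, y) for x in range(100) for y in range(100)]
frame = {p: 0 for p in region}
add_noise(frame, region, 5000)
print(sum(frame.values()))         # pixels actually set (< 5000, with repeats)
```

Because selection is with replacement, the number of distinct noisy pixels grows sublinearly in $k$; this is why the $500$K-selection test $T4$ saturates most of its $120$K-pixel region while $T2$ does not.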
For details of the noise impact on other approximations, refer to Supp. Fig.2.17-20. \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{Figures/NoiseImpact.pdf} \caption { Evaluation of the impact of random noise on transformation determination. } \label{Figure: NoiseImpact} \end{figure} \subsection{Image resolution defines the maximum number of iterations} When abstracting an image frame, up until a certain iteration a segment consists of multiple pixels. Beyond that iteration, however, a segment might be smaller than a pixel (i.e., one pixel belongs to multiple segments). To determine the segment to which a pixel belongs, the method rounds the position of the pixel. Therefore, beyond a certain iteration, the rounding procedure can increase the distance between the abstractions of two image frames. In that regime, the WV3 converges up until a certain iteration and saturates beyond it, and so does the WM3 (see Supp. Fig.2.22-23). Therefore, the maximum number of iterations, and accordingly the numbers of \textit{segments} and \textit{sectors}, are a function of the shape resolution. \subsection{Pin-pointed transformation vs. condensed approximations} The linear mapping transformation between two shapes is determined either as a single transformation with a considerable discrepancy from the rest of the approximations (e.g., panel A in Fig.~\ref{Figure: 270}), or as a condensed distribution of approximated transformations around the actual transformation (e.g., panel B in Fig.~\ref{Figure: 270}). This behavior originates from the discrete representation of image frames (raster graphics): when drawing a \textit{Shape B} from \textit{Shape A}, a pixel of \textit{Shape A} is mapped to a rounded position on \textit{Shape B}. Therefore, pixels of \textit{Shape A} can overlap when mapped onto \textit{Shape B}.
For instance, the two pixels at $\langle x_1=4, y_1=4 \rangle$ and $\langle x_2=4, y_2=5 \rangle$, belonging to the segment/sector $V_{nm}$ of \textit{Shape A}, map under a $70^\circ$ rotation to the positions $\langle x_1^\prime = -0.562, y_1^\prime = 5.628 \rangle$ and $\langle x_2^\prime = -1.33, y_2^\prime = 6.262 \rangle$, respectively. As the coordinates are rounded, the two pixels map to the position $\langle -1, 6 \rangle$ belonging to the segment/sector $V_{n'm'}$ of \textit{Shape B}. Therefore, two pixels of \textit{Shape A} map to one pixel of \textit{Shape B} (a non-injective mapping). Accordingly, when abstracting the shapes using the aggregation function \textit{count} (see Section~\ref{section: Shape Segmentation}), the abstraction parameters are calculated as $\gamma_{nm} = 2$ and $\gamma_{n'm'} = 1$ (e.g., see the comparison of $\gamma$ value distribution plots in Supp. Fig.2.1-22). Hence, comparing $\gamma_{nm}$ and $\gamma_{n'm'}$ yields $j(\gamma_{nm}, \gamma_{n'm'}) = 0.33$ as opposed to the expected $j(\gamma_{nm}, \gamma_{n'm'}) = 0$. Such scenarios prevent ``pin-pointing'' the actual transformation (in this case $70^\circ$) and instead produce a condensed distribution of transformations around the actual transformation (e.g., see panel B in Fig.~\ref{Figure: 270}). \subsection{A small similarity is sufficient to determine a reliable linear mapping approximation} The ideal scenario for comparing two shapes is when there exists a one-to-one correspondence (a bijection) between the pixels of the two shapes. However, for a variety of reasons discussed below, the rotation function on raster graphics is not injective. For instance, the rotation function may map multiple pixels of \textit{Shape A} to one pixel of \textit{Shape B}, causing a percentage of deformation in \textit{Shape B} (e.g., see Supp. Fig.2.21) and preventing ``pin-pointing'' of the actual transformation (as discussed above).
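The collision mechanism can be reproduced directly. The sketch below (our own illustration) uses the standard counterclockwise rotation about the origin; since the coordinates quoted in the text may follow a different convention, we pick a different pixel pair, $\langle 1,2\rangle$ and $\langle 1,3\rangle$, that collides under a $70^\circ$ rotation with this convention, and then feed the resulting count mismatch through the similarity coefficient:

```python
import math

def rotate_round(p, deg):
    """Rotate pixel p counterclockwise about the origin, then round each
    coordinate to the nearest raster cell."""
    t = math.radians(deg)
    x, y = p
    return (round(x * math.cos(t) - y * math.sin(t)),
            round(x * math.sin(t) + y * math.cos(t)))

# two distinct pixels collapse onto one raster cell under a 70-degree turn
print(rotate_round((1, 2), 70), rotate_round((1, 3), 70))  # both -> (-2, 2)

# the resulting count mismatch, fed through the similarity coefficient
gamma_nm, gamma_pm = 2, 1
print(abs(gamma_nm - gamma_pm) / (gamma_nm + gamma_pm))    # 0.333...
```

The nonzero coefficient for what should be identical segments is exactly the rounding artifact that smears the similarity peak into a condensed distribution.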
Additionally, shapes may be subject to noise, which also prevents a one-to-one correspondence between the two shapes. Moreover, \textit{Shape A} may consist of congruent figures (e.g., two side-by-side circles of the same radius); if \textit{Shape B} is obtained by a $\theta^\circ$ rotation of \textit{Shape A}, then in addition to $\theta^\circ$, multiple rotation angles may map the congruent figures onto each other. In such cases, the actual transformation is determined using the incongruent elements (e.g., the saddle area or the pedal of the bicycle in Fig.~\ref{Figure: 270}). Such prominent details not only improve the approximations for congruent shapes, but are also advantageous when the majority of the shape is covered by noise (e.g., Fig.~\ref{Figure: NoiseImpact}) or is deformed (e.g., Supp. Fig.2.21). The method discussed in the present study minimizes the impact of such discrepancies on the determination of the linear mapping transformation by calculating the similarity of two corresponding segments independently of the rest of the segments (an adaptive neighborhood operation of custom range is optionally enabled). Therefore, a higher similarity between a few segments is adequate to determine the mapping transformation with considerable accuracy. The experiments on deformed, congruent, and noisy image frames illustrate the accuracy of the proposed method in such scenarios.
\section{Introduction: Gravity As Torsion Instead of Curvature} \label{intro} Einstein's theory of general relativity recently celebrated its centenary in 2015, and has so far passed all experimental and observational solar system tests with flying colors. Nevertheless, there remain a few mysteries in astrophysics and cosmology that could be a sign that general relativity needs to be modified on larger scales. For galaxies and clusters there are dynamical discrepancies that could be explained by large amounts of dark matter or by some alternative theory~\cite{1101.1935,1611.02269}. Another issue is the observation that the expansion of our Universe is apparently accelerating~\cite{perlmutter, riess}. A simple explanation is the presence of a positive cosmological constant $\Lambda$, so that the Universe is asymptotically de Sitter in the far future rather than asymptotically flat, but other possibilities cannot yet be excluded. These problems motivated the search for a modified theory of gravity, which would agree with general relativity in the regimes where the latter has been well tested, but would nevertheless better account for the larger scale observations, perhaps giving a more ``natural'' explanation for the cosmic acceleration \emph{without} the need for $\Lambda$. The literature has no shortage of such modified gravity theories, many of which modify general relativity at the level of the action. The most straightforward example is $f(R)$ gravity, in which the scalar curvature $R$ in the Hilbert-Einstein action is replaced by a function $f(R)$. Another class of gravity theories, known as teleparallel gravity, stands out among the rest, for it considers a connection that is curvatureless but torsionful\footnote{We emphasize that torsion, like curvature, is a property of a given connection.
Even in a theory with both curvature and torsion, such as the Einstein-Cartan theory, torsion has a clear geometric meaning, and it is best to treat it as such (from the point of view of well-posedness of the evolution equations~\cite{nesterwang}), rather than as ``just another field'' coupled to standard general relativity.}. Recall that in general relativity (GR), the metric-compatible Levi-Civita connection has nonzero curvature but vanishing torsion. Gravity is therefore modeled entirely by the effect of spacetime curvature. It may thus seem rather surprising that there exists a teleparallel equivalent of general relativity (TEGR, or simply GR$_\parallel$), which by construction has zero curvature. For a detailed discussion see~\cite{Pereira.book}. TEGR models gravity as a torsional effect, but is otherwise completely equivalent to general relativity, at least at the action level~\cite{Hayashi, Pereira, Kleinert, Sonester}. Since curvature is identically zero in a teleparallel theory, there is a \emph{global absolute parallelism}. In such theories the torsion tensor encodes all the information concerning the gravitational field.\footnote{Here we are considering the standard metric-compatible type of teleparallel theory. There are more general alternatives with both torsion and non-vanishing non-metricity~\cite{Nester:1998mp, Adak:2005cd, Adak:2008gd} which also merit consideration.} By suitable contractions one can write down the corresponding Lagrangian density --- assuming invariance under general coordinate transformations and global Lorentz and parity transformations, and using quadratic terms in the torsion tensor $T{}^\rho{}_{\eta\mu}$~\cite{Hayashi}.
There is a certain combination of considerable interest, the so-called TEGR ``torsion scalar'': \begin{equation}\label{ts1} T = \frac{1}{4} T{}^\rho{}_{\eta\mu}T_\rho{}^{\eta\mu} + \frac{1}{2} T{}^\rho{}_{\mu\eta}T{}^{\eta\mu}{}_\rho - T_{\rho\mu}{}^\rho T{}^{\nu\mu}{}_\nu, \end{equation} which is equivalent to the scalar curvature obtained from the standard Levi-Civita connection, up to a total divergence. A more general quadratic ``torsion scalar'' can be obtained by relaxing the coefficients: \begin{equation}\label{abc} T[a,b,c] = a T{}^\rho{}_{\eta\mu}T_\rho{}^{\eta\mu} + b T{}^\rho{}_{\mu\eta}T{}^{\eta\mu}{}_\rho + c T_{\rho\mu}{}^\rho T{}^{\nu\mu}{}_\nu. \end{equation} Only in the case $a=1/4$, $b=1/2$, and $c=-1$ does the theory become equivalent to GR. Finally, $f(T)$ gravity arises as a natural extension of TEGR if one generalizes the Lagrangian to be a function of $T$~\cite{0812.120, 1005.3039}; see~\cite{1511.07586} for a review. One could in fact consider even more general teleparallel theories beyond $f(T)$ theory. Here, let us emphasize some aspects of teleparallel gravity theories. As explained above, a \textit{teleparallel} theory is one described by a \textit{connection} which is \textit{flat}, i.e., whose curvature vanishes. On the other hand, one can also formulate a theory described purely in terms of the \emph{frame} (or the co-frame), with no mention of \emph{any} connection. In 4-dimensions this is known as a tetrad theory. It turns out that frame theories and teleparallel theories are essentially equivalent. This has long been understood to be the case, but we found that there are a couple of subtleties that have not been addressed in the literature. \emph{Firmly establishing this equivalence is the main objective of the present work}. If one has a teleparallel geometry then, starting at any point, one can choose a basis for the tangent space there.
Then one can parallel transport it along any path to every other point in the space. Since the curvature vanishes, the transport is unique, independent of the path. This constructs a \emph{smooth} global ``preferred'' frame field\footnote{Hence, when trying to solve the equations, one \emph{cannot} hope to get any sensible results by choosing an ansatz frame that is singular, for example the spherical frame (unless one introduces suitably flat connection coefficients, which cancel the singularity in the frame, so that the torsion tensor is smooth; for a concrete example of this see Section VII in Obukhov \& Pereira~\cite{OP}).}, in which the connection coefficients vanish; thus one gets a pure frame description --- unique up to an overall constant linear transformation. Conversely, if one has a ``preferred'' smooth frame field, it allows one to introduce a specific parallel transport rule: namely, that vectors are transported along paths by keeping their components constant in this frame. This transport rule is path independent, and the associated curvature vanishes. The resulting connection has vanishing coefficients in this preferred frame. (This is what is meant by having a connection which is zero.) Note that geometrically these concepts make sense without any need for a metric tensor (the torsion tensor, as well as the curvature tensor, can be defined for any connection without using any metric). Suppose we also have a Lorentzian metric (this gives the spacetime a local causal structure); then there is a distinguished subset of possible teleparallel connections which are metric compatible. With such a connection, if one chooses an orthonormal frame at one point, its parallel transport to all other points gives a global orthonormal frame field. Conversely, a global orthonormal frame field determines a metric-compatible teleparallel connection.
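The flatness claim above can be checked symbolically. The following sketch is our own illustration, not part of the paper; it assumes the \texttt{sympy} library and an arbitrarily chosen invertible 2D frame. It builds the connection that makes the given frame field parallel, written in a holonomic basis as $\Gamma^\lambda{}_{\mu\nu} = e_A{}^\lambda \partial_\mu e^A{}_\nu$, and verifies that its curvature vanishes identically while its torsion does not.

```python
# Symbolic sanity check (2D, sympy): the connection induced by declaring a
# smooth frame field parallel has identically zero curvature, while its
# torsion is generically nonzero.  The sample frame is an arbitrary
# invertible choice of ours, not taken from the paper.
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)
e = sp.Matrix([[sp.exp(x), y], [0, 1 + x**2]])   # frame components e^A_mu
einv = e.inv()                                   # inverse frame e_A^mu

def G(l, m, n):
    """Connection coefficient Gamma^l_{mn} = e_A^l * d_m e^A_n."""
    return sum(einv[l, A] * sp.diff(e[A, n], coords[m]) for A in range(2))

def R(l, s, m, n):
    """Curvature component R^l_{smn} of that connection."""
    val = sp.diff(G(l, n, s), coords[m]) - sp.diff(G(l, m, s), coords[n])
    val += sum(G(l, m, r) * G(r, n, s) - G(l, n, r) * G(r, m, s)
               for r in range(2))
    return sp.simplify(val)

# every curvature component vanishes identically: the transport is flat
curv = [R(l, s, m, n) for l in range(2) for s in range(2)
        for m in range(2) for n in range(2)]
print(all(c == 0 for c in curv))      # True

# ... but the torsion T^l_{mn} = Gamma^l_{mn} - Gamma^l_{nm} survives
tors = sp.simplify(G(0, 0, 1) - G(0, 1, 0))
print(tors != 0)                      # True: the geometry is torsionful
```

The same cancellation holds for any smooth invertible frame in any dimension, which is the algebraic content of the path-independence argument.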
Furthermore, any global frame field determines a metric by \textit{defining} the frame to be orthonormal. One crucial aspect that has to be checked for \emph{any} theory of gravity is the number of degrees of freedom it contains. The number is two for general relativity in 4-dimensions\footnote{The number of degrees of freedom for GR in $n$-dimensions is $n(n-3)/2$. In the language of waves, this is the number of polarizations. It is well known that in 3-dimensions general relativity becomes a topological theory, in which there are no propagating degrees of freedom, and thus also no gravitational waves.}. Although TEGR has the same number of degrees of freedom as general relativity\footnote{There are subtleties even in the TEGR case --- in TEGR one physical system is represented by a whole gauge equivalence class: an \emph{infinite set of geometries}, each with its own torsion and distinct teleparallel connection. In the pure frame representation, the gauge freedom looks simply like local Lorentz gauge freedom; however, it really corresponds to a whole \emph{equivalence class} of teleparallel geometries, with gauge equivalent torsions.}, a generic teleparallel theory does not. In the case of $f(T)$ gravity, Miao Li et al.~\cite{Li:2011rn} --- utilizing the Dirac constraint technique along with Maluf's Hamiltonian formulation~\cite{0002059} --- concluded that in 4-dimensions there are generically 5 degrees of freedom: namely, in addition to the usual 2 degrees of freedom in the metric, the tetrad carries 3 degrees of freedom. For a more intuitive understanding of why 5 degrees of freedom could be expected in such a theory, see Sec.~2 of~\cite{1412.8383}. {\color{black}(Recently, a Hamiltonian analysis of $f(T)$ gravity was carried out by Ferraro and Guzm\'an~\cite{1802.02130}. They claimed that $f(T)$ gravity contains only 3 degrees of freedom, not 5.
According to our understanding their analysis has some problems, but clarifying these issues is beyond the scope of the present work.)} The extra degrees of freedom in $f(T)$ gravity are highly nonlinear: they do not manifest even at the level of second-order perturbations around an FLRW background~\cite{1212.5774}. In fact, it is expected that they will give rise to problems such as superluminal propagation and ill-posedness of the Cauchy problem in $f(T)$ gravity, i.e., given an initial condition, the evolution equations cannot uniquely determine the future state of the system. This would be a disaster, because it means that physics has lost its predictive power. For comprehensive discussions of this issue, see~\cite{1303.0993, 1309.6461, 1412.8383}. In view of this issue, it is important to further understand the degrees of freedom in $f(T)$ gravity and other teleparallel theories. As mentioned, this class of physical theories can be regarded in (at least) two different ways: as a theory formulated purely in terms of an (orthonormal) frame, or as a theory with both a frame and a flat connection. The flat-connection condition can be achieved dynamically by using a Lagrange multiplier to enforce vanishing curvature. The frame-connection-multiplier formulation is a particular subclass of the general metric-affine gravity theories; see \S 5.9 in~\cite{MAG}. As we remarked, it has generally been understood that one can achieve the desired result using this Lagrange multiplier approach. Upon examination we found that there are some subtle aspects which have not all been addressed in the existing literature, including~\cite{MAG, 0002022, 0006080, OP}. We note that the issue is not trivial. The vanishing-curvature constraint depends on the connection coefficients and their first partial derivatives.
In classical mechanics, it is well known that one cannot in general achieve the desired result by introducing into the action, with Lagrange multipliers, a constraint which depends on the \emph{time derivatives} of the dynamical variables. The standard counter-example of such a \emph{non-holonomic} constraint is ``rolling without slipping'' (for discussions see~\cite{Goldstein} pp 14--16 and~\cite{BatesNester2011}). Likewise, in field theory one cannot \emph{in general} introduce via Lagrange multipliers constraints which depend on the derivatives of the fields; however, sometimes this does produce the desired result. \emph{We do not know of any general results, so we need to check each case carefully}. \section{The Lagrange Multiplier Approach} \label{lagrangemultiplier} The representation in terms of a \emph{non-vanishing} teleparallel connection may give some insights. Enforcing vanishing curvature via a Lagrange multiplier has been treated in many sources, including Kopczy\'nski~\cite{Kop}, Hehl et al.~\cite{MAG} and Blagojevi\'c~\cite{A}. See also~\cite{0002022} and~\cite{HHSS}. This can be done even for the most general metric-affine gravity theory, or for the \emph{a priori} metric-compatible case such as $f(T)$ gravity. Our formulation here will essentially be like the Obukhov-Pereira metric-affine formulation~\cite{OP}. It is straightforward to restrict that approach to our needs by completely eliminating the metric using orthonormal frames. There are interesting technical details about how the number of independent components of the dynamical equations works out so that this approach is {\em equivalent} to the approach with {\em a priori} vanishing connection.
The equivalence has, until now, not been explicitly shown at this level of detail, although most of the underlying ideas were implicit in the earlier foundational works of Blagojevi\'c and Nikoli\'c~\cite{0002022}, of Blagojevi\'c and Vasili\'c~\cite{0006080}, and of Obukhov and Pereira~\cite{OP}. The Lagrange multiplier formulation was also mentioned in a more recent work by Golovnev, Koivisto, and Sandstad~\cite{1701.06271}, but the counting of the number of components was not carried out. We will demonstrate the equivalence in this section. However, let us first clarify what it means \emph{not} to set the connection to zero. In the usual formulation of $f(T)$ gravity, the Weitzenb\"{o}ck connection is defined by \begin{equation} \overset{\mathbf{w}}{\Gamma}{}^\lambda{}_{ \nu\mu} = \tilde{e}_A^{~\lambda}\partial_\mu \tilde{e}^A_{~\nu}. \end{equation} This expression actually corresponds to a very specific choice of frame in which the frame connection coefficient, often referred to as the \emph{spin connection}, vanishes --- hence we have used $\tilde{e}$ to denote such a preferred \emph{orthoparallel} frame (Kopczy\'nski~\cite{Kop} called such frames OT, standing for ``orthonormal teleparallel''). However, the Weitzenb\"{o}ck connection is well-defined even if we keep the frame connection nonzero~\cite{1510.08432, Pereira.book}: \begin{equation} \label{Weitzenb1} \overset{\mathbf{w}}{\Gamma}{}^\lambda{}_{ \nu\mu} =e^\lambda{}_A \partial_\mu e^A{}_\nu+e^\lambda{}_A\omega^A{}_{B\mu} e^B{}_\nu, \end{equation} where $\omega^A{}_{B\mu}$ is the frame connection coefficient defined via $\omega^A{}_B=\omega^A{}_{B\mu} d x^\mu$. In this work, Greek indices $\left\{\mu,\nu,\cdots \right\}$ run over all spacetime local coordinates, while capital Latin indices $\left\{A,B, \cdots \right\}$ refer to the orthonormal frame. We remark that this formula is not special to the Weitzenb\"ock connection.
It takes any connection components $\omega$ in the frame with upper case Latin indices to the components of the same connection in a frame with Greek indices (which are holonomic here). There is in general no special restriction on the connection. For our purpose, $\omega$ corresponds to a flat (Weitzenb\"ock) connection, but it need not vanish. One could then calculate the torsion tensor, the torsion scalar $T$, the action given the explicit form of the function $f(T)$, and the field equations, using the above Weitzenb\"{o}ck connection (\ref{Weitzenb1}). For instance, the torsion tensor now reads \begin{equation} \label{torsion2} {T}^\lambda{}_{\mu\nu}=\overset{\mathbf{w}}{\Gamma}{}^\lambda{}_{\nu\mu} - \overset{\mathbf{w}}{\Gamma}{}^\lambda{}_{\mu\nu}. \end{equation} However we do not gain anything new, since all this just says that the connection 1-form is non-zero if we go to another basis that is different from the orthoparallel frame. In fact, we can work in the Lagrange multiplier approach, and see that the degrees of freedom of the theory remain unchanged. To be more specific, our claim is this: \begin{quote} \emph{The amount of information in any teleparallel theory of gravity in which curvature is constrained to vanish via a Lagrange multiplier is the same as that in the formulation in which the connection is set to zero a priori.} \end{quote} To see this, let us first consider a general Lagrangian density (i.e., a 4-form in 4-dimensions) of the form\footnote{For simplicity we do not discuss any matter source fields; they do not play an essential role in the issue we are addressing.} \begin{equation} \mathcal{L}(g, \theta, Dg, T, R, \lambda), \end{equation} where $g$ is the metric tensor, $Dg$ is the covariant differential of the metric, $T$ is the torsion 2-form, $R$ is the curvature 2-form, and $\lambda$ is a Lagrange multiplier, all of which are written abstractly for convenience. 
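As an aside, the key property underlying this claim, namely that a Weitzenb\"ock connection is flat but generically torsionful, can be verified directly in components. The following Python/SymPy sketch (our own illustration; the two-dimensional coframe is an arbitrary choice) builds $\overset{\mathbf{w}}{\Gamma}{}^\lambda{}_{\nu\mu}=e^\lambda{}_A\partial_\mu e^A{}_\nu$ in a frame with vanishing $\omega^A{}_{B\mu}$, and checks that its curvature vanishes identically while its torsion does not:

```python
import sympy as sp

t, x = sp.symbols('t x')
a = sp.Function('a')(t, x)
g = sp.Function('g')(t, x)
coords = (t, x)
dim = 2

# Coframe components e^A_mu: theta^0 = dt + g dx, theta^1 = a dx (illustrative choice)
e = sp.Matrix([[1, g], [0, a]])
e_inv = e.inv()  # the inverse frame e^mu_A

# Weitzenbock connection in a frame with vanishing spin connection:
# Gamma^lam_{nu mu} = e^lam_A d_mu e^A_nu   (last lower index = derivative index)
Gamma = [[[sum(e_inv[lam, A] * sp.diff(e[A, nu], coords[mu]) for A in range(dim))
           for mu in range(dim)] for nu in range(dim)] for lam in range(dim)]

# Torsion tensor T^lam_{mu nu} = Gamma^lam_{nu mu} - Gamma^lam_{mu nu}
T = [[[sp.simplify(Gamma[lam][nu][mu] - Gamma[lam][mu][nu])
       for nu in range(dim)] for mu in range(dim)] for lam in range(dim)]

# Curvature R^lam_{sig mu nu} of the same connection; it is pure gauge, so it must vanish
def curv(lam, sig, mu, nu):
    val = sp.diff(Gamma[lam][sig][nu], coords[mu]) - sp.diff(Gamma[lam][sig][mu], coords[nu])
    val += sum(Gamma[lam][rho][mu] * Gamma[rho][sig][nu]
               - Gamma[lam][rho][nu] * Gamma[rho][sig][mu] for rho in range(dim))
    return sp.simplify(val)

flat = all(curv(i, j, k, m) == 0 for i in range(dim) for j in range(dim)
           for k in range(dim) for m in range(dim))
torsionful = any(Tc != 0 for plane in T for row in plane for Tc in row)
```

The same check goes through in any dimension and for any invertible frame; keeping a nonzero flat $\omega^A{}_{B\mu}$ would simply add the second term of (\ref{Weitzenb1}).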
We ``eliminate'' $g$ as an independent variable via \begin{equation} g=\eta_{AB} \theta^A \otimes \theta^B, \qquad \eta_{AB}={\rm diag}(-1,+1,+1,+1), \end{equation} where $\theta^A$ is the orthonormal (co-)frame. The torsion 2-form and curvature 2-form are related to the orthonormal frame and the connection 1-form by \begin{equation} T^A = d \theta^A + \omega^A_{~B} \wedge \theta^B = \frac{1}{2}T^A_{~\mu\nu}dx^\mu \wedge dx^\nu, ~~\text{and} \end{equation} \begin{equation} R^A_{~B} = d\omega^A_{~B} + \omega^A_{~C} \wedge \omega^C_{~B} = \frac{1}{2} R^A_{~B\mu\nu}dx^\mu \wedge dx^\nu. \end{equation} We also impose metric compatibility as an \emph{a priori} constraint: \begin{equation}0\equiv Dg_{AB}:=dg_{AB}-\omega_{AB}-\omega_{BA}=-2\omega_{(AB)}.\end{equation} Then $\omega^{AB}$ and $R^{AB}$ are \emph{antisymmetric}: $\omega^{AB}\equiv\omega^{[AB]}$, $R^{AB}\equiv R^{[AB]}$. Working only with covariant objects, the variation of the Lagrangian density is \begin{equation} \delta \mathcal{L} = \delta \theta^A \wedge \frac{\partial \mathcal{L}}{\partial \theta^A} + \delta T^A \wedge \frac{\partial \mathcal{L}}{\partial T^A} + \delta R^A_{~B} \wedge \frac{\partial \mathcal{L}}{\partial R^A_{~B}}+\delta\lambda^A_{~B}\wedge \frac{\partial \mathcal{L}}{\partial \lambda^A_{~B}}, \end{equation} where \begin{equation} \delta T^A = D\delta \theta^A + \delta \omega^A_{~B} \wedge \theta^B, ~~\text{and} \end{equation} \begin{equation} \delta R^A_{~B} = D \delta \omega^A_{~B}. 
\end{equation} Hence \begin{flalign} \delta \mathcal{L} =& d \left(\delta \theta^A \wedge\frac{\partial \mathcal{L}}{\partial T^A} + \delta \omega^A_{~B} \wedge\frac{\partial \mathcal{L}}{\partial R^A_{~B}}\right) \notag\\ &+\delta \theta^A \wedge \epsilon_A + \delta \omega^A_{~B} \wedge \epsilon_{A}^{~B} +\delta\lambda^A_{~B}\wedge \frac{\partial \mathcal{L}}{\partial \lambda^A_{~B}}, \label{variation} \end{flalign} where we introduced symbolic names for the Euler-Lagrange variational expressions: \begin{equation}\label{EA} \epsilon_A := \frac{\partial \mathcal{L}}{\partial \theta^A} + D\frac{\partial \mathcal{L}}{\partial T^A}, ~~\text{and} \end{equation} \begin{equation} \epsilon_{AB} := \theta_{[B} \wedge \frac{\partial \mathcal{L}}{\partial T^{A]}} + D \frac{\partial \mathcal{L}}{\partial R^{AB}}.\label{EAB} \end{equation} Since $\omega^{AB}$ is antisymmetric, $\epsilon_{AB}$ is also: $\epsilon_{AB}\equiv\epsilon_{[AB]}$. Let us consider a local frame gauge transformation $\delta \theta^A = l^A_{~B} \theta^B$, where $l^A_{~B}$, being an infinitesimal Lorentz transformation, is antisymmetric. Consequently, we have $\delta \omega^A_{~B}=-D l^A_{~B}$. 
Since $\delta \mathcal{L}$ is a scalar under this transformation, we have, from Eq.(\ref{variation}), the following identity: \begin{flalign} 0 \equiv & ~d\left(l^A_{~B} \theta^B \wedge \frac{\partial \mathcal{L}}{\partial T^A} - Dl^A_{~B} \wedge \frac{\partial \mathcal{L}}{\partial R^A_{~B}} \right) + l^A_{~B} \theta^B \wedge \epsilon_A \notag\\&- Dl^A_{~B} \wedge \epsilon_A^{~B}+(l^A_{~C}\lambda^C_{~B}-l^C_{~B}\lambda^A_{~C})\wedge \frac{\partial \mathcal{L}}{\partial \lambda^A_{~B}}.\label{ident} \end{flalign} Since \begin{equation} Dl^A_{~B} \wedge \frac{\partial \mathcal{L}}{\partial R^A_{~B}} = -d\left(l^A_{~B} \frac{\partial\mathcal{L}}{\partial R^A_{~B}}\right) + l^A_{~B} D\frac{\partial \mathcal{L}}{\partial R^A_{~B}}, \end{equation} and $d^2 = 0$, we get from Eq.(\ref{EAB}) and Eq.(\ref{ident}) \begin{flalign} 0 \equiv &d\left(l^{AB} \epsilon_{AB}\right) + l^{AB} \theta_B \wedge \epsilon_A - Dl^{AB}\wedge \epsilon_{AB} \notag\\ &+l^{AB}\left[\lambda_{BC}\wedge \frac{\partial \mathcal{L}}{\partial \lambda^A_{~C}} -\lambda^C_{~A}\wedge\frac{\partial \mathcal{L}}{\partial \lambda^{CB}}\right]. \end{flalign} This yields the Noether differential identity: \begin{equation}\label{Noether} D\epsilon_{A B} + \theta_{[B}\wedge \epsilon_{A]} -2\lambda_{C[B}\wedge \frac{\partial \mathcal{L}}{\partial \lambda^{A]}_{~C}} \equiv 0, \end{equation} which does not depend on any of the field equations being satisfied. \section{Counting the Components} \label{dof} Now let us consider a special case, the teleparallel Lagrangian: \begin{equation}\label{La} \mathcal{L}_{\|}(\theta^A, T^A) + \lambda^A_{~B} \wedge R^B_{~A}. \end{equation} The concern is the following: do the field equations obtained from Eq.(\ref{La}) contain the \emph{same} amount of physical information --- no more and no less --- as the equations obtained from the coframe Lagrangian $\mathcal{L}_{\|}(\theta, d\theta)$, or equivalently the frame Lagrangian $\mathcal{L}_{\|}(e,\partial e)$? 
Note that the variation of the Lagrangian in Eq.(\ref{La}) involves variation with respect to the frame, the connection and the multiplier, whereas the coframe Lagrangian involves only variation with respect to the frame. From the first Lagrangian, the multiplier variation would enforce the vanishing of curvature, which leads to a preferred frame with a vanishing connection; then the frame variation reduces to that obtained from the pure frame Lagrangian. So the remaining technical issue is whether the equation obtained by variation with respect to the connection could have any ``physical'' content beyond determining the multiplier. To put it differently: does the connection or the multiplier contain any dynamics? As mentioned, the variation with respect to $\lambda^A_{~B}$ implies flatness $R^A_{~B}=0$. Then there exists a frame in which $D=d$, in which we no longer have local gauge freedom. However, while it is generally believed that imposing flatness via a Lagrange multiplier is equivalent to imposing flatness \emph{a priori}, it is not obvious how the counting of independent components works out to match so well, especially since in the Lagrange multiplier approach there are gauge degrees of freedom. This is what we shall elaborate on now. The argument is just as easy, and actually clearer, in $n$-dimensions. Then $R^A_{~B}$ is a 2-form while $\lambda^A_{~B}$ is an $(n-2)$-form. It is easy to see that the Lagrange multiplier has some gauge freedom. Consider the transformation \begin{equation} \lambda^A_{~B} \to \lambda^A_{~B} + D\chi^A_{~B}, \label{multi_gauge} \end{equation} where $\chi$ is an $(n-3)$-form. Under such a transformation the Lagrangian in Eq.(\ref{La}) picks up an additional term \begin{equation} D\chi^A_{~B} \wedge R^B_{~A} = d(\chi^A_{~B} \wedge R^B_{~A}) +(-1)^n \chi^A_{~B} \wedge DR^B_{~A}. \end{equation} By the Bianchi identity, $DR^B_{~A}=0$. 
Therefore only a total derivative term is added to the Lagrangian in Eq.(\ref{La}), and thus the equations of motion are invariant under this gauge transformation. In other words, this gauge freedom means that $\lambda^{AB}$ cannot be determined completely, but only up to total differential terms. From Eq.(\ref{EAB}) and Eq.(\ref{La}) we find the explicit form for the expression obtained by variation of the connection one-form: \begin{equation}\label{eab} \epsilon_{AB} = \theta_{[B} \wedge \frac{\partial \mathcal{L}}{\partial T^{A]}} - D\lambda_{AB}=0. \end{equation} This is the only dynamical equation that contains the Lagrange multiplier, and it is indeed invariant under the multiplier gauge transformation~(\ref{multi_gauge}) since, schematically, $D^2 \chi \sim R\wedge \chi = 0$. Our aim is to show that relation Eq.(\ref{eab}) serves \emph{only} to determine the multiplier (as much as it can be determined), and that it \emph{has no other extra dynamical content} independent of (\ref{EA}). Let us keep track of the number of independent components. Let $n$ be the spacetime dimension, and $N=\binom{n}{2}=n(n-1)/2$ the dimension of the orthonormal frame gauge group ${\text{SO}}(1, n-1)$.\footnote{Here we are considering the metric compatible case using orthonormal frames. In other teleparallel theories, for the frame gauge group ${\text{GL}}(n)$ one would have $N=n^2$, and for ${\text{SL}}(n)$ one would have $N=n^2-1$.} The number of independent components of the connection 1-form is $Nn$, that of $R^A_{~B}$ and $\lambda^A_{~B}$ is $Nn(n-1)/2$, and that of $\epsilon^{AB}$ is $Nn$. 
Finally, the multiplier gauge freedom $D\chi^{A}_{~B}$ has $N(n-1)(n-2)/2$ independent components.\footnote{According to the Hodge-Kodaira-de Rham generalization of the Helmholtz decomposition (see, e.g.,~\cite{AP2, Frankel}), locally a differential form can be decomposed into a sum of terms which are in the kernel and the co-kernel of the differential operator $d$, and can be expressed as the differential and codifferential of certain potentials. For a $k$-form in $n$-dimensions, the sizes of these terms are determined by the binomial coefficients $\binom{n}{k}=\binom{n-1}{k-1}+\binom{n-1}{k}$.} Thus the field equations can determine \begin{equation} \frac{Nn(n-1)}{2} - \frac{N(n-1)(n-2)}{2} = N(n-1) \end{equation} components of $\lambda$. This is the total number of multipliers minus their inherent gauge freedom. It is effectively the number of components of Eq.(\ref{eab}) that serve the purpose of determining the Lagrange multiplier value. Since we are not actually interested in the values of the multipliers, \emph{this is the content of Eq.(\ref{eab}) that can be neglected}. There are thus $Nn-N(n-1) = N$ components of Eq.(\ref{eab}) that can contain ``physical information'', since they are not involved in determining the multipliers. However, exactly this many components are \emph{automatically} satisfied by virtue of the Noether identity in Eq.(\ref{Noether}), the teleparallel condition imposed by the multiplier, and the frame dynamical equation~(\ref{EA}). Indeed, we observe that in the Noether identity it is not $\epsilon^{AB}$ but $D \epsilon^{AB}$ which actually appears. Due to the differential operator $D$, the identity contains, schematically, $D^2 \lambda \sim R \wedge \lambda \equiv 0$. That is, $D\epsilon^{AB}$ contains the part of $\epsilon^{AB}$ which is entirely independent of $\lambda^{AB}$. 
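The bookkeeping above is easy to tabulate. A minimal Python sketch (the function and its name are ours) reproduces the counts for the metric-compatible case with gauge group ${\text{SO}}(1,n-1)$, using the binomial coefficients from the footnote:

```python
from math import comb

def component_counts(n):
    """Component counting for the Lagrange multiplier formulation in n dimensions."""
    N = comb(n, 2)                        # dim SO(1, n-1) = n(n-1)/2
    connection = N * n                    # omega^A_B: N Lie-algebra values x 1-form
    multiplier = N * comb(n, n - 2)       # lambda^A_B: (n-2)-form, N n(n-1)/2 components
    epsilon    = N * comb(n, n - 1)       # epsilon_AB: (n-1)-form, N n components
    gauge      = N * comb(n - 1, n - 3)   # exact pieces D chi: N (n-1)(n-2)/2 components
    determined = multiplier - gauge       # components of eq. (eab) that fix the multiplier
    physical   = epsilon - determined     # leftover components; should equal N
    return {'N': N, 'connection': connection, 'multiplier': multiplier,
            'epsilon': epsilon, 'gauge': gauge,
            'determined': determined, 'physical': physical}

# In n = 4 dimensions: N = 6, 36 multiplier components, 18 of which are pure gauge,
# so 18 are determined and 24 - 18 = 6 components are left to carry physical content
counts4 = component_counts(4)
```

For $n=4$ this yields precisely the $N=6$ components that are then seen to be automatically satisfied via the Noether identity.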
This is an indication that the part of Eq.(\ref{eab}) that is independent of $\lambda^{AB}$ is automatically a consequence of the frame dynamical equation (\ref{EA}) and the identity Eq.(\ref{Noether}) --- \emph{it has no independent information}. Let us say this in another way. Here are four physically equivalent sets of effective dynamical equations: \begin{flalign} \omega^{AB}=0,& \quad E^{AB}=0, \\ R^{AB}=0,& \quad E^{AB}=0, \\ R^{AB}=0,& \quad E^{(AB)}=0, \quad D\epsilon^{AB}=0, \\ R^{AB}=0,& \quad E^{(AB)}=0, \quad \epsilon^{AB}=0, \end{flalign} where the frame dynamical equation has been written as a 4-form \begin{equation}E^{AB}:=\theta^A\wedge\epsilon^B.\end{equation} Here $E^{(AB)}$ denotes its symmetric part and $E^{[AB]}$ its antisymmetric part. Effectively, $E^{[AB]},~D\epsilon^{AB}$ and $\epsilon^{AB}$ (\ref{eab}) contain equivalent physical information. The key is the Noether differential identity, Eq.(\ref{Noether}), which guarantees that \begin{equation} E^{[AB]}=0 \ \Longleftrightarrow \ D\epsilon^{AB}=0. \end{equation} In the frame with vanishing connection the second of these equations says that $\epsilon^{AB}$ is closed; then (at least locally) it is exact, which means one can find a multiplier in (\ref{eab}) that makes $\epsilon^{AB}$ vanish. Thus the Lagrange multiplier approach yields the same number of independent components as the usual approach in which the curvature-free condition is imposed \emph{a priori}. There still remains a \emph{slight} possibility that the first term on the right hand side of Eq.(\ref{eab}) might, in $n$-dimensions, contain a closed but not exact $(n-1)$-form. Then it might include an extra global condition for the connection-multiplier representation that is not required in the coframe version. 
\emph{To us this seems unlikely, but we have not yet been able to rule it out for spaces that have a non-vanishing $(n-1)$-cohomology}\footnote{Future works considering explicit examples of 4-dimensional spacetimes with nontrivial 3-cohomology might shed some light on this issue. We propose to study class A Bianchi models (types I, II, VIII, IX), which can all be compactified. In particular, the Bianchi type I model can have a 3-torus topology, and type IX can have an $S^3$ topology. Both of these spacetimes have spatial volume 3-forms that are closed but not exact.}. Thus \emph{generically} a teleparallel theory has effectively $n^2$ physical dynamical equations $0=E^{AB}=E^{(AB)}+E^{[AB]}$. Only in the special case of the teleparallel equivalent of GR does the anti-symmetric part vanish identically: $E^{[AB]}\equiv0$, leaving $n(n+1)/2$ dynamical equations. It is important to emphasize at this point that, for TEGR, in the connection-multiplier representation there are \emph{two} local Lorentz symmetries: \begin{itemize} \item[(1)] Transforming the frame along with the standard induced connection transformation leaves the action invariant. \item[(2)] Transforming the frame while keeping the connection fixed changes the action by a total differential. \end{itemize} Transformation (1) applies to all teleparallel theories, whereas (2) is obviously no longer true in the case of a general teleparallel theory, such as $f(T)$ gravity. \section{Conclusion} One major advantage of the Lagrange multiplier formulation is that it permits us to use \emph{any} orthonormal frame that corresponds to a metric, since it manifestly preserves local Lorentz invariance. This avoids the important and practical problem of identifying the correct frame compatible with the zero-connection in the usual approach. 
Although it has long been argued that this approach is equivalent to the usual frame approach which sets the connection to zero \emph{a priori}, we found that there are some subtleties in the counting of the number of components in the Lagrange multiplier approach, which until now have not been discussed in detail. In this work we showed that indeed the number of physically significant components for the equations in the Lagrange multiplier formulation agrees with that obtained using the frame approach. Consequently, a manifestly local Lorentz invariant $f(T)$ theory cannot be expected to be free of the pathologies that were previously found to plague $f(T)$ gravity formulated in the usual pure frame approach. Nevertheless, the Lagrange multiplier teleparallel formulation might shed some light on the properties of the extra degrees of freedom and the ``remnant symmetry'' discovered in~\cite{1412.3424} (which was further discussed in~\cite{1412.8383}). \section*{Acknowledgement} YCO acknowledges the support from the National Natural Science Foundation of China (No. 11705162), and the Natural Science Foundation of Jiangsu Province (No. BK20170479). The authors thank Manos Saridakis, Martin Kr\v{s}\v{s}\'ak, and Huan-Hsin Tseng for related discussions. YCO also thanks Martin Kr\v{s}\v{s}\'ak for his hospitality during the ``Geometric Foundations of Gravity'' conference in Tartu, Estonia, during which this work was finalized. He acknowledges the China Postdoctoral Science Foundation (grant No. 17Z102060070), which supported this travel.
\section{Introduction} The space of states lies at the heart of the kinematic information about a quantum system. Even in the finite dimensional case we are far from fully understanding its mathematical structures and their connections to the physics of the system. More so in infinite dimensions, i.e. in the case of quantum field theories. One essential feature of quantum states is entanglement. It plays a crucial role in quantum information theory and beyond that provides ways to characterise quantum fluctuations. For example, the entanglement of the ground state alone can help classify quantum phases and tell us about possible topological structure \cite{Kitaev:2005dm,Li:2008aa,Jiang:2012aa} or whether a system is close to criticality \cite{Amico:2007ag}. Therefore measures of entanglement of quantum states play a crucial role in describing the structure of state spaces. Another standard way to understand these structures is the development of methods to compare different states. Quickly one comes to realize that even if the microscopic realization of two states is quite different their meso- or macroscopic features might be very similar. An immediate example is given by different energy eigenstates. One can also go the opposite way. Imagine two states with macroscopically very similar features that, e.g., share the same energy. How deep do we have to dig to see the difference in these states, or, in other words, how distinguishable are they? Mathematical measures of distinguishability can attach a lot of structure to the space of states. Ideally this structure has physical significance, i.e. it helps to explain physical phenomena. For instance, distinguishability measures help to put the Eigenstate Thermalization Hypothesis \cite{srednicki1996thermal,Deutsch:1991,rigol2008thermalization} on a more quantitative footing, and, as another example, they should govern the `indistinguishability' of black hole microstates in AdS \cite{Strominger:1996sh,Strominger:1997eq}. 
Here we want to investigate some entanglement and distinguishability measures in the context of two dimensional conformal field theory. The latter are among the best understood and most studied quantum field theories: they play a crucial role in the perturbative description of string theory, and appear as fixed points of renormalization group flows, where they describe the dynamics of statistical and condensed matter systems at criticality. In some cases they can even be solved exactly \cite{Belavin:1984vu} and under certain conditions -- the case of rational theories with a finite number of primary operators -- all possible CFTs have been classified \cite{Cappelli:1986hf}. Their huge amount of symmetry allows one to explicitly compute partition and correlation functions as well as their conformal transformation rules. It is not a coincidence that all the measures we will use can be computed by particularly transformed correlation functions. We put our focus on so-called descendant states -- states excited by Virasoro generators -- on a circle of length $L$. Then we consider subsystems of size $l < L$ onto which we reduce the pure states of the full system. How to compute entanglement for this kind of construction was shown in \cite{Palmai:2014jqa,Taddia:2016dbm}. We will use similar methods to also compute distinguishability measures for these reduced density matrices. As will become clear when we introduce the methods to compute the entanglement and distinguishability measures, it is in principle possible to compute algebraic expressions for any descendant, in particular for descendants of the vacuum. In practice, the algebraic expressions become cumbersome and are easier to tackle by computer algebra programs. We use Mathematica for our computations and explicitly display important parts of our code in the appendices. The notebooks with the remaining code are openly accessible. 
The heart of the code is a function that implements a recursive algorithm to compute generic correlators of descendants. In the case of vacuum descendants it results in an analytic expression in the insertion points and the central charge of the theory. In the case of descendants of arbitrary primary states the function returns a differential operator acting on the respective primary correlator. With this tool at hand, we are able to compute, for instance, the Sandwiched R\'enyi Divergence (SRD) and the Trace Squared Distance (TSD) which have not been computed for descendant states before. In the case of the R\'enyi entropy we can expand on existing results. The outcomes for the SRD, for example, allow us to test a generalisation of the quantum null energy condition suggested in \cite{Lashkari:2018nsl}. Results that we compute for vacuum descendants are universal and, in particular, can be studied at large central charge, i.e. the regime where two dimensional conformal field theories may have a semi-classical gravitational dual in $AdS_3$. We will show results for vacuum descendant states in this limit. We will organise the paper as follows. In section \ref{sec:CFTtec} we review all the CFT techniques that we need later. In the following section \ref{sec:qmeasures} we discuss the quantum measures that we want to compute, namely the R\'enyi entanglement entropy as a measure of entanglement, and the sandwiched R\'enyi divergence and the trace square distance as measures of distinguishability between states reduced to a subsystem. In section \ref{sec:universal} we focus on results for descendants of the vacuum. These will apply to all theories with a unique vacuum and, hence, we call them universal. In particular these results can be computed explicitly up to rather high excitation. In the following section \ref{sec:nonuniversal} we show the results for descendants of generic primary states. 
These results depend on the primary correlators that are theory dependent and, hence, are non-universal. Therefore we compute results in two explicit models, namely the critical Ising model and the three-state Potts model. \section{Review of some CFT techniques}\label{sec:CFTtec} \subsection{Notation and definitions} We want to introduce a notation for the states and fields appearing in our expressions. Consider the Virasoro representation $R_p$, whose primary state has conformal dimension $\Delta = h+ \bar{h}$, with the chiral and anti-chiral conformal weights $h,\bar{h}$, and is denoted by $\ket{\Delta}$. Chiral descendant states are written as $\ket{\Delta,\{(m_i,n_i)\}} = \prod_i L_{-m_i}^{n_i}\ket{\Delta}$, with the chiral copy of the Virasoro generators $L_m$. For anti-chiral descendants one simply uses the anti-chiral copy of the Virasoro algebra. Any state in $R_p$ can be written as a linear combination of the latter states. In two-dimensional CFT the operator-state correspondence holds, where the operators are local quantum fields on the space-time of the theory. For any state $\ket{s}$ we denote the respective field as $f_{\ket{s}}$. The primary field that corresponds to the primary state $\ket{\Delta}$ is then $f_{\ket{\Delta}}$. Descendant fields are given by \begin{equation} f_{ \ket{\Delta,\{(m_i,n_i)\}}} = \prod_i \hat{L}_{-m_i}^{n_i} f_{\ket{\Delta}}\,, \end{equation} where \begin{equation}\label{eq:desfield} \hat{L}_{-m} g(w) := \oint_{\gamma_w} \frac{dz}{2\pi i} \frac{1}{(z-w)^{m-1}} T(z) g(w) \end{equation} for any field $g$; $\gamma_w$ is a closed path surrounding $w$. $\hat{L}_{-m} g(w)$ is the $m$th `expansion coefficient' in the OPE of the energy momentum tensor $T$ with the field $g$. A field's dual is the field that corresponds to the dual vector. We denote the field dual to $f_{\ket{s}}(z,\bar{z})$ by \begin{equation} f_{\bra{s}}(\bar{z},z) := \left(f_{\ket{s}}(z,\bar{z})\right)^\dagger\,. 
\end{equation} \noindent Note that it is most naturally defined on the complex plane. The duality structure of the Hilbert space is fixed by the definitions $L_{-n}^\dagger = L_{n}$ and $\bra{\Delta}\Delta'\rangle = \delta_{\Delta,\Delta'}$. This structure needs to be recovered from the two point function of the respective fields when the two points coincide, i.e. \begin{equation}\label{eq:contraint0} \bra{s}s'\rangle \equiv \lim_{z\to w}\left\langle f_{\bra{s}}(\bar{z},z) f_{\ket{s'}}(w,\bar{w}) \right\rangle\,. \end{equation} \noindent To achieve this one chooses radial quantization around the second insertion point $w$ and defines the dual field $f_{\bra{s}}(\bar{z},z)$ as the outcome of the transformation $G(z)=\frac{1}{z-w}+w$ of the field $f_{\ket{s}}(z,\bar{z})$ at the unit circle surrounding $w$. With the help of the transformation rules that we define in the following section \ref{sec:trafo} we can therefore write \begin{equation}\label{eq:DualFeld} f_{\bra{s}}(\bar{z},z) = f_{\Gamma_{G} \ket{s}}\left(\frac1{z-w}+w,\frac1{\bar{z}-\bar{w}}+\bar{w}\right)\,, \end{equation} where the action $\Gamma_G$ on the local Hilbert space takes the simple form \begin{equation} \Gamma_G = \left(-\frac{1}{(z-w)^2}\right)^{L_0}\left(-\frac{1}{(\bar{z}-\bar{w})^2}\right)^{\bar{L}_0} \exp\left(\frac{L_1}{w-z}+\frac{\bar{L}_1}{\bar{w}-\bar{z}}\right)\,. \end{equation} \noindent In what follows we will use radial quantization around the origin of the complex plane, i.e. we will choose $w=0$. Note that \eqref{eq:DualFeld} gives \eqref{eq:contraint0} up to a phase factor $(-1)^{S_p}$, where $S_p$ is the conformal spin of the primary state that $\ket{s}$ is built from. \subsection{Transformation of states and fields} \label{sec:trafo} The transformation rule for arbitrary chiral fields was first presented in \cite{Gaberdiel:1994fs}. We will, however, use the (equivalent) method introduced in \cite{frenkel2004vertex} (section 6.3). 
There is a natural action $M(G)$ of a conformal transformation $G$ on any Virasoro module and, hence, on the full space of states. For a field $f_{\ket{s}}(w)$ we need to know how the transformation acts locally around $w$ and transform the field accordingly. It works as follows: Consider a conformal transformation $G$ and choose local coordinates around the insertion point $w$ and the point $G(w)$. The induced local coordinate change can be written as $\mathfrak{g}(z) = \sum_{k=1}^\infty a_k z^k$, where $z$ are the local coordinates around $w$ that are mapped to the local coordinates $\mathfrak{g}(z)$ around $G(w)$. Now solve the equation \begin{equation} v_0 \exp\left(\sum_{j=1}^\infty v_j t^{j+1}\partial_t\right)t = \mathfrak{g}(t) \end{equation} for the coefficients $v_j$ order by order in $t$. The local action of $G$ on the module is then given by $M(G) := \exp\left(-\sum_{j=1}^\infty v_j L_j\right) v_0^{-L_0}$. The inverse, which we will rather use, is then given by \begin{equation} \Gamma := M(G)^{-1} = v_0^{L_0} \exp\left(\sum_{j=1}^\infty v_j L_j\right)\,, \end{equation} such that we can write \begin{equation} f_{\ket{s}}(G(w)) = f_{\ket{s'} = \Gamma \ket{s}}(w)\,. \end{equation} \noindent Note that for a descendant at level $k$ we only need the coefficients $v_j$ up to $j=k$. A Mathematica code to obtain the relation between the coefficients $v_j$ and $a_k$ is given in appendix~\ref{app:matv}. \subsection{Computing correlation functions of descendant fields on the plane} We will be interested in computing correlation functions \begin{equation} \langle \prod_{i=1}^N f_{\ket{s_i}}(z_i) \rangle \, , \end{equation} where $\ket{s_i}$ are some descendant states. To get a handle on them we use Ward identities in a particular way. Therefore, consider a meromorphic function $\rho(z)$ that has singularities at most at $z\in \left\{z_i\right\}\cup \{0,\infty\} $, i.e. at the insertion points and at the singular points of the energy momentum tensor. 
Let us make the particular choice \begin{equation} \rho(z) = \prod_{i=1}^N (z-z_i)^{a_i} \end{equation} for $a_i\in\mathbb{Z}$, which is in particular regular at $0$. Now, consider the integral identity \begin{equation} \sum_{i=1}^N \oint_{\gamma_{z_i}} \frac{dz}{2\pi i} \rho(z) \left\langle T(z) g_i(z_i) \prod_{j\neq i} g_j(z_j) \right\rangle = - \oint_{\gamma_\infty} \frac{dz}{2\pi i} \rho(z) \left\langle T(z) \prod_{j=1}^N g_j(z_j) \right\rangle\,, \end{equation} where $g_j$ are arbitrary fields, e.g. descendant fields. The latter identity simply follows from deforming the integral contour accordingly. The r.h.s. vanishes for $\sum_{i=1}^N a_i \le 2$. Next, we consider the functions \begin{equation} \rho_i(z) := \prod_{j\neq i} (z-z_j)^{a_j} = \frac{\rho(z)}{(z-z_i)^{a_i}} \end{equation} for which we need the expansion around $z_i$, \begin{equation} \rho_i(z) \equiv \sum_{n=0}^\infty \rho_i^{(n)} \, (z-z_i)^n\,. \end{equation} \noindent Note that the expansion coefficients $\rho_i^{(n)}$ are some rational expressions that depend on all $z_j\neq z_i$ and $a_j$. Now, using the definition of $\hat{L}_m$, \eqref{eq:desfield}, and the latter expansion we obtain \begin{equation} \sum_{i=1}^N \sum_{n=0}^\infty \rho_i^{(n)} \left\langle\left(\hat{L}_{a_i+n-1}g_i(z_i)\right) \prod_{j\neq i} g_j(z_j)\right\rangle = 0\,\label{eq:WardIdNpt} \end{equation} for $\sum a_i\leq 2$. Note that, even if not written explicitly, the sums over $n$ always terminate for descendant fields $g_i$. Note further that these relations among correlation functions depend on the choice of $a_i$ but the correlators that can be computed from these relations are unique. 
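The expansion coefficients $\rho_i^{(n)}$ can also be generated by a direct Taylor expansion, which gives a useful cross-check of any closed-form expression for them. As an illustration (a sketch of our own), for the single-singularity choice $\rho_j(z)=(z-z_i)^{1-m}$ the coefficients take the binomial form $(-1)^{n}\binom{n+m-2}{n}(z_j-z_i)^{1-m-n}$, which the following SymPy snippet verifies for a few values of $m$:

```python
import sympy as sp

z_i, z_j, w = sp.symbols('z_i z_j w')  # w = z - z_j is the local expansion variable

for m in (2, 3, 4):
    nmax = 5
    # rho_j(z) = (z - z_i)^(1-m), expanded around z = z_j
    expansion = sp.series((w + z_j - z_i)**(1 - m), w, 0, nmax).removeO()
    for n in range(nmax):
        # expected closed form: (-1)^n binom(n+m-2, n) (z_j - z_i)^(1-m-n)
        closed = (-1)**n * sp.binomial(n + m - 2, n) * (z_j - z_i)**(1 - m - n)
        assert sp.simplify(expansion.coeff(w, n) - closed) == 0
```

The same expansion with generic exponents $a_j$ generates the coefficients entering the Ward identity above.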
\subsubsection{Example for particular choices and explicit recursive formula} One immediate choice is $a_i = 1-m$ and $a_{j\neq i}=0$ which gives the relation \begin{align} \left\langle \left(\hat{L}_{-m}g_i(z_i)\right) \prod_{j\neq i}g_j(z_j)\right\rangle = - \sum_{j\neq i} \sum_{n=0}^{\text{lvl}(g_j)+1} \rho_j^{(n)} \left\langle \left(\hat{L}_{n-1} g_j(z_j)\right) \prod_{k\neq j}g_k(z_k) \right\rangle \label{eq:rec1} \end{align} with \begin{equation} \rho_j^{(n)} = (-1)^{n}\binom{n+m-2}{n} (z_j-z_i)^{1-m-n}\,. \end{equation} \noindent For $m>1$ we see that the total level of each correlator on the r.h.s., i.e. the sum over all levels of fields appearing in the correlation functions, is lower than the one on the l.h.s. Hence, we can express correlation functions of higher total level in terms of correlators of lower total level. One way of computing correlation functions of descendants is using the above formula recursively until only $L_{-1}$ generators are left. These simply act as derivative operators on the respective primary. The Mathematica code that uses the above equation recursively and computes arbitrary correlation functions of vacuum descendants is given in appendix \ref{app:VacDesCorr}. It produces an algebraic expression in the insertion points and the central charge $c$. The Mathematica code to compute correlation functions for descendants of generic primary fields is given in appendix \ref{app:PrimDesCorr}. It produces a derivative operator that acts on the respective primary correlator, which in general is theory dependent. \section{Review of some quantum measures in CFT}\label{sec:qmeasures} We want to consider an isolated quantum system living on a circle of length $L$ whose (low-energy) physics is governed by a (1+1)-dimensional effective field theory. At some critical value of its couplings the theory becomes conformal. This is what we want to assume. 
Then, the system is in some pure state of a (1+1)d CFT, associated with a density matrix $\rho = \ket{s}\bra{s}$. Let us further consider a spatial bipartition into a region $A$ of size $l<L$ and its complement $\overline{A}$. Assume a situation where one has no access to the complement, i.e. all measurements are restricted to the subregion $A$. Our ignorance of the complement means that the state in the region we have access to is described by the reduced density matrix \begin{equation} \rho_A = \text{Tr}_{\overline{A}}\rho\,, \end{equation} where $\text{Tr}_{\overline{A}}$ is the partial trace over the degrees of freedom of the complement. In fact, any physically realistic observer can only access a restricted amount of information through measurements, which in the present case is modeled by restricting measurements to the spatial region $A$. Our focus of interest lies in reduced density matrices that originate from descendant states of the full system. In particular, we want to study their entanglement and measures of distinguishability between them. \subsection{Entanglement measure: R\'enyi entropy} \label{sec:Renyi} The $n$th R\'enyi entropy \cite{renyi2012probability,nielsen_chuang_2010} is defined as \begin{equation} S_n(A) = \frac{1}{1-n} \log \text{Tr}_A \rho_A^n\,. \end{equation} \noindent For $n\to 1$ it converges to the (von Neumann) entanglement entropy $S(A) = -\text{Tr} \rho_A \log\rho_A$, which is the most common entanglement measure \cite{nielsen_chuang_2010}. However, in particular in field theories, there exist powerful analytical tools that make it much easier to compute R\'enyi entropies for $n>1$ than the entanglement entropy. Additionally, many key properties of the entanglement entropy, such as the proportionality of ground state entanglement to the central charge in critical systems and the area law of gapped states, hold for R\'enyi entropies too.
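For intuition, the definition and its $n\to1$ limit can be checked in a finite-dimensional toy system; the following numpy sketch (an illustration only, not a CFT computation) uses the reduced density matrix of one qubit in an entangled two-qubit state:

```python
import numpy as np

def renyi_entropy(rho, n):
    """n-th Renyi entropy S_n = log(Tr rho^n) / (1 - n) of a density matrix."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return np.log(np.sum(evals**n)) / (1.0 - n)

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return -np.sum(evals * np.log(evals))

# reduced density matrix of one qubit in the state cos(t)|00> + sin(t)|11>
t = 0.3
rho_A = np.diag([np.cos(t)**2, np.sin(t)**2])

# S_n approaches the von Neumann entropy as n -> 1
assert abs(renyi_entropy(rho_A, 1.0001) - von_neumann_entropy(rho_A)) < 1e-3
# Renyi entropies are non-increasing in n
assert renyi_entropy(rho_A, 2) <= renyi_entropy(rho_A, 1.5) + 1e-12
```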
In principle, the knowledge of the R\'enyi entropy for all $n\in \mathbb{N}$ allows one to determine all eigenvalues of the reduced density matrix $\rho_A$. In the present case, the full system can be described by a CFT on the Euclidean space-time manifold of an infinite cylinder, for which we choose complex coordinates $u = x + i \tau$ with $\tau\in\mathbb{R}$ and $x + L \equiv x \in \left(-\frac L2 ,\frac L2\right]$. The variable $\tau$ is regarded as the time coordinate and $x$ is the spatial coordinate. As subsystem $A$ we choose the spatial interval $\left(-\frac{l}2,\frac{l}{2}\right)$\,. In 2d CFT, the trace over the $n$th power of the reduced density matrix $\rho_A = \text{Tr}_{\overline{A}}\ket{s}\bra{s}$ is equivalent to a $2n$-point function on the so-called \textit{replica manifold}, which is given by $n$ copies of the cylinder glued together cyclically across branch cuts along the subsystem $A$ at $\tau =0$ \cite{Holzhey:1994we,Calabrese:2009qy}. The exponential map $z(u) = \exp\left(2\pi i u/L\right)$ maps the latter manifold to the $n$-sheeted plane $\Sigma_n$, where the branch cut now extends between $\exp\left(\pm i \pi \frac{l}{L}\right)$\,. The $2n$ fields are those that correspond to the state $\ket{s}$ and its dual $\bra{s}$, where one of each is inserted at the origin of each sheet: \begin{align} \text{Tr}_A \rho_A^n &= \mathcal{N}_n \left\langle \prod_{k=1}^n f_{\bra{s}}(0_k)f_{\ket{s}}(0_k)\right\rangle_{\Sigma_n}\\ &= \mathcal{N}_n \left\langle \prod_{k=1}^n f_{\Gamma_{-1/z}\ket{s}}(\infty_k)f_{\ket{s}}(0_k)\right\rangle_{\Sigma_n}\,. \end{align} \noindent The constant $\mathcal{N}_n = Z(\Sigma_n)/Z(\mathbb{C})^n = \left(\frac{L}{\pi a} \sin\left(\frac{\pi l}{L}\right)\right)^{\frac{c}{3} \left(n-\frac{1}{n}\right)}$, $Z$ being the partition function on the respective manifold, ensures the normalization $\text{Tr}_A \rho_A =1$, with some UV regulator $a$ (for example a lattice spacing).
In the second line we use the definition of the dual state. One way to compute the above correlation function is to use a uniformization map from $\Sigma_n$ to the complex plane. It is given by composing a M\"obius transformation with the $n$th root, \begin{equation}\label{eq:uniformization} w(z) = \left(\frac{z e^{ -i\pi \frac{l}{L}} - 1}{z - e^{-i\pi\frac{l}{L}} }\right)^{\frac1n}\,. \end{equation} \noindent The $2n$ fields are mapped to the insertion points \begin{align} w(0_k) &= \exp\left(\frac{i \pi l}{ n L}+\frac{2\pi i(k-1)}{n} \right)\label{eq:InsPoints}\\ w(\infty_k) &= \exp\left(-\frac{i \pi l}{ n L}+\frac{2\pi i (k-1)}{n}\right)\nonumber \end{align} on the unit circle, and the fields have to transform as described in section \ref{sec:trafo}. The change of local coordinates is given in appendix \ref{app:uniformization}. The local action is denoted by $\Gamma_{w(z)} \equiv \Gamma_{k,l}$ and for the dual fields we get $\Gamma_{w(1/z)} = \Gamma_{w(z)} \Gamma_{1/z} \equiv \Gamma_{k,-l}$. Putting everything together, we see that computing the $n$th R\'enyi entropy is basically equivalent to computing a $2n$-point function of particularly transformed fields: \begin{align}\label{eq:RFE} e^{(1-n)S_n(A)} = \text{Tr}_A \rho_A^n \equiv \mathcal{N}_n \left\langle \prod_{k=1}^n f_{\Gamma_{k,l} \ket{s}}\left(w(0_k)\right)f_{\Gamma_{k,-l} \ket{s}}\left(w(\infty_k)\right) \right\rangle_{\mathbb{C}}=: \mathcal{N}_n F_{\ket{s}}^{(n)} \,. \end{align} \noindent See also \cite{Palmai:2014jqa,Taddia:2016dbm} for derivations of the latter formula. Other computations of the entanglement entropy of excited states (not necessarily descendants) can also be found in \cite{Alcaraz:2011tn,Berganza:2011mh,Mosaffa:2012mz,Bhattacharya:2012mi,Taddia_2013,Caputa:2014vaa,Asplund:2014coa,Nozaki:2014uaa,Caputa:2014eta,Zhang:2020ouz,Zhang:2020txb}.
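The insertion points \eqref{eq:InsPoints} can be checked numerically from \eqref{eq:uniformization}; a short Python sketch (the principal branch of the $n$th root gives the $k=1$ sheet, the other sheets correspond to the remaining branches):

```python
import numpy as np

def w(z, x, n):
    """Uniformization map, x = l/L, principal branch of the n-th root."""
    a = np.exp(-1j * np.pi * x)
    return ((z * a - 1) / (z - a))**(1.0 / n)

x, n = 0.3, 2
# the origin of the first sheet is mapped to exp(i pi l / (n L)), k = 1
assert abs(w(0.0, x, n) - np.exp(1j * np.pi * x / n)) < 1e-12
# infinity (approximated here by a large |z|) is mapped to exp(-i pi l / (n L))
assert abs(w(1e12, x, n) - np.exp(-1j * np.pi * x / n)) < 1e-6
# the insertion points lie on the unit circle
assert abs(abs(w(0.0, x, n)) - 1.0) < 1e-12
```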
\subsection{Distance measures} Distance and other similarity measures between density matrices provide quantitative methods to evaluate how distinguishable they are, where distinguishability in particular refers to the outcome of generic measurements in the different states. There is no single best measure, and not even agreement upon criteria to evaluate different distance measures. Most of them are designed such that they provide the space of (not necessarily pure) states with some additional structure that ideally allows one to draw physically relevant conclusions about the system under consideration. In the case of reduced density matrices, distance measures quantify how distinguishable the matrices are by measurements confined to the subregion~$A$. We want to consider two such measures for reduced density matrices in two-dimensional CFT. Let us denote the reduced density matrices as $\rho_i = \text{Tr}_{\overline{A}} \ket{s_i}\bra{s_i}$, with $\rho_0 \equiv \text{Tr}_{\overline{A}} \ket{0}\bra{0}$ the reduced density matrix of the vacuum. \subsubsection{Relative entropy} The relative entropy between two reduced density matrices $\rho_{1}$ and $\rho_{2}$ is given by \begin{equation}\label{eq:RelEntropy} S(\rho_{1},\rho_{2}) = \text{Tr} ( \rho_{1} \log \rho_{1}) - \text{Tr} ( \rho_{1} \log \rho_{2}) \,. \end{equation} \noindent It is free from UV divergences, positive definite, and one of the most commonly used distance measures in quantum information, in particular because several other important quantum information quantities are special cases of it, e.g. the quantum mutual information and the quantum conditional entropy. The relative entropy has also proven useful in high-energy applications, e.g. when coupling theories to (semiclassical) gravity. It allows a precise formulation of the Bekenstein bound \cite{Casini_2008}, a proof of the generalized second law \cite{Wall_2010,Wall:2011hj} and of the quantum Bousso bound \cite{Bousso:2014sda,Bousso:2014uxa}.
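As a concrete toy illustration of \eqref{eq:RelEntropy} and its basic properties, a hedged numpy/scipy sketch for finite-dimensional density matrices (not a CFT computation):

```python
import numpy as np
from scipy.linalg import logm

def relative_entropy(rho1, rho2):
    """S(rho1, rho2) = Tr(rho1 log rho1) - Tr(rho1 log rho2)."""
    return np.trace(rho1 @ (logm(rho1) - logm(rho2))).real

# two full-rank qubit density matrices
rho1 = np.array([[0.7, 0.1], [0.1, 0.3]])
rho2 = np.array([[0.5, 0.0], [0.0, 0.5]])

assert abs(relative_entropy(rho1, rho1)) < 1e-10   # vanishes for identical states
assert relative_entropy(rho1, rho2) > 0            # positive for distinct states
# note the asymmetry: S(rho1, rho2) != S(rho2, rho1) in general, so it is not a metric
```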
It also appears in the context of holography, where it can be used to formulate important bulk energy conditions (see e.g. \cite{Lin:2014hva,Lashkari:2014kda,Lashkari:2015hha}). However, as in the case of the entanglement entropy, there exist no direct analytical tools to compute the relative entropy in a generic two-dimensional conformal field theory. There exist several R\'enyi-type generalisations (see e.g. \cite{Lashkari:2014yva,Lashkari:2015dia}) that are more straightforward to compute. We here want to focus on a quite common one called the Sandwiched R\'enyi Divergence. \iffalse Again, the replica method comes to help. It is possible to define the limiting process \cite{Lashkari:2014yva,Lashkari:2015dia} \begin{equation} S(\rho_{1},\rho_{2}) = \lim_{n\to1} S_n (\rho_{1},\rho_{2})\,, \end{equation} with the R\'enyi type quantity \begin{align}\label{eq:RRE} S_n (\rho_{1},\rho_{2}) &= \frac{1}{1-n} \log \frac{\text{Tr}\rho_{1}^n}{\text{Tr}\rho_{1} \rho_{2}^{n-1}} \\ &= \frac{1}{1-n} \log \frac{\left\langle \prod\limits_{k=1}^n f_{\bra{s_1}}(0_k)f_{\ket{s_1}}(0_k)\right\rangle_{\Sigma_n}}{\left\langle f_{\bra{s_1}}(0_1)f_{\ket{s_1}}(0_1)\prod\limits_{k=2}^n f_{\bra{s_2}}(0_k)f_{\ket{s_2}}(0_k)\right\rangle_{\Sigma_n}}\,, \end{align} where the second line follows by the same logic as in the case of the R\'enyi entropy.
Using the uniformization map \eqref{eq:uniformization} we can express the R\'enyi relative entropy in terms of $2n$-point correlation functions on the plane, \begin{align} e^{(1-n) S_n(\rho_{1},\rho_{2})} = \frac{\left\langle \prod\limits_{k=1}^n f_{\Gamma_{k,l} \ket{s_1}}\left(w(0_k)\right)f_{\Gamma_{k,-l} \ket{s_1}}\left(w(\infty_k)\right) \right\rangle_{\mathbb{C}}}{\left\langle f_{\Gamma_{1,l} \ket{s_1}}\!\left(w(0_1)\right)f_{\Gamma_{1,-l} \ket{s_1}}\!\left(w(\infty_1)\right)\prod\limits_{k=2}^n f_{\Gamma_{k,l} \ket{s_2}}\!\left(w(0_k)\right)f_{\Gamma_{k,-l} \ket{s_2}}\!\left(w(\infty_k)\right) \right\rangle_{\mathbb{C}}} \end{align} with the insertion points as given in $\eqref{eq:InsPoints}$. \fi \subsubsection*{Sandwiched R\'enyi divergence}\label{sec:SRD} The Sandwiched R\'enyi Divergence (SRD) between two density matrices $\rho_1$ and $\rho_2$ is given by \begin{equation} \mathcal{S}_n(\rho_{1},\rho_{2}) = \frac{1}{n-1} \log \text{Tr} \left(\rho_1^{\frac{1-n}{2n}} \rho_2 \rho_1^{\frac{1-n}{2n}}\right)^n\,.\label{eq:SRD} \end{equation} \noindent It is a possible one-parameter generalization of the relative entropy \eqref{eq:RelEntropy}, with the parameter $n \in[\frac12,\infty)$ and $S(\rho_{1},\rho_{2}) \equiv \mathcal{S}_{n\to1} (\rho_{1},\rho_{2})$\,. The SRD by itself has been shown to enjoy important properties of a measure of distinguishability of quantum states. It is, in particular, positive for all states, unitarily invariant, and decreases under tracing out degrees of freedom \cite{Mueller-Lennert:2013,Wilde:2014eda,Frank_2013,Beigi_2013}. 
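Before turning to the CFT computation, the definition \eqref{eq:SRD} and its basic properties can be made concrete in a finite-dimensional toy setting; a hedged numpy sketch (the fractional matrix powers are taken by eigendecomposition):

```python
import numpy as np

def mat_power(rho, p):
    """Fractional power of a positive-definite Hermitian matrix via eigendecomposition."""
    evals, vecs = np.linalg.eigh(rho)
    return vecs @ np.diag(evals**p) @ vecs.conj().T

def srd(rho1, rho2, n):
    """Sandwiched Renyi divergence S_n(rho1, rho2)."""
    p = (1.0 - n) / (2.0 * n)
    sandwich = mat_power(rho1, p) @ rho2 @ mat_power(rho1, p)
    return np.log(np.trace(mat_power(sandwich, n)).real) / (n - 1.0)

rho1 = np.array([[0.6, 0.1], [0.1, 0.4]])
rho2 = np.array([[0.8, 0.0], [0.0, 0.2]])

assert abs(srd(rho1, rho1, 2)) < 1e-10            # vanishes for identical states
assert srd(rho1, rho2, 2) > 0                     # positive for distinct states
assert srd(rho1, rho2, 3) >= srd(rho1, rho2, 2)   # non-decreasing in n
```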
\begin{figure}[t] \centering \definecolor{ududff}{rgb}{0.30196078431372547,0.30196078431372547,1} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1cm,y=1cm,scale=1] \begin{scope}[local bounding box = cylinder] \draw [rotate around={178.65449714894862:(1,3)},line width=1pt] (1,3) ellipse (1.0008289996875666cm and 0.4998965358776981cm); \draw [line width=1pt] (2,3)-- (2,0); \draw [line width=1pt] (2,0) arc(0:-180:1cm and 0.5cm); \draw [dashed, line width=1pt] (2,0) arc(0:180:1cm and 0.5cm); \draw [line width=1pt] (2,1.5) arc(0:-180:1cm and 0.5cm); \draw [dashed, line width=1pt] (2,1.5) arc(0:180:1cm and 0.5cm); \draw [line width=1pt] (0,3)-- (0,0); \draw [fill=ududff] (0.5040500631758853,1.068897766899791) circle (2pt) node[anchor=south,color=ududff] {$-\frac{l}{2}$}; \draw [fill=ududff] (1.5,1.07) circle (2pt) node[anchor=south,color=ududff] {$\frac{l}{2}$}; \end{scope} \begin{scope}[local bounding box = plane, shift={(5,1.5)}] \draw[decoration = {zigzag,segment length = 0.5mm, amplitude = 0.2mm},decorate,line width=0.25pt] (0.75,0.18) arc(49.03:79.29:1.1cm); \draw [line width=1pt] (-2,-1)-- (-1,1); \draw [line width=1pt] (-1,1)-- (1,1); \draw [line width=1pt] (1,1)-- (2,-1); \draw [line width=1pt] (-2,-1)-- (2,-1); \draw [color=red,thick] (0,0) -- ++(-2pt,-2pt) -- ++(3pt,3pt) ++(-3pt,0) -- ++(3pt,-3pt); \draw [color=red,thick] (0.9,0.8133333333333327) -- ++(-2pt,-2pt) -- ++(3pt,3pt) ++(-3pt,0) -- ++(3pt,-3pt); \end{scope} \draw[shorten >=2mm,shorten <=2mm, thick,-latex] (cylinder) -- node[above]{(i)} (plane); \begin{scope}[local bounding box = moebius, shift={(5,-2)}] \draw [line width=1pt] (-2,-1)-- (-1,1); \draw [line width=1pt] (-1,1)-- (1,1); \draw [line width=1pt] (1,1)-- (2,-1); \draw [line width=1pt] (-2,-1)-- (2,-1); \draw[decoration = {zigzag,segment length = 0.5mm, amplitude = 0.2mm},decorate,line width=0.25pt] (0,0) -- (-1.5,0); \node at (0.8,-0.34) (a) {}; \node at (0.8,0.34) (aa) {}; \draw [color=red,thick] (0.8,-0.34) -- ++(-2pt,-2pt) --
++(3pt,3pt) ++(-3pt,0) -- ++(3pt,-3pt) node[anchor=north, color=red] {$e^{-\frac{i \pi l}{L}}$}; \draw[dashed, line width=0.25pt] (0,0) -- (a); \draw [color=red,thick] (0.8,0.34) -- ++(-2pt,-2pt) -- ++(3pt,3pt) ++(-3pt,0) -- ++(3pt,-3pt) node[anchor=south, color=red] {$e^{\frac{i \pi l}{L}}$}; \draw[dashed, line width=0.25pt] (0,0) -- (aa); \end{scope} \draw[shorten >=2mm,shorten <=2mm, thick,-latex] (plane) --node[right]{(ii)} (moebius); \begin{scope}[local bounding box = moebius_negpow, shift={(0,-2)}] \draw [line width=1pt] (0.5,1)-- (1,1); \draw [line width=1pt] (1,1)-- (2,-1); \draw [line width=1pt] (0.75,-1)-- (2,-1); \draw[decoration = {zigzag,segment length = 0.8mm, amplitude = 0.2mm},decorate,line width=1pt] (0,0) -- (0.5,1); \draw[decoration = {zigzag,segment length = 0.8mm, amplitude = 0.2mm},decorate,line width=1pt] (0,0) -- (0.75,-1); \node at (0.8,-0.34) (a) {}; \node at (0.8,0.34) (aa) {}; \draw [color=red,thick] (0.8,-0.34) -- ++(-2pt,-2pt) -- ++(3pt,3pt) ++(-3pt,0) -- ++(3pt,-3pt); \draw[dashed, line width=0.25pt] (0,0) -- (a); \draw [color=red,thick] (0.8,0.34) -- ++(-2pt,-2pt) -- ++(3pt,3pt) ++(-3pt,0) -- ++(3pt,-3pt); \draw[dashed, line width=0.25pt] (0,0) -- (aa); \begin{scope} \clip (0.17,-0.25) rectangle (0.5,0.25); \draw[line width=0.25pt] (0,0) ellipse (0.45cm and 0.25cm); \end{scope} \node[anchor=west] at (0.45,0) {$\frac{2\pi}{n}$}; \end{scope} \draw[shorten >=2mm,shorten <=2mm, thick, -latex] (moebius) -- node[above]{(iii)} (moebius_negpow); \end{tikzpicture} \caption{Pictorial representation of the geometric setting for the SRD. (i) The reduced density matrix is represented by the sheet with respective operator insertions (red crosses) at 0 and $\infty$. (ii) A M\"obius transformation maps the insertion points to $e^{\pm \frac{i\pi l}{L}}$ and the branch cut to the negative real line. 
(iii) The multiplication by negative fractional powers of the reduced vacuum state is given by cutting out the respective parts of the sheet.} \label{fig:SRD} \end{figure} In particular due to the negative fractional power of $\rho_1$, there is no general method known to compute the SRD for arbitrary states in CFT. However, if $\rho_1$ is the reduced density matrix of the theory's vacuum, then there is a technique introduced in \cite{Lashkari:2018nsl} to express it in terms of correlation functions. Let us recall that the reduced density matrix for a sub-system on the cylinder is represented by a sheet of the complex plane with a branch cut along some fraction of the unit circle with the respective operator insertions at the origin and at infinity of that sheet. In the case of the vacuum the corresponding operator is the identity and, hence, we regard it as no operator insertion. Multiplication of reduced density matrices is represented by gluing them along the branch cut. Now, let us consider the M\"obius transformation \begin{equation}\label{eq:Moebius} w(z) = \frac{z e^{ -i\pi \frac{l}{L}} - 1}{z- e^{-i\pi\frac{l}{L}} }\,, \end{equation} which in particular maps the two insertion points $0$ and $\infty$ of a sheet to $e^{\pm \frac{i\pi l}{L} }$ and the cut to the negative real axis on every sheet. Now, the reduced density operators can be regarded as operators acting on states defined on the negative real axis by rotating them by $2\pi$ and exciting them by locally acting with the respective operators at $e^{\pm \frac{i\pi l}{L} }$. In the case of the vacuum reduced density matrix this now allows one to define fractional powers by rotating by a fractional angle, and even negative powers by rotating by \textit{negative} angles, which basically means removing a portion of the previous sheet. The latter is, however, only possible if no operator insertion is removed. In the present case, the negative power $\frac{1-n}{2n}$ corresponds to an angle $-\pi + \frac{\pi}{n}$.
Hence, this construction only makes sense for $\frac{l}{L}< \frac{1}{n}$.\footnote{In \cite{Moosa:2020jwt} the interested reader can find arguments why this is not simply an artifact of the CFT construction but holds generally when one assumes that the state is prepared from a Euclidean path integral.} If this requirement holds, then $\rho_0^{\frac{1-n}{2n}} \rho_2 \rho_0^{\frac{1-n}{2n}}$ can be interpreted as a part of the complex plane between angles~$\pm \frac{\pi}{n}$ with operator insertions at angles $\pm\frac{\pi l}{L}$. This procedure is pictorially presented in figure \ref{fig:SRD}. Finally, taking the cyclic trace of $n$ copies of it means gluing $n$ of these regions onto each other, which results in a $2n$-point function on the complex plane: \begin{align} \mathcal{F}^{(n)}_{\ket{s}} := \text{Tr}\left(\rho_0^{\frac{1-n}{2n}} \rho_2 \rho_0^{\frac{1-n}{2n}}\right)^n = \left\langle \prod\limits_{k=0}^{n-1} f_{\Gamma_{k,l} \ket{s}}\left(e^{\frac{i\pi l}{L} + \frac{2\pi i k}{n}}\right)f_{\Gamma_{k,-l} \ket{s}}\left(e^{-\frac{i\pi l}{L} + \frac{2\pi i k}{n}}\right) \right\rangle_\mathbb{C} \label{eq:SRDcorr} \end{align} where, in contrast to the previous and following section, $\Gamma_{k,l}$ is the local action of the above M\"obius transformation $w(z)$ followed by a rotation by $e^{\frac{2\pi i k}{n}}$ to obtain the correct gluing. As before, for the dual field one has to consider $w(1/z)$, which is done by replacing $l\to-l$\,. We want to take the opportunity here to give an explicit example of the connection between the rather formal definitions of distinguishability measures and physical features of a theory. The feature in question is the Quantum Null Energy Condition (QNEC), which follows from the so-called Quantum Focusing Conjecture \cite{Bousso:2015mna}. The QNEC gives a lower bound on the stress-energy tensor in a relativistic quantum field theory that depends on the second variation of the entanglement of a subregion.
The QNEC can also be formulated solely in terms of quantum information theoretical quantities and has been shown to be equivalent to the positivity of the second variation of relative entropies \cite{Leichenauer:2018obf}. After the QNEC was proven in free and holographic theories \cite{Bousso:2015wca,Koeller:2015qmn,Malik:2019dpg}, it has since been shown to hold quite generally in the context of Tomita-Takesaki modular theory \cite{Balakrishnan:2017bjg,Ceyhan:2018zfg}. Recently, a generalized version of the QNEC was suggested in \cite{Lashkari:2018nsl} and later proven to hold in free theories in dimensions larger than two \cite{Moosa:2020jwt}. This generalization may be called the `R\'enyi Quantum Null Energy Condition' and is formulated as the positivity of the second variation of sandwiched R\'enyi entropies. The diagonal part of the second variation is simply given by the second derivative of the SRD with respect to the subsystem size. Hence, the R\'enyi Quantum Null Energy Condition can only be true in a theory if every SRD is a convex function of the subsystem size. We will explicitly check whether this holds in our results.
\subsubsection{Trace square distance} The Trace Square Distance (TSD) between two reduced density matrices is given by \begin{equation} T^{(2)}(\rho_{1},\rho_{2}) := \frac{\text{Tr}|\rho_{1} -\rho_{2}|^2}{\text{Tr} \rho_{0}^2} = \frac{\text{Tr} \rho_1^2 + \text{Tr} \rho_2^2 -2\text{Tr}\rho_1\rho_2}{\text{Tr} \rho_{0}^2}\,, \end{equation} where the factor $\text{Tr} \rho_{0}^2$ in particular removes any UV divergences and allows one to directly express the trace square distance in terms of four-point functions on the two-sheeted surface $\Sigma_2$ (see also \cite{Sarosi:2016oks}), \begin{align} T^{(2)}(\rho_{1},\rho_{2}) \equiv\quad& \left\langle f_{\bra{1}}(0_1)f_{\ket{1}}(0_1)f_{\bra{1}}(0_2)f_{\ket{1}}(0_2)\right\rangle_{\Sigma_2} \\ +& \left\langle f_{\bra{2}}(0_1)f_{\ket{2}}(0_1)f_{\bra{2}}(0_2)f_{\ket{2}}(0_2)\right\rangle_{\Sigma_2} \nonumber\\ -& 2 \left\langle f_{\bra{1}}(0_1)f_{\ket{1}}(0_1)f_{\bra{2}}(0_2)f_{\ket{2}}(0_2)\right\rangle_{\Sigma_2}\,.\nonumber \end{align} \noindent Using the uniformization map \eqref{eq:uniformization} with $n=2$ we can express it in terms of four-point functions on the complex plane, \begin{align} \label{eq:TSDcorr} T^{(2)}(\rho_{1},\rho_{2}) \equiv\quad& \left\langle f_{\Gamma_{1,-l}\ket{1}}\left(e^{-\frac{i\pi l}{2L}}\right)f_{\Gamma_{1,l}\ket{1}}\left(e^{\frac{i\pi l}{2L}}\right)f_{\Gamma_{2,-l}\ket{1}}\left(-e^{-\frac{i\pi l}{2L}}\right)f_{\Gamma_{2,l}\ket{1}}\left(-e^{\frac{i\pi l}{2L}}\right)\right\rangle_{\mathbb{C}} \\ + &\left\langle f_{\Gamma_{1,-l}\ket{2}}\left(e^{-\frac{i\pi l}{2L}}\right)f_{\Gamma_{1,l}\ket{2}}\left(e^{\frac{i\pi l}{2L}}\right)f_{\Gamma_{2,-l}\ket{2}}\left(-e^{-\frac{i\pi l}{2L}}\right)f_{\Gamma_{2,l}\ket{2}}\left(-e^{\frac{i\pi l}{2L}}\right)\right\rangle_{\mathbb{C}} \nonumber\\ -& 2 \left\langle f_{\Gamma_{1,-l}\ket{1}}\left(e^{-\frac{i\pi l}{2L}}\right)f_{\Gamma_{1,l}\ket{1}}\left(e^{\frac{i\pi l}{2L}}\right)f_{\Gamma_{2,-l}\ket{2}}\left(-e^{-\frac{i\pi
l}{2L}}\right)f_{\Gamma_{2,l}\ket{2}}\left(-e^{\frac{i\pi l}{2L}}\right)\right\rangle_{\mathbb{C}}\,.\nonumber \end{align} \noindent The trace square distance is manifestly positive and has the great advantage that we can compute it directly in terms of four-point correlators, i.e. there is no need to consider higher-sheeted replica manifolds and no need for any analytic continuation. Different trace distances between (not necessarily descendant) states in 2d CFT have been considered in e.g. \cite{Sarosi:2016oks,Zhang:2019wqo,Zhang:2019itb}. \section{Universal results from the vacuum representation} \label{sec:universal} Most physically interesting conformal field theories contain a unique vacuum that naturally corresponds to the identity field. For the vacuum itself, all the above correlation functions needed to compute the quantum measures become basically trivial. However, the theories also contain the whole vacuum representation, which for example contains the state $L_{-2}\ket{0}$ that corresponds to the holomorphic part of the energy-momentum tensor, $T(z)$. Correlation functions of vacuum descendant fields generically depend on the central charge of the theory and can in principle be computed explicitly by using the Ward identities \eqref{eq:WardIdNpt} or \eqref{eq:rec1} recursively. Since all quantities discussed in section \ref{sec:qmeasures} can be expressed in terms of correlators, we can in principle compute all of them as closed-form expressions, too. However, since we use computer algebra to perform the transformations and compute the correlation functions, computer resources are the biggest limiting factor. Here we present results for all descendants up to conformal weight five and in some cases for the state $L_{-10}\ket{0}$\,. In particular, we want to check how the measures depend on the conformal weight of the states and whether states at the same conformal weight can be regarded as similar.
\subsection{R\'enyi entanglement entropy}\label{sec:renyivac} Only for the first few excited states in the identity tower are the expressions \eqref{eq:RFE} for the second R\'enyi entanglement entropy compact enough to display explicitly. In the case of the first descendant $L_{-2}\ket{0}$, i.e. the state that corresponds to the energy-momentum tensor, we get \begin{align}\label{eq:RFE[-2]} F^{(2)}_{L_{-2}\ket{0}} &= \frac{c^2 \sin ^8(\pi x)}{1024}+\frac{c \sin ^4(\pi x) (\cos (2 \pi x)+7)^2}{1024}+\frac{\sin ^4(\pi x) (\cos (2 \pi x)+7)}{16 c}\\&\quad +\frac{16200 \cos (2 \pi x)-228 \cos (4 \pi x)+120 \cos (6 \pi x)+\cos (8 \pi x)+16675}{32768}\,,\nonumber \end{align} where we defined $x=l/L$\,. The results for the states $L_{-n} \ket{0}$ with $n=3,4,5$ are given in appendix \ref{app:REresultsVac}. The results here agree with those in \cite{Taddia:2016dbm} where available. One important case is the limit of small subsystem size, i.e. when $x\ll 1$. In this limit, to leading order, each of the above $2n$-point functions \eqref{eq:RFE} decouples into $n$ two-point functions. This is because the operator product of a field and its conjugate includes the identity; in the limit $x\to 0$ the respective identity block dominates and takes the form of a product of $n$ two-point functions. Each of these two-point functions is given by the transition amplitude from the state to its dual on the $k$th sheet, which decouples from all other sheets in the limit $x\to 0$. This amplitude is simply the squared norm of the state, i.e. it equals one for normalized states. Hence, we can write \begin{align} \lim_{x\to 0} F_{\ket{s}}^{(n)} &= \prod\limits_{k=1}^n \lim_{x\to 0} \langle f_{\Gamma_{k,l} \ket{s}}\left(w(0_k)\right)f_{\Gamma_{k,-l} \ket{s}}\left(w(\infty_k)\right) \rangle_\mathbb{C}\\ &=\prod\limits_{k=1}^n \bra{s}s\rangle = 1\,. \end{align} \noindent Hence, to order $x^0$ the descendant does not play any role at all.
For the next-to-leading order there are expectations from primary excitations and from the change of the entanglement entropy computed holographically. E.g. in \cite{Bhattacharya:2012mi} it is shown that the change should be proportional to the excitation energy and, in particular, should be independent of $c$. Expanding the explicitly shown results \eqref{eq:RFE[-2]}, \eqref{eq:RFE[-3]}, \eqref{eq:RFE[-4]}, and \eqref{eq:RFE[-5]} we obtain \begin{equation} F_{L_{-n}\ket{0}}^{(2)} = 1 - \frac{n}{2}\left(\pi x\right)^2 + O\!\left(x^4\right)\,, \quad \text{for} ~ n = 2,3,4,5\,,\label{eq:RElowx} \end{equation} which is in agreement with all the above expectations. \begin{figure}[t] \centering \begin{tikzpicture} \node at (0,0) {\includegraphics[width=.45\textwidth]{"plots/RE2CorrVac2"}}; \node at (.47\textwidth,0) {\includegraphics[width=.45\textwidth]{"plots/RE2CorrVac3"}}; \node at (0,-4.7) {\includegraphics[width=.45\textwidth]{"plots/RE2CorrVac4"}}; \node at (.47\textwidth,-4.7) {\includegraphics[width=.45\textwidth]{"plots/RE2CorrVac6"}}; \node at (.7\textwidth,-1.5) {\includegraphics[width=.1\textwidth]{"plots/LegendRE2corrVac"}}; \node at (.4,1.5) {(a)}; \node at (7.5,1.5) {(b)}; \node at (.4,-3.2) {(c)}; \node at (7.5,-3.2) {(d)}; \end{tikzpicture} \caption{The correlator $F^{(2)}_{\ket{s}}$ for (a) $\ket{s} = L_{-2}\ket{0}$, (b) $\ket{s} = L_{-3}\ket{0}$, (c) $\ket{s} = L_{-4}\ket{0}$, (d) $\ket{s} = L_{-5}\ket{0}$, for several values of the central charge.} \label{fig:RE2lowA} \end{figure} In figure \ref{fig:RE2lowA} we show the results for $F^{(2)}_{\ket{s}}$ for the states $\ket{s} = L_{-n}\ket{0}$, $n=2,3,4,5$\,. The first observation is that at large $c$ the correlator shows an oscillating behaviour with oscillation period proportional to $1/n$. In fact, we can also see this from the explicit results \eqref{eq:RFE[-2]}, \eqref{eq:RFE[-3]}, \eqref{eq:RFE[-4]}, and \eqref{eq:RFE[-5]}, where at large central charge the term proportional to $c^2$ dominates.
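The expansion \eqref{eq:RElowx} can be verified directly from \eqref{eq:RFE[-2]} with a short symbolic computation (an illustrative Python/sympy sketch, independent of the Mathematica code in the appendices):

```python
import sympy as sp

x, c = sp.symbols('x c', positive=True)
pi = sp.pi

# explicit result for F^(2) of L_{-2}|0>
F = (c**2 * sp.sin(pi*x)**8 / 1024
     + c * sp.sin(pi*x)**4 * (sp.cos(2*pi*x) + 7)**2 / 1024
     + sp.sin(pi*x)**4 * (sp.cos(2*pi*x) + 7) / (16*c)
     + (16200*sp.cos(2*pi*x) - 228*sp.cos(4*pi*x) + 120*sp.cos(6*pi*x)
        + sp.cos(8*pi*x) + 16675) / 32768)

series = sp.series(F, x, 0, 4).removeO()
# small-subsystem expansion: F = 1 - (n/2)(pi x)^2 + O(x^4) with n = 2
assert sp.simplify(series - (1 - pi**2 * x**2)) == 0
```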
Note that the correlator $F^{(n)}$ can become larger than one at large central charge and, hence, its contribution to the R\'enyi entropy $S_n$ can become negative. For example, in the case of $n=2$ and $\ket{s} = L_{-2}\ket{0}$ this happens at $x=1/2$ for $c\gtrsim 18.3745$. \begin{figure}[t] \centering \begin{tikzpicture} \node at (0,0) {\includegraphics[width=.45\textwidth]{"plots/RE2CorrVac5"}}; \node at (.47\textwidth,0) {\includegraphics[width=.45\textwidth]{"plots/RE2CorrVac7"}}; \node at (0,-4.7) {\includegraphics[width=.45\textwidth]{"plots/RE3CorrVac2"}}; \node at (.47\textwidth,-4.7) {\includegraphics[width=.45\textwidth]{"plots/RE3CorrVac3"}}; \node at (.25\textwidth,-2.1 ) {\includegraphics[width=.25\textwidth]{"plots/LegendRE2corrVacB"}}; \node at (.4,1.5) {(a)}; \node at (7.5,1.5) {(b)}; \node at (.4,-3.2) {(c)}; \node at (7.5,-3.2) {(d)}; \end{tikzpicture} \caption{The correlator $F^{(n)}_{\ket{s}}$ for (a) $n=2$, $\ket{s} = L_{-2}^2\ket{0}$, (b) $n=2$, $\ket{s} = L_{-3}L_{-2}\ket{0}$, (c) $n=3$, $\ket{s} = L_{-2}\ket{0}$, and (d) $n=3$, $\ket{s} = L_{-3}\ket{0}$ for several values of the central charge.} \label{fig:RE2lowB} \end{figure} The vacuum module is degenerate at conformal weights $h=4$ and $h=5$: in addition to the states $L_{-4}\ket{0}$ and $L_{-5}\ket{0}$ there are the states $L_{-2}^2\ket{0}$ and $L_{-3}L_{-2}\ket{0}$, respectively. Their correlators $F^{(2)}_{\ket{s}}$ are shown in figure \ref{fig:RE2lowB} (a) and (b) for different values of the central charge. Interestingly, although their small subsystem behaviour is given by \eqref{eq:RElowx} and hence is the same as for $L_{-4}\ket{0}$ and $L_{-5}\ket{0}$, respectively, their general behaviour at large central charge is rather different! Their oscillation period is not proportional to the conformal weight but to the level of the lowest Virasoro generator appearing in the state.
Already these two examples show that, in particular at large central charge, the behaviour of the R\'enyi entropy and, hence, also of the entanglement entropy of descendant states depends not only on their conformal weight, i.e. the \textit{energy of the state}, but also significantly on how the state is built. In particular, theories with a (semi-)classical gravity dual require large central charge. It is widely believed that black hole microstates in $AdS_3$ correspond to typical high conformal dimension states in the CFT. However, a typical state at conformal dimension $\Delta\gg 1$ is a descendant at level $\Delta/c$ of a primary with conformal dimension $\tfrac{c-1}{c}\Delta$ (see e.g. \cite{Datta:2019jeo}). This means that a typical state will be a descendant at large but finite central charge $c$! The results we present here show that descendants with the same conformal dimension can in fact show very different behaviour when it comes to the entanglement structure. It will be interesting to further study the large $c$ limit, in particular for non-vacuum descendants, to analyse the holographic effect of these different behaviours. Finally, in figure \ref{fig:RE2lowB} (c) and (d) we show the correlator $F^{(3)}$ for the first two excited states $L_{-2}\ket{0}$ and $L_{-3}\ket{0}$. They show qualitatively the same behaviour as the respective correlators for $n=2$ (see figure \ref{fig:RE2lowA} (a) and (b)). However, their dependence on the central charge is stronger and the oscillating behaviour starts at lower $c$. For example, $F^{(3)}_{L_{-2}\ket{0}}$ is larger than one at $x=1/2$ for $c\gtrsim14.74945$. The stronger dependence on the central charge for larger $n$ is expected. Any $F^{(n)}_{\ket{s}}$ can be expanded as \begin{equation} F^{(n)}_{\ket{s}} = \sum_{k=-n+1}^{n} A_k^{(n)} c^k\,, \end{equation} where all the dependence on the state $\ket{s}$ and the relative subsystem size $x=l/L$ sits in the coefficients $A_k^{(n)}$\,.
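For the explicit result \eqref{eq:RFE[-2]} this power structure is easy to verify symbolically: $c\,F$ is a polynomial in $c$ of degree $n+1=3$, i.e. the powers $k$ run over $[-1,2]$. A Python/sympy sketch (for illustration only):

```python
import sympy as sp

x, c = sp.symbols('x c', positive=True)
pi = sp.pi

# F^(2) for L_{-2}|0>
F = (c**2*sp.sin(pi*x)**8/1024
     + c*sp.sin(pi*x)**4*(sp.cos(2*pi*x) + 7)**2/1024
     + sp.sin(pi*x)**4*(sp.cos(2*pi*x) + 7)/(16*c)
     + (16200*sp.cos(2*pi*x) - 228*sp.cos(4*pi*x) + 120*sp.cos(6*pi*x)
        + sp.cos(8*pi*x) + 16675)/32768)

poly = sp.Poly(sp.expand(c*F), c)   # c*F is polynomial in c
assert poly.degree() == 3           # so the powers of c in F range from -1 to 2
# the large-c behaviour is governed by A_2^(2) = sin^8(pi x)/1024
A2 = poly.coeff_monomial(c**3)
assert sp.simplify(A2 - sp.sin(pi*x)**8/1024) == 0
```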
The negative powers of $c$ originate from the normalization of the state. Positive powers of $c$ follow from the Virasoro commutation relations when using the Ward identities. Therefore, at large central charge we get \begin{equation} \left.F^{(n)}_{\ket{s}}\right|_{c\gg 1} \approx A_n^{(n)} c^n\,. \end{equation} \subsection{Sandwiched R\'enyi divergence} As argued in section \ref{sec:SRD}, it is possible to express the sandwiched R\'enyi divergence~\eqref{eq:SRD} for integer parameters $n$ in terms of the $2n$-point functions $\mathcal{F}^{(n)}$, \eqref{eq:SRDcorr}, if $\rho_1$ is the reduced density matrix of the vacuum. In the case of the state $L_{-2}\ket{0}$ we obtain, e.g., \begin{align} \mathcal{F}^{(2)}_{L_{-2}\ket{0}} = & \frac{(\cos (4 \pi x)+7) (-512 \cos (4 \pi x)+128 \cos (8 \pi x)+384) \sec ^8(\pi x)}{16384 c}\\ &+\frac{(\cos (4 \pi x)+7) (847 \cos (4 \pi x)-22 \cos (8 \pi x)+\cos (12 \pi x)+1222) \sec ^8(\pi x)}{16384}\nonumber \end{align} where $x = l/L < 1/2$\,. Expressions for $L_{-n}\ket{0}$, $n=3,4,5$, can be found in appendix \ref{app:SRDresultsvac}.
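A useful symbolic cross-check of this explicit expression (a hedged Python/sympy sketch): its logarithm, i.e. the second SRD, starts at order $x^4$ with coefficient $2h_s^2\pi^4/c = 8\pi^4/c$ for $h_s=2$, in line with the small-subsystem behaviour discussed below:

```python
import sympy as sp

x, c = sp.symbols('x c', positive=True)
pi = sp.pi

# explicit F^(2) for L_{-2}|0> from the text, x = l/L < 1/2
F = ((sp.cos(4*pi*x) + 7)*(-512*sp.cos(4*pi*x) + 128*sp.cos(8*pi*x) + 384)
     * sp.sec(pi*x)**8 / (16384*c)
     + (sp.cos(4*pi*x) + 7)*(847*sp.cos(4*pi*x) - 22*sp.cos(8*pi*x)
        + sp.cos(12*pi*x) + 1222)*sp.sec(pi*x)**8 / 16384)

S = sp.log(F)   # second SRD, n = 2
series = sp.series(S, x, 0, 5).removeO()
# leading small-subsystem behaviour: (2 h^2 / c) pi^4 x^4 with h = 2
assert sp.simplify(series.coeff(x, 4) - 8*pi**4/c) == 0
```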
\begin{figure}[t] \centering \begin{tikzpicture} \node at (0,0) {\includegraphics[width=.3\textwidth]{plots/SRD2vac2.pdf}}; \node at (.3\textwidth,0) {\includegraphics[width=.3\textwidth]{plots/SRD2vac3.pdf}}; \node at (.6\textwidth,0) {\includegraphics[width=.3\textwidth]{plots/SRD2vac4.pdf}}; \node at (0,-.32\textwidth) {\includegraphics[width=.3\textwidth]{plots/SRD2vac6.pdf}}; \node at (.3\textwidth,-.32\textwidth) {\includegraphics[width=.3\textwidth]{plots/SRD2vac5.pdf}}; \node at (.6\textwidth,-.32\textwidth) {\includegraphics[width=.3\textwidth]{plots/SRD2vac7.pdf}}; \node at (.75\textwidth,-2) {\includegraphics[width=.075\textwidth]{plots/LegendSREcorrVac.pdf}}; \node at (-1.15,1.8) {(a)}; \node at (3.35,1.8) {(b)}; \node at (7.85,1.8) {(c)}; \node at (-1.15,-3) {(d)}; \node at (3.35,-3) {(e)}; \node at (7.85,-3) {(f)}; \end{tikzpicture} \caption{The Sandwiched R\'enyi Divergence for $n=2$ between the reduced groundstate and (a) $L_{-2}\ket{0}$, (b) $L_{-3}\ket{0}$, (c) $L_{-4}\ket{0}$, (d) $L_{-5}\ket{0}$, (e) $L_{-2}^2\ket{0}$, (f) $L_{-3}L_{-2}\ket{0}$ for different values of the central charge $c$.} \label{fig:SRE2} \end{figure} Again we first want to draw attention to the small subsystem behaviour of the sandwiched R\'enyi divergence. The results for the second SRD between the reduced vacuum state and all states up to conformal weight five show the small subsystem behaviour \begin{equation} \mathcal{S}^{(2)}_{\ket{s}} = \frac{2 h_s^2}{c} \pi^4 x^4 + \frac{2 h_s^2}{3c} \pi^6 x^6 + O(x^8)\,. \label{eq:SRDsmallx} \end{equation} \noindent Its small subsystem behaviour only depends on the central charge and the conformal weight of the respective state and is independent of the specific structure of the state! In case of $n=2$, the SRD diverges at $x=1/2$. 
We find the behaviour \begin{equation}\label{eq:srd_vac_divergence} \mathcal{F}^{(2)}_{\ket{s}} = \exp\left(\mathcal{S}^{(2)}_{\ket{s}}\right) = \frac{A_{\ket{s}}}{\pi^{4 h_s}\left(x-\frac12\right)^{4h_s}} \, , \end{equation} where the coefficient $A_{\ket{s}}$ depends on the specifics of the state. For states of the form $L_{-n}\ket{0}$ up to $n=10$ it takes the form \begin{equation} A_{L_{-n}\ket{0}} = \binom{2n-1}{n-2}^2\,. \end{equation} In figure \ref{fig:SRE2} we show the SRD for the first six excited states. All of them show a plateau at small values of $x$ that increases for larger $c$ and shrinks for higher energy. This is expected from the asymptotic result \eqref{eq:SRDsmallx}. Interestingly, although in the asymptotic regimes, i.e. at $x\to 0$ and $x\to1/2$, the second SRD for the states $L_{-2}^2\ket{0}$ and $L_{-3}L_{-2}\ket{0}$ behaves similarly to that for the states $L_{-4}\ket{0}$ and $L_{-5}\ket{0}$ with the same conformal weight, it looks quite different in the intermediate regime of $x$. In particular, it is more sensitive to the central charge. This shows again that descendant states at the same conformal dimension can behave quite differently, in particular at large central charge. In all plots so far the second SRD appears to be a convex function of the relative subsystem size $x = l/L$. However, at small central charge it is not, i.e. there are regions of $x$ with $\frac{\partial^2 \mathcal{S}^{(2)}}{\partial x^2} <0$. For example, for $\ket{s} = L_{-2}\ket{0}$ the second SRD is not convex for $c\lesssim 0.1098$\,. This shows that there are examples where the generalized version of the QNEC does not hold! However, conformal field theories with central charges smaller than 1/2 are quite unusual. They cannot be part of the ADE classification of rational, unitary, modular invariant CFTs \cite{Cappelli:1986hf} but could e.g. be logarithmic \cite{Nivesvivat:2020gdj}.
In figure \ref{fig:nonconvex} we show the second SRD for states $L_{-n}\ket{0}$ with $n=2,3,4,5,10$ and $c=1/1000$ to illustrate its non-convexity for all these states. \begin{figure}[t] \centering \includegraphics[width=.7\textwidth]{plots/SRDnonConvex.pdf} \caption{The second Sandwiched R\'enyi Divergence for the states $L_{-n}\ket{0}$, $n=2,3,4,5,10$, at central charge $c=1/1000$\,.} \label{fig:nonconvex} \end{figure} \subsection{Trace squared distance}\label{sec:vacSRD} Again, only the expressions for the first few excited states are compact enough to display explicitly. For example, the TSD between the vacuum and the state $L_{-2}\ket{0}$ is given by \begin{align} T^{(2)}_{L_{-2}\ket{0},\ket{0}} &= \frac{c^2 \sin ^8(\pi x)}{1024}-\frac{1}{512} c \sin ^6(\pi x) (\cos (2 \pi x)+15)+\frac{\sin ^4(\pi x) (\cos (2 \pi x)+7)}{16 c}\nonumber\\ &\quad+\frac{-32768 \cos (\pi x)+8008 \cos (2 \pi x)-228 \cos (4 \pi x)}{32768} \label{eq:TSD21}\\ &\quad+\frac{120 \cos (6 \pi x)+\cos (8 \pi x)+24867}{32768}\,,\nonumber \end{align} where we use the abbreviation $x = \frac{l}{L}$ again. Some other explicit expressions can be found in appendix \ref{app:TSDvacResults}.
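The limits of this expression can again be checked symbolically. The following short sketch is our own illustration (sympy, variable names ours): it confirms that the TSD above vanishes like $\tfrac{2+c}{4c}\pi^4 x^4$ for small $x$ and approaches the pure-state value $2$ at $x\to1$.

```python
# Check the limits of the TSD between the vacuum and L_{-2}|0> quoted above
# (our own sympy sketch, not part of the paper's code).
import sympy as sp

x, c = sp.symbols('x c', positive=True)
u = sp.pi * x
T = (c**2*sp.sin(u)**8/1024
     - c*sp.sin(u)**6*(sp.cos(2*u) + 15)/512
     + sp.sin(u)**4*(sp.cos(2*u) + 7)/(16*c)
     + (-32768*sp.cos(u) + 8008*sp.cos(2*u) - 228*sp.cos(4*u)
        + 120*sp.cos(6*u) + sp.cos(8*u) + 24867)/32768)

# Leading small-x coefficient: (2+c)/(16c) (h1-h2)^2 pi^4 with h1 = 2, h2 = 0,
# i.e. (2+c) pi^4 / (4c).
lead = sp.expand(sp.series(T, x, 0, 6).removeO()).coeff(x, 4)

# At x -> 1 both reduced states become pure, and since <0|L_{-2}|0> = 0 the
# TSD approaches 2 (1 - |<s1|s2>|^2) = 2.
pure = sp.simplify(T.subs(x, 1))
```

The assertions are exact: `lead` simplifies to $(2+c)\pi^4/(4c)$ and `pure` evaluates to $2$.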
\begin{figure}[t] \centering \begin{tikzpicture} \node at (0,0) {\includegraphics[width=.45\textwidth]{plots/TSDvac21.pdf}}; \node at (.47\textwidth,0) {\includegraphics[width=.45\textwidth]{plots/TSDvac31.pdf}}; \node at (0,-4.7) {\includegraphics[width=.45\textwidth]{plots/TSDvac41.pdf}}; \node at (.47\textwidth,-4.7) {\includegraphics[width=.45\textwidth]{plots/TSDvac32.pdf}}; \node at (.7\textwidth,-1.5) {\includegraphics[width=.1\textwidth]{plots/LegendTSDVac.pdf}}; \node at (-.6,1.5) {(a)}; \node at (6.5,1.5) {(b)}; \node at (-.6,-3.2) {(c)}; \node at (6.5,-3.2) {(d)}; \end{tikzpicture} \caption{The Trace Squared Distance between the reduced states of (a) the vacuum and $L_{-2}\ket{0}$, (b) the vacuum and $L_{-3}\ket{0}$, (c) the vacuum and $L_{-4}\ket{0}$, and (d) the states $L_{-3}\ket{0}$ and $L_{-2}\ket{0}$\, for different values of the central charge $c$.} \label{fig:TSDA} \end{figure} In the limit $x\to 0$ the reduced states have no support and, hence, must be trivial. Consequently, the trace square distance vanishes in this limit independently of the original states we choose. We checked the leading order in $x\ll1$ for all states up to conformal weight five and find the behaviour \begin{equation}\label{eq:tsd_vacuum_smallx} T^{(2)}_{s_1,s_2} = \frac{2+c}{16 c} (h_1-h_2)^2 \pi^4 x^4 + O(x^6)\,. \end{equation} \noindent We can see that to leading order, $x^4$, the TSD depends on the central charge and the difference in conformal weight of the two states. We also see that for large central charge the dependence on $c$ is negligible. If $h_1 - h_2 = 0$, the TSD starts at order $x^8$ for small $x$. For example, we
obtain \begin{align} T^{(2)}_{L_{-2}^2\ket{0},L_{-4}\ket{0}} & = \frac{ (2 c+1)^2 \left(25 c^3+420 c^2+2444 c+4752\right) \pi ^8 x^8}{1600 c (c+8)^2} + O(x^{10}) \label{eq:TSDsmallxdeg1}\\ T^{(2)}_{L_{-3}L_{-2}\ket{0},L_{-5}\ket{0}} & = \frac{9 c \left(25 c^3+420 c^2+2444 c+4752\right) \pi ^8 x^8}{1024 (c+6)^2}+ O(x^{10})\,.\label{eq:TSDsmallxdeg2} \end{align} \noindent Despite one common factor, the latter expressions do not show a straightforward dependence on the structure of the states. They also show that the large $c$ behaviour is more subtle because the $x^8$ coefficient diverges as $c\to\infty$\,. In the opposite limit $x\to1$ the TSD can be computed easily because the states become pure. One obtains \begin{align} \lim_{x\to1} T^{(2)}_{\ket{s_1},\ket{s_2}} &= \frac{\text{Tr}(\ket{s_1}\bra{s_1}^2)+\text{Tr}(\ket{s_2}\bra{s_2}^2)-2 \text{Tr}(\ket{s_1}\bra{s_1}\ket{s_2}\bra{s_2})}{\text{Tr}(\ket{0}\bra{0}^2)}\\ &=2 \left(1- |\bra{s_1}s_2\rangle|^2\right)\equiv \mathcal{T}\,. \end{align} \noindent We can see that $0\le \lim_{x\to1} T^{(2)}(\rho_1,\rho_2)\le 2$, where the first equality holds iff $s_1=s_2$ and the second iff the two states are orthogonal to each other. The explicit results up to conformal weight five show that the expansion around $x=1$ is given by \begin{equation} T^{(2)}_{\ket{s_1},\ket{s_2}} = \mathcal{T} \left(1 - \frac{h_1+h_2}{4} \pi^2 (x-1)^2 + O\!\left((x-1)^4\right) \right)\,. \end{equation} \noindent We can see that the behaviour of the TSD close to $x=1$ depends on the sum of conformal weights $h_1 + h_2$\,. This is in contrast to the small $x$ behaviour, which depends on their difference. Let us, for example, consider the second TSD between the vacuum and $L_{-2}\ket{0}$ (see the explicit expression in \eqref{eq:TSD21}) and the second TSD between the vacuum and $L_{-3}\ket{0}$\, (see the explicit expression in \eqref{eq:TSD31}).
From the difference of conformal weight we get $$T^{(2)}_{L_{-2}\ket{0},\ket{0}}(x) < T^{(2)}_{L_{-3}\ket{0},\ket{0}}(x)$$ for small $x$. However, from the sum of conformal weights we obtain $$T^{(2)}_{L_{-2}\ket{0},\ket{0}}(x) > T^{(2)}_{L_{-3}\ket{0},\ket{0}}(x)$$ for $x$ close to one. We can immediately conclude that there must be an odd number of values $x\in (0,1)$, in particular at least one, with $$T^{(2)}_{L_{-2}\ket{0},\ket{0}}(x) = T^{(2)}_{L_{-3}\ket{0},\ket{0}}(x).$$ \begin{figure}[t] \centering \begin{tikzpicture} \node at (0,0) {\includegraphics[width=.45\textwidth]{plots/TSDvac54.pdf}}; \node at (.47\textwidth,0) {\includegraphics[width=.45\textwidth]{plots/TSDvac76.pdf}}; \node at (.25\textwidth,-2.3 ) {\includegraphics[width=.25\textwidth]{plots/LegendTSDVacB.pdf}}; \node at (-.6,1.5) {(a)}; \node at (6.5,1.5) {(b)}; \end{tikzpicture} \caption{The Trace Square Distance between the degenerate states at (a) $h_s= 4$, i.e. $L_{-4}\ket{0}$ and $L_{-2}^2\ket{0}$, and (b) $h_s = 5$, i.e. $L_{-5}\ket{0}$ and $L_{-3}L_{-2}\ket{0}$, for different values of the central charge $c$. } \label{fig:TSDB} \end{figure} We also visualise some of the results. In figure \ref{fig:TSDA} we show the second TSD between the vacuum $\ket{0}$ and $L_{-n}\ket{0}$ for $n=2,3,4$, and between the first two excited states in the vacuum module, $L_{-2}\ket{0}$ and $L_{-3}\ket{0}$\,. In all these examples the TSD is a monotonic function of $x\in[0,1]$ only for small enough $c$\,. At larger $c$ the function starts to meander and can even exceed 2, the maximum value of the TSD between pure states. However, the reduced density matrices are not pure and it is not a contradiction per se that the TSD behaves like this. Still, it is hard to interpret the quantity as a meaningful measure of distinguishability for large values of $c$ at intermediate values of the relative subsystem size $x=l/L$.
In figure \ref{fig:TSDB} we show the TSD between the two degenerate states at conformal dimension $h_s=4$ and $h_s=5$ for different values of $c$. As expected from the results \eqref{eq:TSDsmallxdeg1} and \eqref{eq:TSDsmallxdeg2} we see a rather large flat region at small $x$. At $x\to1$ they converge to the TSD of the respective pure states. In the regions in between they show qualitatively the same behaviour as the other TSDs. For larger central charge they start to meander, and at very large $c$ the term proportional to $c^2$ dominates, such that the TSD becomes very large, too. \section{Theory dependent results} \label{sec:nonuniversal} For non-vacuum descendant states, using relation \eqref{eq:rec1} recursively allows us to express the correlation function of chiral descendants $f_{\ket{s_i}}$ as a differential operator acting on the correlation function of the respective primary fields \begin{equation}\label{eq:diff} \langle \prod_{i=1}^N f_{\ket{s_i}}(z_i) \rangle = \mathcal{D} \, \langle \prod_{i=1}^N f_{\ket{\Delta_i}}(z_i) \rangle \, . \end{equation} \noindent In general, $\mathcal{D}$ depends on the central charge of the CFT, on the conformal weights of the primary fields, and on the insertion points. As a differential operator it acts on the holomorphic coordinates. In appendix~\ref{app:PrimDesCorr} we provide Mathematica code to compute it analytically. If the correlation function of the primaries is known, then it is possible to compute the descendant correlator through~\eqref{eq:diff}. The correlators in \eqref{eq:RFE}, \eqref{eq:SRDcorr}, and \eqref{eq:TSDcorr} can be written as linear combinations of correlation functions of descendants with coefficients that follow from the respective conformal transformations, i.e.~the uniformization map \eqref{eq:uniformization} in case of the R\'enyi entropy and the trace square distance, and the usual M\"obius transformations \eqref{eq:Moebius} followed by a rotation in case of the sandwiched R\'enyi divergence.
Combining this with \eqref{eq:diff} we can write each of the correlators as \begin{equation} D \bar{D} \langle \prod_{i=1}^N f_{\ket{\Delta_i}}(z_i) \rangle\,, \end{equation} with differential operators $D,\bar{D}$. Since we only consider chiral descendants, $\bar{D}$ is simply given by the anti-chiral part of the transformation of primaries, \begin{equation} \bar{D} = \prod_{k=1}^n \bar{v}_{0;(k,l)}^{\bar{h}_k}\bar{v}_{0;(k,-l)}^{\bar{h}_k}\,. \end{equation} \noindent E.g. for the correlator of the $n$th R\'enyi entropy \eqref{eq:RFE} we simply get $\bar{D} = \sin^{4\bar{h}}(\pi x)$ from the uniformization map. In the following sections we explicitly show the expressions of the differential operators $D\bar{D}$ for the simplest descendant state $L_{-1}\ket{\Delta}$. We will then consider results for higher descendants by acting with the operators on particular primary four-point functions in two specific CFTs, the Ising model and the three-state Potts model. The Ising model is one of the simplest CFTs \cite{DiFrancesco:1997nk}. It is a unitary minimal model with central charge $c=1/2$ and contains three primary operators: the identity, the energy density $\varepsilon$ and the spin field $\sigma$, whose chiral conformal weights are $0$, $1/2$, and $1/16$, respectively.
The $2n$-point correlation functions on the plane of the $\varepsilon$ and $\sigma$ operators are known~\cite{DiFrancesco:1997nk} and, in particular, the four-point correlator of the energy density reads \begin{equation}\label{eq:isingen} \left\langle \varepsilon(z_1,\bar{z}_1) \ldots \varepsilon(z_4 ,\bar{z}_4) \right\rangle = \left| \frac{1}{(z_{12}z_{34})^2} + \frac{1}{(z_{13}z_{24})^2} + \frac{1}{(z_{23}z_{14})^2} \right| \end{equation} while the four-point correlator of the spin is given by \begin{equation}\label{eq:isingsig} \left\langle \sigma(z_1,\bar{z}_1) \ldots \sigma(z_4 ,\bar{z}_4) \right\rangle = \frac{1}{\sqrt{2}} \frac{1}{|z_{14} z_{23}|^{1/4} } \frac{\sqrt{1 + |\eta| + |1-\eta|}}{|\eta|^{1/4}} \,, \end{equation} where $z_{ij} = z_i - z_j$ and $\eta = z_{12}z_{34}/z_{13}z_{24}$ is the cross ratio. Given these expressions, it is possible to study the R\'enyi entanglement entropy and the quantum measures for various descendants of $\varepsilon$ and $\sigma$. The three-state Potts model is the unitary minimal model with $c= 4/5$~\cite{DiFrancesco:1997nk}. It can e.g. be realized as a particular case of the more general $N$-state clock model, which enjoys $\mathbb{Z}_N$ symmetry. For $N=2$ one recovers the Ising model, while the case $N=3$ is equivalent to the three-state Potts model~\cite{Fradkin:1980th,Fateev:1985mm,Fateev:1987vh,Dotsenko:1984if}. Its operator content is richer than that of the Ising model. In particular, it contains six primary operators with conformal weight $0$, $2/5$, $7/5$, $3$, $1/15$, and $2/3$. The dimensions of the thermal operator $\varepsilon$ and the spin field $\sigma$ are $2/5$ and $1/15$, respectively.
Again, a number of correlation functions between operators of the three-state Potts model are known (e.g.~\cite{Fateev:1985mm,Dotsenko:1984if}) and, since we will focus on descendants of the energy operator in the following, we provide here the four-point correlation function of the energy density \cite{Dotsenko:1984if}: \begin{align}\label{eq:pottsen} \left\langle \varepsilon(z_1,\bar{z}_1) \ldots \varepsilon(z_4 ,\bar{z}_4) \right\rangle &= \frac{1}{|z_{13} z_{24}|^{8/5}} \left[ \frac{1}{| \eta (1-\eta) |^{8/5}} \left| _2F_1\left( -\tfrac85, -\tfrac15; -\tfrac25; \eta \right) \right|^2 \right. \nonumber\\ & \phantom{=} \left. - \frac{ \Gamma\left(-\tfrac25\right)^2 \Gamma\left(\tfrac65\right)\Gamma\left(\tfrac{13}5\right)}{\Gamma\left(\tfrac{12}{5}\right)^2 \Gamma\left(-\tfrac15\right)\Gamma\left(-\tfrac85\right)} |\eta(1-\eta)|^{6/5} \left| _2F_1\left( \tfrac65, \tfrac{13}{5}; \tfrac{12}{5}; \eta \right) \right|^2 \right] \end{align} where $ _2F_1 $ is the hypergeometric function. \subsection{R\'enyi entanglement entropy} Let us first consider $F_{\ket{s}}^{(2)}$ with $\ket{s} = L_{-1}\ket{\Delta}$.
As discussed above we can write \begin{equation}\label{eq:ree1prim} F_{\ket{s}}^{(2)} = \bar{D}^{F^{(2)}} D_{L_{-1}}^{F^{(2)}} \, \left\langle f_{\ket{\Delta}}(e^{-\frac12 i\pi x}) f_{\ket{\Delta}}( e^{\frac12 i\pi x} ) f_{\ket{\Delta}}(- e^{-\frac12 i\pi x}) f_{\ket{\Delta}}( - e^{\frac12 i\pi x} ) \right\rangle_\mathbb{C} \end{equation} with $\bar{D}^{F^{(2)}}= \sin ^{4 \bar{h}}(\pi x)$, while $D_{L_{-1}}^{F^{(2)}}$ can be computed to be \begin{align} D_{L_{-1}}^{F^{(2)}} =& \, \frac{1}{64} \sin ^{4 h}(\pi x) \Big[ 4 h^2 (3 \cos (2 \pi x)+5)^2 + \frac{16 \sin ^4(\pi x)}{h^2} \partial_1 \partial_2 \partial_3 \partial_4 \nonumber\\ &+h e^{-\frac{7}{2} i \pi x} \left(3+e^{2 i \pi x}\right)^2 \left(-2 e^{2 i \pi x}+3 e^{4 i \pi x}-1\right) \left( \partial_2 - \partial_4 \right) \nonumber\\ &+ h e^{-\frac{9}{2} i \pi x} \left(1+3 e^{2 i \pi x}\right)^2 \left(2 e^{2 i \pi x}+e^{4 i \pi x}-3\right) \left( \partial_3 - \partial_1 \right) \nonumber\\ &+ 8 \sin ^2(\pi x) (3 \cos (2 \pi x)+5) \left( \partial_1\partial_2 + \partial_3\partial_4 - \partial_2\partial_3 - \partial_1\partial_4 \right)\nonumber\\ & -e^{-3 i \pi x} \left(2 e^{2 i \pi x}+e^{4 i \pi x}-3\right)^2 \partial_2 \partial_4 -e^{-5 i \pi x} \left(2 e^{2 i \pi x}-3 e^{4 i \pi x}+1\right)^2 \partial_1 \partial_3 \nonumber\\ & + \frac{1}{h}e^{-\frac{7}{2} i \pi x} \left(-1+e^{2 i \pi x}\right)^3 \left(3+e^{2 i \pi x}\right) \left( \partial_1 \partial_2 \partial_4 - \partial_2 \partial_3\partial_4 \right) \nonumber\\ & + \frac{1}{h} e^{-\frac{9}{2} i \pi x} \left(-1+e^{2 i \pi x}\right)^3 \left(1+3 e^{2 i \pi x}\right) \left( \partial_1\partial_3\partial_4 - \partial_1\partial_2\partial_3 \right) \Big]\,, \end{align} where $\partial_n$ is the partial differentiation w.r.t.~the $n$-th insertion point. Unfortunately, already at level 2 the general expressions are too cumbersome to display here explicitly.
Given the four-point correlation functions~\eqref{eq:isingen}, \eqref{eq:isingsig}, \eqref{eq:pottsen}, we can compute $F_{L_{-1}\ket{\Delta}}^{(2)}$ from eq.~\eqref{eq:ree1prim} for $h= 1/2, \, 1/16$ in the Ising model and $h=2/5$ in the three-state Potts model. We performed the same computations for descendants up to level 3 and show the results in figure~\ref{fig:REEprim}; some analytic expressions are given in appendices~\ref{app:REEresultsIsing} and~\ref{app:REEresultsPotts}. \begin{figure}[tb] \centering \begin{tikzpicture} \node at (0,0) {\includegraphics[width=.32\textwidth]{"plots/reeisingen"}}; \node at (.33\textwidth,0) {\includegraphics[width=.32\textwidth]{"plots/reeisingsigma"}}; \node at (.66\textwidth,0) {\includegraphics[width=.32\textwidth]{"plots/reepotts"}}; \end{tikzpicture} \caption{The correlator $F^{(2)}_{\ket{s}}$ for different descendants of $\ket{s} = \ket{\varepsilon}$ and $\ket{s} = \ket{\sigma}$ in the Ising model and $\ket{s} = \ket{\varepsilon}$ in the Potts model.} \label{fig:REEprim} \end{figure} In the Ising model, there is only one physical state in the module of the energy operator at each level up to level 3. A consequence is that $F_{L_{-2}\ket{\varepsilon}}^{(2)} = F_{L_{-1}^2\ket{\varepsilon}}^{(2)}$, even though $D^{F^{(2)}}_{L_{-2}} \neq D^{F^{(2)}}_{L_{-1}^2}$. The same happens at level 3 for the different descendant states $L_{-3}\ket{\varepsilon}$, $L_{-1}^3\ket{\varepsilon}$ and $L_{-2}L_{-1}\ket{\varepsilon}$. As expected, our results reflect this. For $\sigma$ descendants, again there is only one physical state at level 2 and $F_{L_{-2}\ket{\sigma}}^{(2)} = F_{L_{-1}^2\ket{\sigma}}^{(2)}$, but at level 3 there are two physical states and $L_{-3}\ket{\sigma}$, $L_{-1}^3\ket{\sigma}$ and $L_{-2}L_{-1}\ket{\sigma}$ produce different REEs as shown in figure~\ref{fig:REEprim}.
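The degeneracy pattern underlying these equalities can be checked directly with the Virasoro algebra. A minimal sketch of ours, using the standard level-2 Gram matrix entries $\bra{h}L_2L_{-2}\ket{h} = 4h+\tfrac{c}{2}$, $\bra{h}L_2L_{-1}^2\ket{h} = 6h$ and $\bra{h}L_1^2L_{-1}^2\ket{h} = 4h(2h+1)$: the Gram determinant vanishes precisely when there is a null vector, i.e. only one physical state, at level 2.

```python
# Level-2 Gram determinant of {L_{-2}|h>, L_{-1}^2|h>}; it vanishes iff the
# module has a null vector at level 2 (our own check, exact rational arithmetic).
from fractions import Fraction

def level2_gram_det(h, c):
    # det of [[4h + c/2, 6h], [6h, 4h(2h+1)]]
    return (4*h + c/2) * (4*h*(2*h + 1)) - (6*h)**2

# all three modules discussed in the text are degenerate at level 2:
det_ising_energy = level2_gram_det(Fraction(1, 2), Fraction(1, 2))    # epsilon, Ising
det_ising_spin   = level2_gram_det(Fraction(1, 16), Fraction(1, 2))   # sigma, Ising
det_potts_energy = level2_gram_det(Fraction(2, 5), Fraction(4, 5))    # epsilon, Potts
```

All three determinants vanish, consistent with the level-2 equalities of the REEs discussed above.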
Notice that the REEs for the different descendants of $\sigma$ at level 3 have a similar behaviour for small values of $x$, but are clearly distinguishable for $x \sim 1/2$. For descendant states of the energy density of the three-state Potts model there is again only one physical state at level 2 and two physical states at level 3. Similarly to the case of descendants of $\sigma$ in Ising, we found that $F_{L_{-2}\ket{\varepsilon}}^{(2)} = F_{L_{-1}^2\ket{\varepsilon}}^{(2)}$ but the different descendants that we considered at level 3 produced different REEs, as plotted in figure~\ref{fig:REEprim}. Notice that also in Potts the small $x$ behaviour is determined by the level and not by the state configuration, while all the curves are distinguishable for $x\sim 1/2$. In particular, $F_{L_{-1}^3\ket{\varepsilon}}^{(2)}$ behaves more like $F_{L_{-1}\ket{\varepsilon}}^{(2)}$ than $F_{L_{-3}\ket{\varepsilon}}^{(2)}$ for $x\sim 1/2$, while the plot of $F_{L_{-2}L_{-1}\ket{\varepsilon}}^{(2)}$ is very similar to $F_{L_{-3}\ket{\varepsilon}}^{(2)}$. If we expand the analytic results for energy descendants in both the Ising and Potts model for small $x$, we find the behaviour \begin{equation}\label{eq:reesmallx} F_{L_{-n}\ket{\varepsilon}}^{(2)} = 1 - \frac{n + 2h_\varepsilon}{2} (\pi x)^2 + O(x^4) \quad h_\varepsilon= \left\{\begin{matrix}1/2& \text{Ising}\\ 2/5 & \text{Potts}\end{matrix}\right.\,, \quad n=1,2,3\,. \end{equation} \noindent This is in general expected, since for small subsystem size $z_1 \sim z_2$ and $z_3 \sim z_4$, and to first order the four-point function is $(h=\bar{h}=\Delta/2)$ \begin{equation} \left\langle f_{\ket{\Delta}}(z_1, \bar{z}_1) f_{\ket{\Delta}}(z_2, \bar{z}_2) f_{\ket{\Delta}}(z_3, \bar{z}_3) f_{\ket{\Delta}}(z_4, \bar{z}_4) \right\rangle_\mathbb{C} \simeq \frac{1}{| z_{12} z_{34}|^{4 h}}\,.
\end{equation} \noindent Then, using this correlation function in~\eqref{eq:ree1prim} as well as in the corresponding equations for higher descendants and taking the small $x$ limit we reproduce precisely eq.~\eqref{eq:reesmallx}, which is the natural generalization of eq.~\eqref{eq:RElowx}, in agreement with~\cite{Alcaraz:2011tn}. However, the leading behaviour of $F^{(2)}_{L_{-n} \ket{\sigma}}$ is different from the one outlined in~\eqref{eq:reesmallx}. This happens because in the OPE of two Ising spin operators there is an additional contribution that is absent in the OPE of two energy operators and subleading in the case of Potts. Indeed, consider in general the OPE between two primary fields \begin{equation}\label{eq:ope_light} f_{\ket{\Delta_i}} (z_1,\bar{z}_1) f_{\ket{\Delta_i}}(z_2,\bar{z}_2) = \frac{1}{|z_{12}|^{4 h_i}} + \frac{C^k_{ii} \, f_{\ket{\Delta_k}}(z_2,\bar{z}_2)}{|z_{12}|^{4 h_i - 2 h_k}} + \ldots \, , \end{equation} where we included the contribution from the lightest primary field $f_{\ket{\Delta_k}}$ appearing in the OPE of $f_{\ket{\Delta_i}}$ with itself. Then, to this order the four-point function for $z_1 \sim z_2$ and $z_3 \sim z_4$ becomes \begin{align}\label{eq:corr_light} \left\langle f_{\ket{\Delta_i}}(z_1, \bar{z}_1) \ldots f_{\ket{\Delta_i}}(z_4, \bar{z}_4) \right\rangle_\mathbb{C} &\simeq \frac{1}{| z_{12} z_{34}|^{4 h_i}} + \frac{(C^{k}_{ii})^2}{|z_{12}z_{34}|^{4h_i - 2 h_k}} \frac{1}{|z_{24}|^{4 h_k}} \end{align} so that \begin{equation}\label{eq:reesmallxsubleading} F_{L_{-n}\ket{\Delta_i}}^{(2)} = 1 - \frac{n + 2 h_i}{2} (\pi x)^2 + \left(C^{k}_{ii}\right)^2 \! \left( \frac{ c (n-1)^2 + 4 n h_i + 2 n^2 (h_k -1) h_k }{ c (n-1)^2 + 4 n h_i }\right)^2 \! \left( \frac{\pi x}{2} \right)^{4 h_k} + \ldots \, . \end{equation} The second term is in general a subleading contribution, e.g.~in the Potts model $\varepsilon \times \varepsilon = \mathbb{I} + X$ with $X$ having dimension $7/5$.
However, due to the fusion rule $\sigma \times \sigma = \mathbb{I} + \varepsilon$ in Ising, in this case $h_k = 1/2$, and we see that the second term in~\eqref{eq:reesmallxsubleading} contributes to leading order. Indeed, eq.~\eqref{eq:reesmallxsubleading} with $C_{\sigma\sigma}^\varepsilon = \frac12$ correctly predicts the small $x$ behaviour of $F^{(2)}_{L_{-n}\ket{\sigma}}$ for $n=1,2,3$ that we computed (see appendix~\ref{app:REEresultsIsing}). Some results of the REE in the Ising and three-state Potts models were already considered in~\cite{Palmai:2014jqa,Taddia:2016dbm,Taddia_2013}; we checked that our code produces the same analytic results studied in these references. \subsection{Sandwiched R\'enyi divergence} Consider now the correlator $\mathcal{F}^{(2)}_{\ket{s}}$ related to the SRD as in eq.~\eqref{eq:SRDcorr} with $\ket{s} = L_{-1} \ket{\Delta}$. Then, we find \begin{equation}\label{eq:srdprimlvl1} \mathcal{F}^{(2)}_{\ket{s}} = \bar{D}^{\mathcal{F}(2)} D_{L_{-1}}^{\mathcal{F}(2)} \, \left\langle f_{\ket{\Delta}}(e^{- i\pi x}) f_{\ket{\Delta}}( e^{ i\pi x} ) f_{\ket{\Delta}}(- e^{- i\pi x}) f_{\ket{\Delta}}( - e^{ i\pi x} ) \right\rangle_\mathbb{C}\,. \end{equation} From the anti-chiral part of the conformal transformation we now obtain \begin{equation} \bar{D}^{\mathcal{F}(2)} = 2^{4 \bar{h}} \sin ^{4 \bar{h}}(\pi x) \end{equation} and the differential operator acting on the holomorphic coordinates reads \begin{align} D_{L_{-1}}^{\mathcal{F}(2)} &= \frac{2^{4h}}{h^2} e^{-2 i \pi x} \sin ^{4 h}(\pi x) \left[ 4 h^4 e^{2 i \pi (h+1) x} \left(e^{-2 i \pi x}\right)^h \right. \nonumber\\ &\phantom{=} \left. + 2 h^3 e^{i \pi x} \left(1 - e^{2 i \pi x}\right) ( \partial_1 + \partial_4 - \partial_2 - \partial_3 ) \right. \nonumber\\ &\phantom{=} \left. + h^2 \left(e^{2 i \pi x} - 1\right)^2 ( \partial_1\partial_4 + \partial_2\partial_3 -\partial_1\partial_2 -\partial_1\partial_3 -\partial_2\partial_4 - \partial_3\partial_4 ) \right.
\nonumber\\ &\phantom{=} \left. +4 i h e^{2 i \pi x} \sin ^3(\pi x) ( \partial_1\partial_2\partial_3 + \partial_2\partial_3\partial_4 - \partial_1\partial_3\partial_4 - \partial_1\partial_2\partial_4 ) \right. \nonumber\\ &\phantom{=} \left. + \frac{1}{4} \left(e^{2 i \pi x} -1 \right)^4 e^{2 i \pi (h-1) x} \left(e^{-2 i \pi x}\right)^h \partial_1\partial_2\partial_3\partial_4 \right] \, . \end{align} \noindent We explicitly study the results for descendants up to level 3. The general expressions for $D$ are, however, again too cumbersome to show here. With the four-point functions \eqref{eq:isingen}, \eqref{eq:isingsig}, \eqref{eq:pottsen} we compute $\mathcal{S}^{(2)}_{\ket{s}}$ for the descendants of the energy and spin primary states in Ising and of the energy state in Potts. The results are plotted in figure \ref{fig:SRDprim} and some closed expressions are given for descendants of the energy state of Ising in appendix~\ref{app:SRDising}. \begin{figure}[tb] \centering \begin{tikzpicture} \node at (0,0) {\includegraphics[width=.32\textwidth]{"plots/srdisingen"}}; \node at (.33\textwidth,0) {\includegraphics[width=.32\textwidth]{"plots/srdisingsigma"}}; \node at (.66\textwidth,0) {\includegraphics[width=.32\textwidth]{"plots/srdpotts"}}; \end{tikzpicture} \caption{The Sandwiched R\'enyi Divergence between the reduced groundstate and different descendants of $\ket{\varepsilon}$ and $\ket{\sigma}$ in the Ising model and $\ket{\varepsilon}$ in the Potts model.} \label{fig:SRDprim} \end{figure} As expected, the SRDs start from 0 and diverge at $x=1/2$. We also see from the plots that for higher level descendants the SRD grows more rapidly. In the Ising model degenerate descendants of $\varepsilon$ at level~2 and~3 produce the same SRDs, while for degenerate descendants of $\sigma$ at level~3 we found three different expressions.
However, the differences between the plotted results are so small that the three curves at level~3 overlap in figure~\ref{fig:SRDprim}. The same happens for descendants of $\varepsilon$ in the Potts model. Now, let us check the limit of small subsystem size. Consider the OPE between two primary fields ($h_i = \bar{h}_i = \Delta_i/2$) \begin{equation}\label{eq:ope_srd} f_{\Delta_i} (z_1,\bar{z}_1) f_{\Delta_i}(z_2,\bar{z}_2) = \frac{1}{|z_{12}|^{4 h_i}} + \frac{2 h_i c^{-1} \, T(z_2)}{z_{12}^{2h_i -2} \bar{z}_{12}^{2 h_i}} + \frac{2 h_i c^{-1} \, \bar{T}(\bar{z}_2)}{z_{12}^{2h_i} \bar{z}_{12}^{2 h_i - 2}} + \ldots \,, \end{equation} where for now we only included the leading contributions from the vacuum module. Then, if we insert this OPE in the four-point function for $z_1 \sim z_2$ and $z_3\sim z_4$ we obtain \begin{align}\label{eq:opecorr_srd} \left\langle f_{\ket{\Delta}}(z_1, \bar{z}_1) \ldots f_{\ket{\Delta}}(z_4, \bar{z}_4) \right\rangle_\mathbb{C} &\simeq \frac{1}{| z_{12} z_{34}|^{4 h}} + \frac{2 h^2 c^{-1}}{|z_{12}z_{34}|^{4 h}} \frac{z_{12}^2 z_{34}^2}{z_{24}^4} + \frac{2 h^2 c^{-1}}{|z_{12}z_{34}|^{4 h}} \frac{\bar{z}_{12}^2 \bar{z}_{34}^2}{\bar{z}_{24}^4} \, . \end{align} \noindent With this expression we can study the limit $x\to 0$ in~\eqref{eq:srdprimlvl1} and similar expressions for higher level descendants. We find \begin{equation}\label{eq:srdsmallxlaw} \mathcal{S}^{(2)}_{L_{-n}\ket{\Delta}} = \frac2c \left( n^2 + 2n h + 2 h^2 \right) ( \pi x)^4 + \ldots \, . \end{equation} \noindent Expanding our analytic results for descendants of the energy in Ising and Potts for $x\to 0$ we found perfect agreement with eq.~\eqref{eq:srdsmallxlaw}. For $\sigma$ descendants, however, the leading order contribution to the SRD in the limit $x\to 0$ is different. 
Indeed, if we think of the OPE as in~\eqref{eq:ope_light} with the correlator~\eqref{eq:corr_light}, then we find the following leading contribution in the SRD for $n=1,2,3$ \begin{equation}\label{eq:srdsmallxsigma} \mathcal{S}^{(2)}_{L_{-n}\ket{\Delta_i}} = \left(C^{k}_{ii}\right)^2 \left( \frac{ c (n-1)^2 + 4 n h_i + 2 n^2 (h_k -1) h_k }{ c (n-1)^2 + 4 n h_i }\right)^2 \left( \pi x \right)^{4 h_k} + \ldots \, . \end{equation} \noindent Since $h_k = 1/2$ for $\ket{\Delta_i} = \ket{\sigma}$ in the Ising model, we see that the contribution from the $\varepsilon$ channel dominates over the one from the energy momentum tensor in~\eqref{eq:srdsmallxlaw}. We checked that~\eqref{eq:srdsmallxsigma} with $C^\varepsilon_{\sigma\sigma} = 1/2$ correctly reproduces the $x\to 0$ limit of our results. It is interesting to consider also the opposite limit $x\to 1/2$ and see how the SRDs scale near the singularity. In this case it is enough to consider the first contribution in the OPE~\eqref{eq:ope_srd}, with the appropriate changes, since with our insertion points the limit $x\to 1/2$ means $z_1 \sim z_4$ and $z_2 \sim z_3$. Then, for $n=1,2,3$ we find the following expression \begin{equation} \mathcal{S}^{(2)}_{L_{-n}\ket{\Delta}} = \log \left( \frac{A_n}{ \pi^{4 (2h + n)} \left( x -\frac12 \right) ^{4 (2h + n)}} \right) + \ldots \end{equation} with \begin{equation} A_n = (-1)^{8 h} \left( \frac{(n-1)(3n-5)(3n-4) \frac{c}{2} + 4 \left( \frac{6^n}{3} -1 \right) h + 2(n+1)^2 h^2 }{ c(n-1)^2 + 4n h } \right)^2 \, . \end{equation} Notice that for $h\to 0$ we recover the same scaling as in~\eqref{eq:srd_vac_divergence}. In all the examples that we considered, the SRD proved to be a convex function of $x$, providing further evidence for the validity of the R\'enyi QNEC in two dimensions \cite{Moosa:2020jwt} for large enough central charge.
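The identity-channel factorization used in these OPE arguments can also be tested numerically against the closed-form spin correlator \eqref{eq:isingsig}. The following small sketch is our own illustration (the insertion points are chosen arbitrarily): it brings two spin insertions together and compares the four-point function with the product of two-point functions; the relative deviation is of the order of the separation $|z_2-z_3|$, consistent with the subleading $\varepsilon$ exchange.

```python
# Numerical check that the Ising spin four-point function factorizes onto the
# identity channel as sigma(z2) -> sigma(z3) (our own illustration).
import math

def sigma_four_point(z1, z2, z3, z4):
    """<sigma(z1)...sigma(z4)> on the plane, as given in the text."""
    eta = (z1 - z2) * (z3 - z4) / ((z1 - z3) * (z2 - z4))
    pref = 1 / (math.sqrt(2) * abs((z1 - z4) * (z2 - z3)) ** 0.25)
    return pref * math.sqrt(1 + abs(eta) + abs(1 - eta)) / abs(eta) ** 0.25

z1, z3, z4 = 0.0, 1.0, 2.0 + 1.0j   # arbitrary, well-separated points
z2 = z3 + 1e-3                      # sigma(z2) close to sigma(z3)

lhs = sigma_four_point(z1, z2, z3, z4)
rhs = 1 / abs((z1 - z4) * (z2 - z3)) ** 0.25   # <sigma sigma><sigma sigma>
rel_err = abs(lhs - rhs) / rhs                 # O(|z2 - z3|) residual
```

With a separation of $10^{-3}$ the relative deviation is well below the percent level.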
\subsection{Trace square distance} Consider now the trace square distance between a primary state $\ket{\Delta}$ and its first descendant $L_{-1}\ket{\Delta}$. Then \begin{equation}\label{eq:TSDprim} T_{L_{-1}\ket{\Delta},\ket{\Delta}}^{(2)} = \bar{D}^{T(2)} D_{L_{-1}}^{T(2)} \, \left\langle f_{\ket{\Delta}}(e^{-\frac12 i\pi x}) f_{\ket{\Delta}}( e^{\frac12 i\pi x} ) f_{\ket{\Delta}}(- e^{-\frac12 i\pi x}) f_{\ket{\Delta}}( - e^{\frac12 i\pi x} ) \right\rangle_\mathbb{C} \, , \end{equation} where again the differential operator on the anti-holomorphic coordinates is simply given by the transformation factor \begin{equation} \bar{D}^{T(2)} = \sin ^{4 \bar{h}}(\pi x) \end{equation} while the differential operator on the holomorphic coordinates is given by: \begin{align} D_{L_{-1}}^{T(2)} &= \frac{1}{64} \sin ^{4 h}(\pi x) \left[ 4 (3 h \cos (2 \pi x)+5 h-4)^2 \right. \\ &\phantom{=} \left. + 2 e^{-\frac{5}{2} i \pi x} \left(2 e^{2 i \pi x}-3 e^{4 i \pi x}+1\right) (3 h \cos (2 \pi x)+5 h-8) \partial_1 \right. \nonumber\\ &\phantom{=} \left. + 2 e^{-\frac{3}{2} i \pi x} \left(2 e^{2 i \pi x}+e^{4 i \pi x}-3\right) (3 h \cos (2 \pi x)+5 h-8) \partial_2 \right. \nonumber\\ &\phantom{=} \left. + h e^{-\frac{9}{2} i \pi x} \left(1+3 e^{2 i \pi x}\right)^2 \left(2 e^{2 i \pi x}+e^{4 i \pi x}-3\right) \partial_3 \right.\nonumber\\ &\phantom{=} \left. - h e^{-\frac{7}{2} i \pi x} \left(3+e^{2 i \pi x}\right)^2 \left(-2 e^{2 i \pi x}+3 e^{4 i \pi x}-1\right)\partial_4 \right. \nonumber\\ &\phantom{=} \left. + 8 \sin ^2(\pi x) (3 \cos (2 \pi x)+5) \left( \partial_3\partial_4 - \partial_2\partial_3 - \partial_1\partial_4 \right) \right. \nonumber\\ & \phantom{=} \left. -e^{-3 i \pi x} \left(2 e^{2 i \pi x}+e^{4 i \pi x}-3\right)^2 \partial_2\partial_4 -4 e^{-i \pi x} (2 i \sin (2 \pi x)+\cos (2 \pi x)-1)^2 \partial_1\partial_3 \right. \nonumber\\ &\phantom{=} \left. + \frac{8}{h} \sin ^2(\pi x) (3 h \cos (2 \pi x)+5 h-8) \partial_1\partial_2 \right.
\nonumber\\ &\phantom{=} \left. + \frac{16}{h} e^{\frac{i \pi x}{2}} \sin ^3(\pi x) (\sin (\pi x)+2 i \cos (\pi x)) \partial_2\partial_3\partial_4 \right. \nonumber\\ &\phantom{=} \left. + \frac{16}{h} e^{-\frac{1}{2} i \pi x} \sin ^4(\pi x) (1-2 i \cot (\pi x)) \left( \partial_1\partial_3\partial_4 - \partial_1\partial_2\partial_3 \right) \right.\nonumber\\ &\phantom{=} \left. + \frac{1}{h}e^{-\frac{7}{2} i \pi x} \left(e^{2 i \pi x} -1 \right)^3 \left(3+e^{2 i \pi x}\right) \partial_1\partial_2\partial_4 + \frac{16}{h^2} \sin ^4(\pi x) \partial_1\partial_2\partial_3\partial_4 \right]\nonumber \end{align} \noindent Again, we limit ourselves to displaying this result, which is the simplest one, since for higher descendants the expressions become much more involved. As in the previous cases, we computed $T^{(2)}_{L_{-n}\ket{\Delta},\ket{\Delta}}$ as in~\eqref{eq:TSDprim} for $n=1,2,3$ and for the degenerate states at level~2 and~3. Then, by using the four-point functions~\eqref{eq:isingen}, \eqref{eq:isingsig}, and \eqref{eq:pottsen} we obtained analytic expressions for the TSD between the primary state and its descendants for the energy and spin operators in the Ising model and for the energy in the three-state Potts model. Figure~\ref{fig:TSDprim} shows the plots of the results, while in appendices~\ref{app:TSDising} and~\ref{app:TSDpotts} we provide some explicit expressions.
\begin{figure}[tb] \centering \begin{tikzpicture} \node at (0,0) {\includegraphics[width=.32\textwidth]{"plots/tsdisingen"}}; \node at (.33\textwidth,0) {\includegraphics[width=.32\textwidth]{"plots/tsdisingsigma"}}; \node at (.66\textwidth,0) {\includegraphics[width=.32\textwidth]{"plots/tsdpotts"}}; \end{tikzpicture} \caption{TSD between different descendants and their primary state, for $\ket{\varepsilon}$ and $\ket{\sigma}$ in the Ising model and for $\ket{\varepsilon}$ in the Potts model.} \label{fig:TSDprim} \end{figure} In the Ising model we find that degenerate states of the energy density produce the same TSD w.r.t.~the primary state up to level~3. This again is as expected. For spin descendants instead this is not true at level~3, with $T^{(2)}_{L_{-3}\ket{\Delta},\ket{\Delta}} \neq T^{(2)}_{L_{-1}^3\ket{\Delta},\ket{\Delta}} \neq T^{(2)}_{L_{-2}L_{-1}\ket{\Delta},\ket{\Delta}}$. However, in the small and large subsystem size limits we see that these different expressions have the same behaviour, while they differ the most around $x\sim 1/2$. In the Potts model, TSDs between degenerate states at level~3 and the energy density are again different, but from the plots we see that the difference is barely visible, and in particular for $x\to0$ and $x\to 1$ it is negligible. In the small subsystem size limit, we can generically predict the behaviour of the TSD. Consider for instance the OPE between two primary states as given by~\eqref{eq:ope_srd} and the correlator as in~\eqref{eq:opecorr_srd}. Then, we find the following behaviour in the limit $x\to 0$ for $n=1,2,3$ \begin{equation}\label{eq:tsd_smallx} T^{(2)}_{L_{-n}\ket{\Delta},\ket{\Delta}} = \frac{2+c}{16 c} n^2 (\pi x)^4 + O(x^6) \end{equation} in agreement with the vacuum result in~\eqref{eq:tsd_vacuum_smallx} and in perfect agreement with the analytic results that we found in the Ising and Potts models for energy descendants.
However, for $\sigma$ descendants in the Ising model the next-to-leading-order contribution as $x\to 0$ does not come from the energy momentum tensor but from the energy field $\varepsilon$ in the OPE. Indeed, consider again the OPE as in~\eqref{eq:ope_light} with the correlator~\eqref{eq:corr_light}; then the contribution to the TSD as $x\to 0$ for $n=1,2,3$ reads \begin{align}\label{eq:tsd_smallx_nonvacchannel} T^{(2)}_{L_{-n}\ket{\Delta},\ket{\Delta}} = \left(C_{hh}^{h_k}\right)^2 \left( \frac{2 n^2 (h_k-1)h_k }{c (n-1)^2 + 4 n h} \right)^2 \left(\frac{\pi x}{2} \right)^{4 h_k} + \ldots \, . \end{align} \noindent We see that this term dominates over the one outlined in~\eqref{eq:tsd_smallx} for $h_k < 1$, which is the case for the Ising spin. We checked that~\eqref{eq:tsd_smallx_nonvacchannel} with $C^\varepsilon_{\sigma\sigma} = 1/2$ perfectly matches the small $x$ behaviour of the results for $\sigma$ in appendix~\ref{app:TSDising}. Consider now the large subsystem size limit $x \to 1$. Then, with our coordinates we have $z_1 \sim z_4$ and $z_2 \sim z_3$, and by taking the OPE similarly as in~\eqref{eq:ope_srd} but with appropriate insertion points we find the behaviour \begin{equation} T^{(2)}_{L_{-n}\ket{\Delta},\ket{\Delta}} = 2 - (2h + \frac{n}{2}) \pi^2 (x-1)^2 + \ldots \end{equation} that agrees with the $x\to 1$ limit of the explicit results we found for descendants of the energy in Ising and Potts. Again, for $\sigma$ descendants we need to take into account the contribution from the lightest field in the OPE.
We then find \begin{equation}\label{eq:tsd_largex} T^{(2)}_{L_{-n}\ket{\Delta},\ket{\Delta}} = 2 \left(C^{h_k}_{hh}\right)^2 C_n \left( \frac{\pi}{2} \right)^{4 h_k} (x-1)^{4 h_k} \, , \end{equation} where \begin{align} C_n &= \frac{c^2 (n-1)^4 + 4 c n(n-1) \left( 2(n-1) h + (1 - 2^{n-1}) h_k \right) }{\left( c(n-1)^2 + 4 n h \right)^2} \nonumber\\ &\phantom{=} + \frac{(4n)^2 h^2 - (2n)^3 h h_k + 2n^4(h_k -1)^2 h_k^2}{\left( c(n-1)^2 + 4 n h \right)^2} \, . \end{align} For $\sigma$ in the Ising model $h_k = 1/2$, and we see that the contribution from the $\varepsilon$ channel adds to the leading correction in~\eqref{eq:tsd_largex}. Once this is taken into account, we correctly match the large $x$ limit of the $\sigma$ expressions in appendix~\ref{app:TSDising}. \section{Conclusion and outlook} In this work we showed how to systematically compute the R\'enyi entanglement entropy, the sandwiched R\'enyi divergence and the trace square distance of generic descendant states reduced to a single interval subsystem in a conformal field theory. In practice the computations can be performed with the help of computer algebra programs and with the implementation of a recursive function that computes any correlator of descendants as a (differential) operator acting on the correlator of the respective primaries. We explicitly computed the aforementioned quantum measures for rather low excitations in the vacuum module and for excitations of primaries in the Ising model and the three-state Potts model. In particular, from the results in the vacuum module we saw that degenerate descendant states only show equal behaviour for small subsystem sizes. At large central charge any of the above quantities behaved very differently for degenerate states, as outlined already in sec.~\ref{sec:renyivac}. This may be a hint that even more generally the holographic R\'enyi entanglement entropy can be very different between degenerate descendant states.
This analysis goes beyond the scope of the present paper, but can be tackled with the code we presented. We also checked explicitly whether predictions from the generalized version of the QNEC \cite{Lashkari:2018nsl,Moosa:2020jwt} hold for descendant states, namely that the sandwiched R\'enyi divergence is a convex function of the subsystem size. In all the cases we checked in the Ising and Potts models, the SRD is indeed a convex function. Nonetheless, we could show that for small but positive central charge, the SRD of descendant states in fact becomes non-convex. However, as already stated in section \ref{sec:vacSRD}, theories with central charge smaller than 1/2 are quite unusual. Many of the analytic expressions that we obtained are too large to display explicitly. However, showing the results in the small subsystem size limit is possible, and they are always in agreement with the expectations from taking the respective limits in the operator product expansion. One very particular result in this limit is that the difference between degenerate states is not visible. Only with increasing subsystem size does the difference between degenerate states become visible (e.g. in the numerous plots we show). The existing code that led to our results is openly accessible and can be used to compute the former quantities for more descendant states or in different models. One could for example consider quasiprimary states, i.e. $sl_2$ invariant descendant states in the module, and check whether they behave specially compared to generic descendant states. Other interesting states to study might be those that correspond to currents of the KdV charges (see e.g. \cite{Sasaki:1987mm,Brehm:2019fyy}). The code can also be modified easily to compute other (quantum information theoretical) quantities as long as it is possible to express them in terms of correlation functions. There is for instance a so-called R\'enyi relative entropy (e.g.
considered in \cite{Sarosi:2016oks}) that could be computed with the methods presented here. There are also various directions to explore to improve the code, e.g. the possibility to use symmetries in the construction that might speed up the computations significantly. A faster and more efficient code would allow one to compute higher R\'enyi indices or higher descendants within reasonable time and without too much memory consumption. \subsubsection*{Acknowledgments} We thank Stefan Theisen for comments on the draft of the paper. MB is supported by the International Max Planck Research School for Mathematical and Physical Aspects of Gravitation, Cosmology and Quantum Field Theory. \newpage \section{Computing 3pt fcts on the plane} \textbf{Goal:\quad } Straightforward recipe to compute the three point function of descendants. \textbf{Notation and definition: \quad } We want to introduce a notation for the states/fields appearing in our expression. Consider the module $\mathcal{V}_h$, whose lowest weight state/primary has conformal weight $h$ and is denoted by $r = \ket{h}$. All descendant states are written as $r = \ket{h,\{(m_i,n_i)\}} = \prod_i L_{-m_i}^{n_i}\ket{h}$. The corresponding fields are given by \begin{equation} f_{ \ket{h,\{(m_i,n_i)\}}} = \prod_i \hat{L}_{-m_i}^{n_i} f_{\ket{h}}\,, \end{equation} where \begin{equation} \hat{L}_{-m} g(w) := \oint_{\gamma_w} \frac{dz}{2\pi i} \frac{1}{(z-w)^{m-1}} T(z) g(w) \end{equation} for any field $g$; $\gamma_w$ is a closed path surrounding $w$. In words: $\hat{L}_{-m} g(w)$ is the $m$th 'expansion coefficient' in the OPE of the energy momentum tensor $T$ with the field $g$.
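The mode manipulations implied by this recipe are easy to automate with a computer algebra system. As a minimal Python sketch (our own illustration, not the code accompanying the paper), the following routine evaluates $\bra{h}L_{m_1}\cdots L_{m_k}\ket{h}$ by repeatedly applying the Virasoro algebra $[L_m,L_n] = (m-n)L_{m+n} + \frac{c}{12}m(m^2-1)\delta_{m+n,0}$:

```python
import sympy as sp

h, c = sp.symbols('h c')

def vev(word):
    """<h| L_{word[0]} ... L_{word[-1]} |h> for a list of mode numbers,
    using L_{n>0}|h> = 0, L_0|h> = h|h>, <h|L_{-n} = 0 (n > 0) and
    [L_m, L_n] = (m - n) L_{m+n} + c/12 m (m^2 - 1) delta_{m+n,0}."""
    if not word:
        return sp.Integer(1)
    if word[0] < 0 or word[-1] > 0:
        return sp.Integer(0)
    if word[0] == 0:
        return h*vev(word[1:])
    if word[-1] == 0:
        return h*vev(word[:-1])
    # word[0] = m > 0: commute it through; it annihilates |h> at the end
    m = word[0]
    res = sp.Integer(0)
    for i in range(1, len(word)):
        n = word[i]
        res += (m - n)*vev(word[1:i] + [m + n] + word[i+1:])
        if m + n == 0:
            res += sp.Rational(1, 12)*c*m*(m**2 - 1)*vev(word[1:i] + word[i+1:])
    return res

# level-2 Gram matrix entries
assert sp.expand(vev([2, -2])) == 4*h + c/2
assert sp.expand(vev([1, 1, -2])) == 6*h
assert sp.expand(vev([1, 1, -1, -1])) == 8*h**2 + 4*h
```

The assertions reproduce the familiar level-2 Gram matrix of the states $L_{-2}\ket{h}$ and $L_{-1}^2\ket{h}$.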
\sectionlineA \textbf{Some useful info:\quad } (1) The (chiral) energy-momentum tensor on the plane is given by \begin{equation} T(z) = \sum_{n\in\mathbb{Z}} L_n z^{-n-2}\,, \end{equation} where $L_n = \oint_{\mathcal{C}_0}\frac{dz}{2\pi i}z^{n+1} T(z)$ are the Virasoro generators. (2) A correlation function is always defined with respect to a state. In radial quantization the state and its dual 'live' at the origin and at infinity.\footnote{More generally one can think of different in-state and out-state, that live in radial quantization at zero and infinity, respectively.} Usually one considers the vacuum $\ket{\Omega}$ and we do so in what follows, too. Now, consider a closed integral of the energy momentum tensor around infinity ($\rightarrow$ in particular there are no field insertions of any kind for $|z|> |\gamma_\infty|$) inside of a correlation function. It vanishes because \begin{equation} \oint_{\gamma_\infty} \frac{dz}{2\pi i} z^{-n-2} = -\oint_{\gamma_0}\frac{dt}{2\pi i} t^{n} = 0 \end{equation} for $n\neq -1$ and $_\infty\!\bra{\Omega}L_n = 0$ for $n\le1$. Respectively, a closed loop integral of $T$ around $0$ inside a correlation function vanishes because \begin{equation} \oint_{\gamma_0} \frac{dz}{2\pi i} z^{-n-2} = 0 \end{equation} for $n\neq -1$ and $L_n\ket{\Omega}_0 = 0$ for $n\ge-1$. This fact allows us to deform integral contours of $T(z)$ inside of correlation functions through any point, including $\infty$! \sectionlineB Now consider a meromorphic function $\rho(z)$ that has singularities at most at $z\in \left\{0,z_1,z_2,z_3,\infty\right\} $, i.e. at the insertion points and at the singular points of the energy momentum tensor. \textbf{Subgoal:\quad} We want to choose $\rho$ s.t. one can construct (recursive) relations between three point functions of descendants. For now let us choose \begin{equation} \rho(z) = (z-z_1)^{a_1} (z-z_2)^{a_2} (z-z_3)^{a_3} \end{equation} for $a_i\in\mathbb{Z}$, which is in particular regular at $0$.
Now, consider the integral \begin{equation} \sum_{i=1}^3 \oint_{\gamma_{z_i}} \frac{dz}{2\pi i} \rho(z) \left\langle \left(T(z) g_i(z_i) \right) \prod_{j\neq i} g_j(z_j) \right\rangle = - \oint_{\gamma_\infty} \frac{dz}{2\pi i} \rho(z) \left\langle T(z) \prod_{j=1}^3 g_j(z_j) \right\rangle\,, \end{equation} where $g_j$ are some fields, e.g. some descendant fields. The r.h.s. vanishes for $a_1+a_2+a_3\le2$. Finally, we consider the functions \begin{equation} \rho_i(z) = \prod_{j\neq i} (z-z_j)^{a_j} \end{equation} for which we need the expansion around $z_i$, \begin{equation} \rho_i(z) = \sum_{n=0}^\infty \rho_i^{(n)} \, (z-z_i)^n \end{equation} \noindent Now, using the definition of $\hat{L}_m$ and the latter expansion we obtain \begin{equation} \sum_{i=1}^3 \sum_{n=0}^\infty \rho_i^{(n)} \left\langle\left(\hat{L}_{a_i+n-1}g_i(z_i)\right) \prod_{j\neq i} g_j(z_j)\right\rangle = 0\,. \end{equation} \noindent Note that, even if not written explicitly, the sums over $n$ do always terminate for descendant fields $g_i$. Choosing $a_1$, $a_2$, and $a_3$ differently gives different relations among correlation functions.
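The expansion coefficients $\rho_i^{(n)}$ are ordinary Taylor coefficients and can be generated with a computer algebra system. A small sympy sketch (our own illustration) for the choice $a_1=1-m$, $a_2=1$, $a_3=m$ with $m=2$, checked against the closed form for $\rho_1^{(n)}$ quoted below:

```python
import sympy as sp

z, z1, z2, z3 = sp.symbols('z z1 z2 z3')
m = 2                          # example mode number: a1 = 1 - m, a2 = 1, a3 = m
rho1 = (z - z2)*(z - z3)**m    # rho(z) with the (z - z1)^{a1} factor stripped

# Taylor coefficients rho1^(n) of the expansion around z = z1
coeffs = sp.Poly(sp.expand(rho1.subs(z, z1 + z)), z).all_coeffs()[::-1]

# closed form: (z1-z3)^{m-n} [ (z1-z2) C(m,n) + (z1-z3) C(m,n-1) ]
for n, cn in enumerate(coeffs):
    closed = (z1 - z3)**(m - n)*((z1 - z2)*sp.binomial(m, n)
                                 + (z1 - z3)*sp.binomial(m, n - 1))
    assert sp.simplify(cn - closed) == 0
```

The same shift-and-expand step produces $\rho_2^{(n)}$ and $\rho_3^{(n)}$ for any integer choice of the $a_i$.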
\sectionlineA For example, consider $g_i$ to be primaries and choose $a_1 = 1-m$, $a_2 = 0 =a_3$; then the above equation becomes the relation that we find in every CFT book: \begin{equation} \begin{split} 0 &= \left\langle \left(\hat{L}_{-m} g_1(z_1)\right)g_2(z_2) g_3(z_3)\right\rangle \\ &\quad+ (z_2-z_1)^{1-m}\left\langle g_1(z_1) \left( \hat{L}_{-1}g_2(z_2)\right) g_3(z_3)\right\rangle \\ &\quad + (1-m)(z_2-z_1)^{-m}\left\langle g_1(z_1) \left( \hat{L}_{0}g_2(z_2)\right) g_3(z_3)\right\rangle \\ &\quad+ (z_3-z_1)^{1-m}\left\langle g_1(z_1)g_2(z_2) \left( \hat{L}_{-1}g_3(z_3)\right) \right\rangle \\ &\quad + (1-m)(z_3-z_1)^{-m}\left\langle g_1(z_1)g_2(z_2) \left( \hat{L}_{0}g_3(z_3)\right) \right\rangle \end{split} \end{equation} with $\hat{L}_{-1}g_i(z_i) = \partial_{z_i} g_i(z_i)$\,, $\hat{L}_0 g_i(z_i) = h_i g_i(z_i) $, and $\hat{L}_{n>0} g_i(z_i) = 0$ for primaries. \sectionlineB \noindent \textbf{Question: \quad} What is the most clever choice for $a_i$ to obtain nice relations? \noindent \textbf{Suggestion:\quad} Choose $a_1 = 1-m$, $a_2 = 1$, and $a_3=m$ (or permutations thereof). \todo[inline]{Why this choice? $a_1 = 1-m$ gives $L_{-m} g_1 ... = ...
$, $a_2=1$ ensures that we do not increase the level of $g_2$ and $a_3=m$ is the highest possible power we can choose and decreases the level of the third field respectively.} With this choice we get\todo{Yet, I couldn't find a closed expression for $\rho_2^{(n)}$.} \begin{align} \rho_1^{(n)} &= (z_1-z_3)^{m-n} \left((z_1-z_2) \binom{m}{n}+(z_1-z_3) \binom{m}{n-1}\right)\,, n\ge0\,,\\ \rho_3^{(n)} &= -(z_3-z_1)^{-m-n+1} \left((z_1-z_3) \binom{1-m}{n-1}+(z_2-z_3) \binom{1-m}{n}\right)\,, n\ge0\,, \end{align} and the equation \begin{align} \rho_1^{(0)}\left\langle \left(\hat{L}_{-m}g_1(z_1)\right) g_2(z_2)g_3(z_3)\right\rangle & = -\sum_{n=1}^{m+\min(3,\text{lvl}(g_1))} \rho_1^{(n)}\left\langle \left(\hat{L}_{-m+n}g_1(z_1)\right) g_2(z_2)g_3(z_3)\right\rangle \nonumber \\ &\quad - \sum_{n=0}^{\text{lvl}(g_2)} \rho_2^{(n)} \left\langle g_1(z_1) \left(\hat{L}_n g_2(z_2)\right) g_3(z_3) \right\rangle \\ &\quad - \sum_{n=0}^{\text{lvl}(g_3)-m-1} \rho_3^{(n)} \left\langle g_1(z_1) g_2(z_2) \left(\hat{L}_{m+n+1} g_3(z_3)\right) \right\rangle\,. \nonumber \end{align} \noindent In particular, the overall level on the r.h.s. is lower than that of the correlator on the l.h.s.! Hence, if we know all the correlation functions of lower level, then the above expression directly gives the l.h.s. (after using the Virasoro commutation relations to sort the $L_{-n}$s and to get rid of the $L_n$ with $n>0$). \section{4pt function and EE in descendant states} We assume that the system is prepared in a pure state $\rho= \ket{s}\bra{s}$ for some state $s$. Let $\rho_A$ be the reduced density matrix on a finite interval $A = (-\frac{L}2,\frac{L}2)$.
The replica trick shows that \begin{equation} \text{Tr} \rho_A^n = b_n \left\langle \sigma^{(n)}_{-\frac{L}{2}} \sigma^{(n)}_{\frac{L}{2}}\right\rangle_s\,, \end{equation} where $\left\langle \sigma^{(n)}_{-\frac{L}{2}} \sigma^{(n)}_{\frac{L}{2}}\right\rangle_s$ is the two point function of twist operators $\sigma^{(n)}$ with conformal weights $h_n = \frac{c}{24}\left(n-\frac{1}{n}\right) = \bar{h}_n$. Let the state be a descendant of the form \begin{equation} \ket{s} = \prod_i L_{-m_i}^{k_i} \ket{p}\,, \end{equation} where $\ket{p}$ is some primary state. As usual, the corresponding field is $f_{s}(z)=\prod_i \hat{L}_{-m_i}^{k_i} f_{p}(z)$. The in- and out-state are defined at $\pm i \infty$ and, for a properly normalised primary state, we have \begin{equation} \ket{s} = \lim_{w_1\to -i\infty} (2 i w_1)^{H_s} f_{s}(w_1) \ket{0}\, \end{equation} and \begin{equation} \bra{s} = \lim_{w_2\to +i\infty} (2 i w_2)^{H_s} \bra{0} f_s(w_2) \end{equation} where $H_s = h_p + \sum_i k_i m_i$\,. Hence, we can write \begin{align} \left\langle \sigma^{(n)}_{-\frac{L}{2}} \sigma^{(n)}_{\frac{L}{2}}\right\rangle_s &\equiv \bra{s} \sigma^{(n)}_{-\frac{L}{2}} \sigma^{(n)}_{\frac{L}{2}} \ket{s} \\ &=\left. w_1^{2H_s} w_2^{2H_s} \left\langle f_s(w_2)\, \sigma^{(n)}_{-\frac{L}{2}} \sigma^{(n)}_{\frac{L}{2}} f_s(w_1) \right\rangle_0\right|_{w_1\to-i\infty,w_2\to+i\infty} \,. \end{align} \noindent Hence, we are interested in computing the four point function \begin{equation} \left\langle \left(\prod_i\hat{L}_{-m_i}^{k_i} f_{p}(w_2)\right) \sigma^{(n)}_{-\frac{L}{2}} \sigma^{(n)}_{\frac{L}{2}} \left(\prod_i\hat{L}_{-m_i}^{k_i} f_{p}(w_1)\right) \right\rangle_0\,.
\end{equation} \noindent To do so, we proceed as in the last section and introduce \begin{equation} \rho(z) = (z-w_2)^{a_1} \left(z+\frac{L}{2}\right)^{a_2}\left(z-\frac{L}{2}\right)^{a_3} (z-w_1)^{a_4} \end{equation} with $\sum a_i \le 2$\, and \begin{equation} \rho_i(z) = \frac{\rho(z)}{(z-z_i)^{a_i}} \equiv \sum_{n=0}^\infty \rho_i^{(n)} \cdot (z-z_i)^n\,. \end{equation} \noindent This allows us to introduce the following Ward identity for four arbitrary fields $g_i(z_i)$, $i=1,2,3,4$ \begin{equation}\label{eq:WardId4ptfct} \sum_{i=1}^4 \sum_{n=0}^\infty \rho_i^{(n)}\left\langle \left(\hat{L}_{a_i+n-1} g_i(z_i)\right) \prod_{j\neq i} g_j(z_j)\right\rangle = 0\,. \end{equation} \subsection{Simple descendants} We declare \begin{equation} f_s(w) = \hat{L}_{-m} f_{p}(w) \end{equation} a simple descendant. Using the Ward identity \eqref{eq:WardId4ptfct} with $g_1 = f_p,g_2=\sigma^{(n)}=g_3$\,, and $g_4=\hat{L}_{-m}f_p$ and the coefficients $a_1 = 1-m, a_2=a_3=a_4=0$ we obtain \begin{align} \left\langle \hat{L}_{-m} f_{p}(w_2) \sigma^{(n)}_{-\frac{L}{2}} \sigma^{(n)}_{\frac{L}{2}} \hat{L}_{-m} f_{p}(w_1)\right\rangle_0 &= - \sum_{n=0,1} \rho_2^{(n)}\left\langle f_{p}(w_2) \hat{L}_{-1+n}\sigma^{(n)}_{-\frac{L}{2}} \sigma^{(n)}_{\frac{L}{2}} \hat{L}_{-m} f_{p}(w_1)\right\rangle_0\\ &\quad - \sum_{n=0,1} \rho_3^{(n)}\left\langle f_{p}(w_2) \sigma^{(n)}_{-\frac{L}{2}} \hat{L}_{-1+n}\sigma^{(n)}_{\frac{L}{2}} \hat{L}_{-m} f_{p}(w_1)\right\rangle_0 \\ &\quad - \sum_{n=0}^{m+1} \rho_4^{(n)}\left\langle f_{p}(w_2) \sigma^{(n)}_{-\frac{L}{2}} \sigma^{(n)}_{\frac{L}{2}} \hat{L}_{-1+n}\hat{L}_{-m} f_{p}(w_1)\right\rangle_0 \end{align} with \begin{equation} \rho_i^{(n)} = (-1)^n \binom{m+n-2}{n} (z_i-z_1)^{1-m-n}\,. \end{equation} \noindent All terms on the r.h.s. can be reduced to derivative operators acting on the primary 4pt function.
For this we use the derivative operator $\mathcal{L}_{-m}^i$ acting on a four-point function $G(z_k)=\left\langle O_1(z_1)O_2(z_2)O_3(z_3)O_4(z_4) \right\rangle$ of primaries $O_i(z_i)$ \begin{equation} \mathcal{L}_{-m}^i G(z_k) = \sum_{j\neq i}\left[ \frac{(m-1)h_j}{(z_j-z_i)^{m}} -\frac{\partial_{z_j}}{(z_j-z_i)^{m-1}}\right]G(z_k)\,, \end{equation} for $m>1$. \noindent With this, and now $G(z_k)= \left\langle f_{p}(z_1) \sigma^{(n)}_{z_2} \sigma^{(n)}_{z_3} f_{p}(z_4)\right\rangle_0$, we get \begin{align} \sum_{n=0,1} \rho_2^{(n)} \left\langle f_{p}(w_2) \hat{L}_{-1+n}\sigma^{(n)}_{-\frac{L}{2}} \sigma^{(n)}_{\frac{L}{2}} \hat{L}_{-m} f_{p}(w_1)\right\rangle_0 = (\rho_2^{(0)} \partial_{z_2} + \rho_2^{(1)} h_n) \mathcal{L}_{-m}^4 G(z_k)\,, \end{align} and \begin{align} \sum_{n=0,1} \rho_3^{(n)} \left\langle f_{p}(w_2) \sigma^{(n)}_{-\frac{L}{2}} \hat{L}_{-1+n}\sigma^{(n)}_{\frac{L}{2}} \hat{L}_{-m} f_{p}(w_1)\right\rangle_0 = (\rho_3^{(0)} \partial_{z_3} + \rho_3^{(1)} h_n) \mathcal{L}_{-m}^4 G(z_k)\,.
\end{align} \noindent For the last term we use \begin{align} \sum_{n=0}^{m+1} \rho_4^{(n)} \hat{L}_{-1+n}\hat{L}_{-m} f_{p}(z_4) &= \sum_{n=2}^{m-1} \rho_4^{(n)} (m+n-1) \hat{L}_{-m+n-1} f_{p}(z_4) \\ &\quad + \rho_4^{(0)} \hat{L}_{-1} \hat{L}_{-m} f_{p}(z_4) + \rho_4^{(1)} (h_p+m)\hat{L}_{-m} f_{p}(z_4) \\ &\quad+ \rho_4^{(m)} (2m-1) \partial_{z_4} f_{p}(z_4) \\ &\quad + \rho_4^{(m+1)} \left(2m h_p + \frac{c}{12}(m^3-m)\right)f_{p}(z_4) \end{align} \noindent Thus, we get \begin{align} \sum_{n=0}^{m+1} \rho_4^{(n)}\left\langle f_{p}(w_2) \sigma^{(n)}_{-\frac{L}{2}} \sigma^{(n)}_{\frac{L}{2}} \hat{L}_{-1+n}\hat{L}_{-m} f_{p}(w_1)\right\rangle_0 &= \sum_{n=2}^{m-1} \rho_4^{(n)}(m+n-1) \mathcal{L}_{-m+n-1}^4 G(z_k) \\ &\quad + \rho_4^{(0)} \partial_{z_4} \mathcal{L}_{-m}^4 G(z_k) \\ &\quad+ \rho_4^{(1)} (h_p+m)\mathcal{L}_{-m}^4 G(z_k) \\ &\quad+ \rho_4^{(m)} (2m-1) \partial_{z_4} G(z_k) \\ &\quad+ \rho_4^{(m+1)} \left(2m h_p + \frac{c}{12}(m^3-m)\right) G(z_k)\nonumber \end{align} \subsection{Vacuum descendants} In the case of the vacuum, things become a bit simpler because $\hat{L}_m \mathds{1}= 0$ for $m\ge-1$. In addition we only need to act on the two point function at the end, which is completely fixed, i.e. $G(z_k) \equiv G(z_2,z_3)= (z_3-z_2)^{-2h_n}$.
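As a quick cross-check of these operators (our own illustration): applying $\mathcal{L}_{-2}^{1}$ to the twist-field two-point function must reproduce the standard Ward identity for a single insertion of $T$, namely $\left\langle T(z_1)\, \sigma^{(n)}_{z_2} \sigma^{(n)}_{z_3} \right\rangle = h_n \frac{(z_3-z_2)^2}{(z_2-z_1)^2 (z_3-z_1)^2}\, G(z_2,z_3)$. In sympy:

```python
import sympy as sp

z1, z2, z3, z4, hn = sp.symbols('z1 z2 z3 z4 h_n')
zs, hs = [z1, z2, z3, z4], [0, hn, hn, 0]   # identity at z1, z4
G = (z3 - z2)**(-2*hn)                      # twist-field two-point function

def L(m, i, F):
    """Apply the differential operator \\mathcal{L}_{-m}^i (i is 0-based)."""
    return sum((m - 1)*hs[j]*F/(zs[j] - zs[i])**m
               - sp.diff(F, zs[j])/(zs[j] - zs[i])**(m - 1)
               for j in range(4) if j != i)

# <T(z1) sigma(z2) sigma(z3)> = h_n (z3-z2)^2 / ((z2-z1)^2 (z3-z1)^2) * G
lhs = L(2, 0, G)
rhs = hn*(z3 - z2)**2/((z2 - z1)**2*(z3 - z1)**2)*G
assert sp.simplify(sp.expand((lhs - rhs)*(z3 - z2)**(2*hn))) == 0
```

Nested applications such as $\mathcal{L}_{-m}^1\left[\mathcal{L}_{-m}^4 G\right]$ are then just repeated calls of the same function.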
Thus, the derivative operator from above can be simplified to \begin{equation} \mathcal{L}_{-m}^{i=1,4} G(z_k) = \sum_{j=2,3}\left[ \frac{(m-1)h_j}{(z_j-z_i)^{m}} -\frac{\partial_{z_j}}{(z_j-z_i)^{m-1}}\right]G(z_k)\,, \end{equation} such that we can write \begin{align} \left\langle \left(\hat{L}_{-m}\mathds{1}\right)\!(z_1) \sigma^{(n)}_{z_2} \sigma^{(n)}_{z_3} \left(\hat{L}_{-m}\mathds{1}\right)\!(z_4)\right\rangle_0 &= \mathcal{L}_{-m}^1\left[\mathcal{L}_{-m}^4 G(z_2,z_3)\right] \\ & \quad - \sum_{n=0}^{m-1} \rho_4^{(n)}(m+n-1) \mathcal{L}_{-m+n-1}^4 G(z_2,z_3)\\ & \quad - \rho_4^{(m+1)} \frac{c}{12}\left(m^3-m\right) G(z_2,z_3) \end{align} \section*{Response to the referee} Thank you for the detailed report and your suggestions for improvement of the manuscript. \\[1em] We corrected the two typos and changed the comment after eq.~(19). \\[1em] About your comment concerning indistinguishability at large central charge: We agree that the differences we saw for low vacuum descendants do not allow direct conclusions on the behaviour of descendants of heavy primaries. We therefore removed our comments in this direction. We still expect that the difference in the differential operator that is acting on the 4pt function is of order $c^k$ for different descendants, even at the same level. For the second R\'enyi entropy this would lead to a difference proportional to $\log(c)$. This would mean that the difference is only $\log(c)/c$ suppressed. However, we plan to look at this in more detail in future work. \\[1em] About your question concerning the sandwiched R\'enyi divergence blowing up at large subsystem sizes: The quantity diverges in particular if the supports of the density matrices raised to the respective powers do not coincide. When and how this happens, in particular if the states are prepared by a Euclidean path integral, is argued on pages 8 and 9 in 2007.15025 (reference [46], or [53] in the previous version, in our manuscript).
We prefer not to present these arguments in our paper but instead added a footnote on page 11 that highlights this reference. \end{document}
1901.04659
\section{Introduction} Owing to the chemical and structural versatility of their building blocks, colloidal materials can be designed to assemble into a variety of microstructures. One motif that may enable functionality is chains or strings of colloids (so-called "colloidomers"\cite{DropletFJC}). When percolated, these structures can serve as a template for conductivity, facilitating photonic and electronic transfer along the connected interparticle network.~\cite{Tang1D, Velev} Mechanically stable particle networks with high surface-to-volume ratio are also interesting morphologies for nanoporous catalysts,~\cite{Catalyst, catalysis} and open network structures may be advantageous for applications that require materials that can dynamically reconfigure in response to a stimulus. Self-assembling colloidal strings have typically been designed by choosing building blocks with anisotropic interactions commensurate with the targeted morphology. For instance, thin chains of particles have been assembled \emph{in silico} from hard spheres, each decorated with two colinear attractive patches.~\cite{PatchyDuguet, SciortinoJCP2007, BethLinker, SciortinoPatchy} Such directionally specific interactions have been engineered in practice by grafting appropriate functional groups (e.g., DNA) to the surface of colloids.~\cite{DropletFJC, Jasna, PBS} Analogous physics can be achieved via short-range anisotropic dipolar interactions, which have been shown to promote linear chain growth between charged gold nanoparticles.~\cite{chargeAu} Strong dipole interactions have been argued to be the driving force behind the self-organization of other nanoparticles into one-dimensional chains~\cite{Tang237, Dipole1, Dipole2} and three-dimensional percolated fractal chain networks.~\cite{Lin3D} Anisotropic colloid shape can similarly be tailored to obtain flexible colloidal chains or stringy structured fluids.~\cite{LockKey,Polloids} For instance, Sacanna and coworkers employed Fischer's lock 
and key recognition mechanism between a homogeneous sphere and a sphere with a cavity to assemble compact clusters as well as more complex and flexible colloidal polymers. Particle and interaction anisotropy can also be induced by the assembly process itself. Under certain experimental conditions, spherical and uniformly grafted nanoparticles in a homopolymer matrix self-assembled into linear chains.~\cite{Akcora} The short-ranged depletion attraction in these systems is argued to be counterbalanced by the entropy of distortion, which arises when the grafted brushes on two nanoparticles compress due to steric constraints upon approach. This can lead to an anisotropic distribution of the local graft density~\cite{Akcora, Arya} imparting an amphiphilic character to the nanoparticles. Such anisotropic assembly of uniformly grafted nanoparticles has also been predicted via theory as well as simulations where the ligands are modeled explicitly.~\cite{Bedrov, Samanvaya, Jiao, Koerner, Victor, Lafitte, Pana} A few attempts have been made to design an isotropic pair potential that causes a single-component fluid of particles to self-assemble into stringlike structures. To this end, Rechtsman et al.~\cite{Rechtsman} proposed a complex "five-finger potential" containing five repeating attractive wells at intervals set by the particle diameter that are separated by repulsive barriers that inhibit formation of compact objects. In two dimensions, simpler potentials, though still possessing competing attractions and repulsions, have been shown to generate stringy structures.
For example, coarse-graining multicomponent simulations of grafted nanoparticles revealed several single-component isotropic pairwise potentials that promote self-assembly of distinct morphologies: dispersed particles, long strings, and a percolated network.~\cite{Lafitte} Moreover, a class of potentials characterized by a single attractive well followed by a repulsive barrier furnished by a piecewise function of linear components has displayed different microstructures, ranging from monomers to aggregates to short strings to a labyrinthine chain network, as a function of area fraction and range of the repulsion.~\cite{haw} Common to the above studies is the presence of competing interactions, such as a short-range attractive and long-range repulsive (abbreviated SALR) potential, also known to promote the self-assembly of more compact particle clusters. The repulsive interactions in such potentials naturally limit the aggregation that would otherwise be promoted by the attractions. Self-assembly of clusters and strings requires growth that is self-limited--for clusters with respect to their overall size and for strings with respect to the dimension normal to growth. A few studies further reinforce potential connections between compact and elongated cluster morphologies. For instance, it was shown in both experiment~\cite{Bartlett} and simulation~\cite{Sciortino1D,Bolhuis} that both clusters and thick ramified structures are possible when systems possess competing SALR interactions. Similarly, in simulation, an SALR potential was shown to produce percolating states with a mixture of transient filamentous and spherical aggregates when the packing fraction exceeded $0.148$.~\cite{Cardinaux} Motivated by the investigations described above, the aim of this paper is two-fold. 
The first is to use methods of inverse design~\cite{Torquato, Inverse-Jain} (specifically a recent strategy~\cite{RyanRE} based on relative entropy coarse-graining~\cite{Shell,Shell2}) to discover isotropic potentials that promote self-assembly of one-monomer-wide chains or compact clusters in a one-component system of spherical particles. The second aim is to identify, on the basis of the designed interactions, a simpler model pair potential that favors assembly of these and related structures as a function of the length scales of the competing attractive and repulsive interactions. The balance of this paper is structured as follows. Sect.~\ref{sec:methods} outlines the relative entropy based method we adopt for inverse design and presents details of the molecular simulations. The pair potentials and structures resulting from the inverse design for both compact clusters and strings are described in Sect.~\ref{sec:IO}. Motivated by the qualitative forms of the designed interactions, Sect.~\ref{sec:UP} introduces a simpler related pair potential and explores the various morphologies that it favors as a function of its parameters. Conclusions and possible directions for future research are presented in Sect.~\ref{sec:conclusions}. \section{Methods} \label{sec:methods} \subsection{Relative Entropy Coarse-Graining} \label{subsec:RE} Relative entropy (RE) coarse-graining,~\cite{Shell,Shell2} also known as likelihood maximization in probability and statistics, is used in this work to obtain isotropic potentials capable of self-assembling particles into different target structures.
Commonly applied to obtain a reduced dimensionality description of complex molecules for simulation, RE coarse-graining has more recently been used to design isotropic pair interactions that lead to self-organization of a rich variety of equilibrium structures including fluidic clusters,~\cite{RyanIC,RyanRE} porous mesophases,~\cite{BethPores,RyanRE} and crystalline lattices.~\cite{BL_RE,BethFK,RyanRE} In brief, the RE coarse-graining protocol considers a target ensemble of particle configurations that collectively exhibits a desired structural motif (e.g., strings or compact clusters), discussed below. The optimized interactions are those that maximize the overlap of the probability distribution for configurations at equilibrium with that of the target ensemble. Here, we consider an isotropic pair potential, $U(r|\boldsymbol{\theta})$, with $m$ tunable parameters $\boldsymbol{\theta} = [\theta_{1}, \theta_{2}, \cdots, \theta_{m}]$. According to RE coarse-graining, the parameters are updated in an iterative manner via \begin{equation} \label{eqn:RE_eqn} \boldsymbol{\theta}^{k+1} = \boldsymbol{\theta}^{k} + \alpha \int_{0}^{\infty} r^2 [g(r|\boldsymbol{\theta}^{k}) - g_{\text{tgt}}(r)][\nabla_{\boldsymbol{\theta}}{\beta}U(r|\boldsymbol{\theta})]_{\boldsymbol{\theta} = \boldsymbol{\theta}^{k}} dr \end{equation} where $\beta=(k_{B}T)^{-1}$, $k_{B}$ is the Boltzmann constant, $T$ is temperature, $\alpha$ is the learning rate, $g(r|\boldsymbol{\theta}^{k})$ is the radial distribution function of the system in the $k^{th}$ iterative step of the optimization, and $g_{\text{tgt}}(r)$ is the radial distribution function of the target ensemble. In practice, $g(r|\boldsymbol{\theta}^{k})$ is obtained from the equilibrium particle configurations of a molecular simulation using $U(r|\boldsymbol{\theta}^{k})$.~\cite{RyanRE} A rigorous mathematical derivation of the above update scheme is reviewed in Refs.~\citenum{RyanRE},~\citenum{BL_RE}, and~\citenum{William}.
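In practice, Eq.~\ref{eqn:RE_eqn} is a simple gradient-style update. A minimal numpy sketch (our own illustration, specialized to a potential that is linear in its parameters, $\beta U(r|\boldsymbol{\theta}) = \sum_k \theta_k \phi_k(r)$, so that the gradient is just the basis functions $\phi_k(r)$):

```python
import numpy as np

def re_update(theta, r, g_sim, g_tgt, basis, alpha=0.1):
    """One relative-entropy update step for the potential parameters theta.

    basis: array of shape (n_params, n_r) holding phi_k(r), which is the
    gradient of beta*U(r|theta) for a potential linear in its parameters.
    """
    integrand = r**2 * (g_sim - g_tgt) * basis   # broadcasts over parameter rows
    # trapezoidal quadrature along r
    integral = 0.5*np.sum((integrand[:, 1:] + integrand[:, :-1])*np.diff(r),
                          axis=1)
    return theta + alpha*integral

# toy example with two Gaussian basis functions: if the simulated g(r)
# already matches the target, the parameters are left unchanged
r = np.linspace(0.8, 5.0, 500)
basis = np.vstack([np.exp(-(r - 1.0)**2), np.exp(-(r - 2.0)**2)])
theta = np.array([1.0, -0.5])
g = np.ones_like(r)
assert np.allclose(re_update(theta, r, g, g, basis), theta)
```

A vanishing integrand, i.e. $g(r|\boldsymbol{\theta}^k) = g_{\text{tgt}}(r)$, is the fixed point of the iteration; in the actual workflow $g_{\text{sim}}$ would be re-measured from a simulation with the updated potential at every step.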
The outcome of a successful optimization is a thermally non-dimensionalized interaction ${\beta}U_{\text{opt}}(r)$ that yields an equilibrium structure closely mimicking that of the target ensemble. \subsection{Design of Targets} \label{subsec:target} The first step in the inverse design protocol described above is the construction of an ensemble of target configurations from which $g_{\text{tgt}}(r)$ can be computed. Target distributions can be specified in any way that yields an ensemble of desired configurations with convergent statistics. Here, we employ equilibrium statistical mechanics via canonical-ensemble molecular dynamics simulations with $N$ spherical particles of diameter $\sigma$ in a periodically replicated cubic cell of side length $L$ [i.e., packing fraction $\eta = N \pi \sigma^3/(6L^3)$] at temperature $T$. Dimensionless (generally many-body) interparticle potentials $\beta V$, given below, are selected for each target ensemble to yield configurations characteristic of the desired morphology. \subsubsection{Clusters} \label{subsec:tgt_cluster} Target ensembles of monodisperse, compact clusters of size $N_{\text{tgt}} = 2$ (dimer), $4$ (tetramer) and $8$ (octamer) are generated via molecular dynamics simulations in the canonical ensemble at packing fraction $\eta = 0.025$. The following interactions are chosen to mimic the desired target morphology. Particles of diameter $\sigma$ interact with hard-sphere-like repulsions represented via a Weeks-Chandler-Andersen (WCA) potential $V_{\text{\tiny{WCA}}}(r)$, \begin{equation} \label{eqn:WCA} V_{\text{\tiny{WCA}}}(r) = \left\{ \begin{array}{ll} 4 {\varepsilon_{w}}\bigg[ \bigg(\dfrac{\sigma}{r} \bigg)^{12} - \bigg( \dfrac{\sigma}{r} \bigg)^{6} \bigg] + {\varepsilon_{w}}, & r \leq 2^{1/6}\sigma \\ [15pt] 0, & r > 2^{1/6}\sigma \\ \end{array} \right. \end{equation} where $\beta\varepsilon_{w} = 5$.
Particles are assigned to a particular cluster, the compactness of which is enforced by applying an additional finitely extensible non-linear elastic (FENE) spring potential $V_{\text{\tiny{FENE}}}$ between all pairs of particles in the cluster \begin{equation} \label{eqn:FENE} V_{\text{\tiny{FENE}}}(r) = \left\{ \begin{array}{ll} -\dfrac{1}{2}kr_{0}^{2}\ln\bigg[1 - \bigg( \dfrac{r}{r_{0}} \bigg)^{2} \bigg], & r \leq r_{0} \\ [15pt] \infty, & r > r_{0} \\ \end{array} \right. \end{equation} with $k = 30 k_{\text{B}}T/\sigma^{2}$, and $r_{0} = 1.5\sigma$ ($N_{\text{tgt}} = 2$) or $5.0\sigma$ ($N_{\text{tgt}} = 4, 8$). Additionally, a minimum distance of separation between the clusters is ensured by introducing an isotropic Yukawa repulsion between particles in different clusters \begin{equation} \label{eqn:Yukawa} V_{\text{\tiny{Yukawa}}}(r) = \left\{ \begin{array}{ll} \varepsilon_{y}\dfrac{\exp(-\kappa{r})}{r}, & r < r_{\text{cut}} \\ [15pt] 0, & r \geq r_{\text{cut}} \\ \end{array} \right. \end{equation} where $\beta\varepsilon_{y} =$ 30, 10, and 0 for $N_{\text{tgt}} =$ 2, 4, 8, respectively, $\kappa = 0.5{\sigma}^{-1}$, and $r_{\text{cut}} = L/2.5$. After ensuring equilibration, the target radial distribution function $g_{\text{tgt}}(r)$ for the $N_{\text{tgt}} = 2$, $4$, and $8$ cluster fluids is computed. \subsubsection{Strings} \label{subsec:tgt_string} For strings, we create the target ensemble from simulations of linear particle chains of molecular weight $N_{\text{tgt}} = 10$. Monomers interact via the repulsive WCA potential of Eq.~\ref{eqn:WCA} with $\beta\varepsilon_{w} = 1$, and adjacent beads interact via the FENE spring potential $V_{\text{\tiny{FENE}}}$ of Eq.~\ref{eqn:FENE} with $k = 30 k_{\text{B}}T/\sigma^{2}$ and $r_{0} = 1.5\sigma$. Non-bonded monomers also interact with the Yukawa potential $V_{\text{\tiny{Yukawa}}}$ of Eq.~\ref{eqn:Yukawa} with $\beta\varepsilon_{y} = 30$, $\kappa = 0.5{\sigma}^{-1}$, and $r_{\text{cut}} = L/3.0$.
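For concreteness, the WCA, FENE, and Yukawa interactions defined above can be transcribed in a few lines of plain Python (a sketch, not the actual simulation input; the default arguments follow the string-target parameters quoted above, and the $r_{\text{cut}} = 5\sigma$ default is an arbitrary illustrative choice):

```python
import math

def v_wca(r, eps_w=1.0, sigma=1.0):
    """Purely repulsive WCA core; vanishes continuously at r = 2^(1/6) sigma."""
    if r > 2 ** (1 / 6) * sigma:
        return 0.0
    sr6 = (sigma / r) ** 6
    return 4.0 * eps_w * (sr6 ** 2 - sr6) + eps_w

def v_fene(r, k=30.0, r0=1.5):
    """FENE bond between adjacent beads; diverges as r approaches r0."""
    if r >= r0:
        return float("inf")
    return -0.5 * k * r0 ** 2 * math.log(1.0 - (r / r0) ** 2)

def v_yukawa(r, eps_y=30.0, kappa=0.5, r_cut=5.0):
    """Screened repulsion between non-bonded particles, truncated at r_cut."""
    if r >= r_cut:
        return 0.0
    return eps_y * math.exp(-kappa * r) / r
```

Energies here are in units of $k_{B}T$ and lengths in units of $\sigma$, matching the dimensionless potentials $\beta V$ of the text.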
The system of linear chains is allowed to evolve via molecular dynamics and, upon equilibration, $g_{\text{tgt}}(r)$ is calculated at different packing fractions $\eta$ = 0.05, 0.1 and 0.15. \subsection{Simulation Details} \label{simulation} Molecular dynamics simulations for the target structures and the optimization are performed in the canonical ensemble with a periodically replicated cubic simulation cell. The software package HOOMD-blue 2.3.4~\cite{Hoomd1, Hoomd2} is used to generate the target configurations of particles of mass $m$. A time step of $dt = 0.005\sqrt{\sigma^{2}m/k_{B}T}$ is adopted and the Nos\'{e}-Hoover thermostat with a time constant of $\tau = 50dt$ is employed. For target compact clusters of $N_{\text{tgt}} = 2$, $4$, and $8$ at a packing fraction of $\eta = 0.025$, $N = 384$ particles are used in a periodic cell of size $L = 20\sigma$. Target structures for strings are generated using $N = 320$, $420$ and $630$ particles in cubic cells of dimension $L = 15\sigma$, $13\sigma$, and $13\sigma$ for packing fractions $\eta$ = 0.05, 0.10 and 0.15, respectively. Simulations within the iterative RE optimization as well as for the forward runs with the optimized potentials are performed in GROMACS 5.1.2 with a time step of $dt = 0.001\sqrt{\sigma^{2}m/k_{B}T}$. Constant temperature is maintained using the velocity-rescale thermostat with a time constant $\tau = 50dt$. The phase diagram of Sect.~\ref{sec:UP} is generated using $N = 968$ particles in a box of size $L = 15\sigma$ (i.e., $\eta = 0.15$), collecting configurations for $3\times10^7$ time steps. In order to extract improved cluster statistics as well as to check for finite-size effects, simulations in a larger box of size $L = 30\sigma$ with $N = 7744$ particles are also performed for select state points.
For these same state points, simulations are also conducted with different initial configurations (randomly placed particles, compact aggregates, and linear rod-like clusters) to test the robustness and sensitivity of the final structures as well as to ensure equilibrium is attained. The simulation snapshots are rendered using the Visual Molecular Dynamics~\cite{vmd} software. \subsection{Structural Analysis} \label{subsec:StructuralAnalysis} The structures obtained from the simulations described above are characterized by the cluster size distribution (CSD), the distribution of the number of nearest neighbours, the fractal dimension of the resulting aggregates $d_f$, and a percolation analysis. The CSD quantifies the fraction of clusters that contain $n$ particles, where a particle is considered to belong to a cluster if its center is within a prescribed cut-off distance $r_{\text{cut}}$ from the center of at least one other particle in the same cluster. The smallest range of attraction for the optimized potentials in this work is ${\sim}1.1\sigma$, which makes it a natural choice for $r_{\text{cut}}$. For both spherical clusters and strings, cut-offs in the range $1.05\sigma$--$1.25\sigma$ yield negligible changes in the CSD and other structural properties. To distinguish between thick ramified structures and ``thin'' chains one monomer in width, the number of nearest neighbours of each particle is evaluated. Single strands of strings have predominantly two bonds per particle. Additionally, to characterize the anisotropy of the resulting objects, the fractal dimension $d_{f}$ is determined via $R_{g} \sim n^{1/d_{f}}$, where $n$ is the number of particles in a cluster and $R_g$ is the radius of gyration.
The latter for a cluster of size $n$ is defined as \begin{equation} \label{eqn:Rg} R_g(n) = \dfrac{1}{n^{1/2}} \Bigg\langle \Big[ \sum\limits_{i=1}^n (\mathbf{r}_{i} - \mathbf{R}_{\text{CM}})^{2} \Big] \Bigg\rangle^{1/2} \end{equation} where $\mathbf{r}_{i}$ and $\mathbf{R}_{\text{CM}}$ are the coordinates of the $i^{\text{th}}$ particle and the center of mass of the cluster of $n$ particles, respectively. The average is performed over all clusters of size $n$. The fractal dimension is expected to lie between two limits: $\sim$1 for linear objects and $\sim$3 for compact, homogeneous spherical aggregates. For stringlike objects, we sometimes find that a single value of $d_f$ does not satisfactorily fit the data for every aggregate size $n$, in which case we segregate the data that visually appear to have different slopes in the $\log R_g$ versus $\log n$ plot and compute distinct values of $d_f$ for the different regions of $n$. Some of the structures discussed below are found to be percolating. A cluster is classified as percolating if it spans the length of the box in at least one direction such that, under periodic boundary conditions, it wraps around the box and connects to itself. Accounting for periodicity, a percolating cluster is thus infinitely long, and hence $d_f$ and $R_g$ are not computed for percolated structures. If at least 50\% of the configurations contain such a spanning cluster, the resulting morphology is deemed percolating.~\cite{BethLinker} \section{Inverse Design} \label{sec:IO} \subsection{Compact clusters} \label{subsec:IO_Clusters} Given the connection between clusters and stringlike structures described in the Introduction, it is instructive to use inverse design to discover pair potentials that favor these morphologies.
As described in Sect.~\ref{sec:methods}, we first use RE optimization here to determine isotropic pair potentials that lead to self-assembly of compact $N_{\text{tgt}}$-mer clusters, namely, dimers, tetramers and octamers (target sizes of $N_{\text{tgt}} = 2, 4, 8$) at a packing fraction of $\eta$ = 0.025, where the choice of $\eta$ is motivated by prior work.~\cite{RyanIC} In each case, the RE optimization discovers an interaction ${\beta}U_{\text{opt}}(r)$ capable of successfully assembling the target morphology, despite not quantitatively reproducing $g_{\text{tgt}}(r)$. For example, the region of depletion in $g(r)$ due to the repulsion between separate clusters is less pronounced in the equilibrium assembled structure relative to the target ensemble, and the distinct crystalline peaks for the target octamers are mimicked by a more muted profile in the equilibrium assembly~\cite{RyanIC}; a detailed comparison is provided in the Appendix (Fig. A1). Fig.~\ref{fgr:RE_cluster}a shows the optimized pair potentials ${\beta}U_{\text{opt}}(r)$ that form such compact clusters. The two main features of the optimized potentials are an attractive well beginning at $r=\sigma$ and a subsequent repulsive barrier. In accord with prior work,~\cite{RyanIC} the magnitude of the attraction increases and the peak of the repulsive barrier shifts to larger separations with increasing aggregate size. Using the designed pairwise interactions, strong clustering emerges. The corresponding CSDs in Fig.~\ref{fgr:RE_cluster}b demonstrate that the target cluster sizes are reproduced for all three cases of $N_{\text{tgt}}$ = 2, 4 and 8. There is mild polydispersity with respect to aggregation number, though most clusters are within one particle of the targeted size. The self-assembled aggregates are well-separated and behave as an equilibrium cluster fluid. A representative snapshot for $N_{\text{tgt}} = 8$ is shown in Fig.~\ref{fgr:RE_cluster}c.
The compactness of the largest clusters ($N_{\text{tgt}} = 8$) is quantified by the corresponding fractal dimension $d_{f}$, which is given by the inverse of the slope of Fig.~\ref{fgr:RE_cluster}d and is approximately 2.9. \begin{figure} \includegraphics[width=3.37in,keepaspectratio]{Figure1.pdf} \caption{(a) Optimized potentials ${\beta}U_{\text{opt}}(r)$ obtained through inverse design of compact $N_{\text{tgt}}$-mer clusters of particles at packing fraction $\eta = 0.025$. (b) Fraction of clusters $P(n)$ containing $n$ particles, using the above optimized potentials. (c) Simulation snapshot at equilibrium for the potential designed to assemble $N_{\text{tgt}} = 8$ compact clusters. (d) The average radius of gyration $R_g$ of spherical clusters of $N_{\text{tgt}} = 8$, with their corresponding error bars, as a function of cluster size $n$ on a log-log plot. The fractal dimension is the inverse of the slope ($R_{g} \sim n^{1/d_{f}}$) and is found to be approximately 2.9.} \label{fgr:RE_cluster} \end{figure} The $N_{\text{tgt}} = 2$ case of self-assembling dimers shares some complexities with the problem of string formation in that growth must be limited to a single direction. A pair of particles must associate attractively near $r=\sigma$, but formation of triangles, where an incoming particle bonds between the dimer pair at the point of closest approach to both centers (i.e., $r=\sigma$), must be suppressed. Indeed, a dimer can be considered both the smallest cluster and the smallest string. Looking ahead to self-assembly of strings, we might anticipate that the characteristics of the potential optimized for forming dimers (a net-repulsive potential with a relatively narrow attractive well) are favorable for string formation more generally.
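The structural metrics quoted above, defined in Sect.~\ref{subsec:StructuralAnalysis}, can be sketched compactly. The union-find clustering and the least-squares estimate of $d_f$ below are our own minimal implementations (periodic boundaries and error bars omitted), not the production analysis code:

```python
import math
from collections import Counter

def cluster_sizes(positions, r_cut=1.1):
    """Merge particles within r_cut of each other into clusters (union-find)."""
    parent = list(range(len(positions)))

    def find(i):                     # root lookup with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if math.dist(positions[i], positions[j]) < r_cut:
                parent[find(i)] = find(j)
    return Counter(find(i) for i in range(len(positions)))  # root -> size

def radius_of_gyration(coords):
    """R_g of one cluster: root-mean-square distance from the center of mass."""
    n = len(coords)
    cm = [sum(c[k] for c in coords) / n for k in range(3)]
    return math.sqrt(sum((c[k] - cm[k]) ** 2
                         for c in coords for k in range(3)) / n)

def fractal_dimension(ns, rgs):
    """Fit R_g ~ n^(1/d_f) by least squares on log-log axes; return d_f."""
    xs, ys = [math.log(v) for v in ns], [math.log(v) for v in rgs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 1.0 / slope

# toy check: straight chains of n beads give d_f close to 1
ns = [4, 8, 16, 32]
rgs = [radius_of_gyration([(float(i), 0.0, 0.0) for i in range(n)]) for n in ns]
df_linear = fractal_dimension(ns, rgs)

# toy check: a bonded dimer plus a lone monomer yields cluster sizes [1, 2]
sizes = sorted(cluster_sizes([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                              (5.0, 0.0, 0.0)]).values())
```

For percolation, one would additionally test whether a cluster connects to its own periodic image, as described in Sect.~\ref{subsec:StructuralAnalysis}.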
\subsection{Strings} \label{subsec:IO_Strings} As described in Sect.~\ref{sec:methods}, the target ensemble of configurations for particle strings considered here comprises chains of particles of molecular weight $N_{\text{tgt}}=10$. In general, the isotropic pair potentials resulting from the corresponding RE optimizations ${\beta}U_{\text{opt}}(r)$ successfully self-assemble fluids of particle strings. The radial distribution functions $g(r)$ of particles interacting via the optimized potentials capture the salient features of the target structure $g_{\text{tgt}}(r)$, though the depleted region present between $r=\sigma$ and $r=2\sigma$ due to the absence of compact aggregates is less prominent in the former compared to the latter, as shown in the Appendix (Fig. A2). Fig.~\ref{fgr:RE_string_potn} shows ${\beta}U_{\text{opt}}(r)$ at three different packing fractions $\eta$ = 0.05, 0.1 and 0.15. Analogous to the potential optimized for forming dimers in Sect.~\ref{subsec:IO_Clusters}, the attractive well is relatively narrow for all $\eta$. The repulsive barrier, on the other hand, is more sensitive to the packing fraction, increasing in range and magnitude as $\eta$ is reduced. For $\eta$ = 0.05, the pair potential is more complex due to the emergence of secondary features on the scale of a monomer diameter $\sigma$. The repulsive hump for $\eta$ = 0.05 is followed by three secondary attractive minima at $r = 2\sigma$, $3\sigma$, and $4\sigma$, respectively. This potential with alternating attractions and repulsions is qualitatively reminiscent of the ``five-finger potential'' proposed in Ref.~\citenum{Rechtsman} to form colloidal strings. These features are progressively muted as the packing fraction of the target structure and the optimization is increased. For $\eta$ = 0.1, there is a slight hint of a dip at $r = 2\sigma$, which is further reduced for $\eta$ = 0.15, where the repulsion terminates at $r = 2.25\sigma$.
\begin{figure} \includegraphics[width=3.37in,keepaspectratio]{Figure2.pdf} \caption{(a) Optimized isotropic potentials ${\beta}U_{\text{opt}}(r)$ obtained by the inverse design of linear chains at three different packing fractions $\eta$. (b \& c) Simulation snapshots at equilibrium of strings formed using ${\beta}U_{\text{opt}}(r)$ at $\eta$ = 0.05 and 0.15, respectively.} \label{fgr:RE_string_potn} \end{figure} Using the optimized pair potentials, linear stringlike structures are observed to form at all three packing fractions. Fig.~\ref{fgr:RE_string_potn}(b) and (c) show representative simulation snapshots in equilibrium for $\eta$ = 0.05 and 0.15, where stringlike objects are visually apparent. More quantitatively, $R_g$ as a function of string size at the three packing fractions is reported in Figs.~\ref{fgr:RE_string_CSD}a,b. The corresponding fractal dimension for the chainlike structures at $\eta$ = 0.05 and 0.10 is approximately 1.1. The fractal analysis at $\eta$ = 0.15 shows two power laws, $d_f \sim 1.20$ for $n \leq 8$ and $1.80$ for $n > 8$, implying that the shorter chains are more linear while the longer strings are more curved. Thus, the resulting optimized aggregates span from rod-like to chain-like, with the linearity of the chains increasing as $\eta$ decreases. Figs.~\ref{fgr:RE_string_potn}b,c highlight two illustrative examples where it is apparent that the selected chain at $\eta = 0.05$ is more linear than that at $\eta = 0.15$. The chains, at all three volume fractions, predominantly have two nearest neighbours [$P(N_{nn})$ peaks at $N_{nn}=2$]; see Fig.~\ref{fgr:RE_string_CSD}c. It may be somewhat surprising that there is such a large percentage of particles ($\approx 20\%$) with three neighbours, particularly for $\eta=0.05$, where the $d_f$ indicates that the objects are nearly linear. By considering the size $n$ and $N_{nn}$ for every aggregate, we discern that a fraction of aggregates are actually compact clusters.
For example, compact tetramers are characterized by $N_{nn}=3$. Indeed, we find that when the compact clusters are removed from the calculation of $P(N_{nn})$, the value of $P(N_{nn}=3)$ is significantly reduced; see Fig. A3 in the Appendix. The percentages of aggregates that are compact clusters at $\eta$ = 0.05, 0.10 and 0.15 are 28.1\%, 20.1\% and 6.7\%, respectively. Finally, unlike the previous case of compact clusters where distinct peaks are formed at the desired target cluster size from the optimized interactions, here we note no such size-specific assembly for the strings. The CSDs for the optimized potentials are shown in Fig.~\ref{fgr:RE_string_CSD}d, where polydispersity is high and increases with packing fraction. Thus, while our optimization procedure illustrates that isotropic pair potentials can readily assemble monomer-wide stringlike particle structures, such potentials offer limited control over chain length. \begin{figure} \includegraphics[width=3.37in,keepaspectratio]{Figure3.pdf} \caption{(a and b) Average radius of gyration of the clusters self-assembled from the pair potentials of Fig.~\ref{fgr:RE_string_potn} as a function of cluster size. Fractal dimension $d_f$, computed using $R_{g} \sim n^{1/d_{f}}$, is approximately 1.10 for $\eta$ = 0.05 and 0.10. For $\eta$ = 0.15, $d_f \sim$ 1.20 for strings of length $n \leq 8$ and 1.80 for $n > 8$. (c) Nearest neighbour distribution and (d) cluster size distribution of the aggregates obtained through the use of the optimized isotropic potentials of Fig.~\ref{fgr:RE_string_potn} at the specified volume fractions.} \label{fgr:RE_string_CSD} \end{figure} \subsection{Comparison of Optimized Interactions} \label{subsec:IO_HeatMap} Self-assembly is the result of an interplay of energetic and entropic contributions that determine which types of structures minimize the free energy of a given system.
Despite this inherent complexity, we can gain insight into the propensity for a given potential to form either stringlike or compact objects by considering the energy for a test particle to approach a dimer, where ``end-attachment'' to the dimer gives rise to a short string and ``middle-attachment'' yields a compact triangle. A slice of the potential energy landscape seen by the test particle relative to an ideal dimer is shown in Fig.~\ref{fgr:heatmap} as a heat map for two illustrative cases: (a) the potential optimized for compact tetramers at $\eta$ = 0.025 and (b) the potential optimized for strings at $\eta$ = 0.15. The corresponding heat maps for the remaining cluster and string cases are shown in Figs. A4 and A5 in the Appendix. For the compact cluster-forming potential, the energy is lowest when the test particle bonds to both particles of the dimer simultaneously due to the deep attractive well at $r=\sigma$, promoting middle-attachment. Because the potential has been optimized to form compact tetramers, the repulsive corona surrounding the pair of particles in Fig.~\ref{fgr:heatmap}a penalizes the middle- or end-attachment of additional particles necessary to form larger compact clusters or strings. By contrast, when the potential optimized for string formation is used, end-attachment to the dimer is more energetically favorable than middle-attachment, fostering chain growth. End-attachment is favored for this particular interaction because the pair energy at $r=2\sigma$ is lower than that at $r=\sigma$, in part because the relatively narrow repulsive barrier has already decayed by $r=2\sigma$. Unlike for clusters, the chain-forming potential is net repulsive, so that the lowest-energy option for the test particle is not to attach to the dimer at all. However, in a bulk system, sufficiently high pressures induce particle association.
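The heat-map construction itself is straightforward: for a dimer fixed at separation $\sigma$, the total pairwise energy felt by a test particle is evaluated on a two-dimensional grid. The sketch below uses a toy square-well pair potential purely as a stand-in for the optimized interactions; grid extent and resolution are arbitrary choices:

```python
import math

# Energy landscape around a dimer (particles at (-0.5, 0) and (+0.5, 0),
# i.e., bond length sigma = 1) as seen by a test particle on an n x n grid.

def landscape(u_pair, extent=3.0, n=61):
    dimer = [(-0.5, 0.0), (0.5, 0.0)]
    grid = []
    for iy in range(n):
        y = -extent + 2.0 * extent * iy / (n - 1)
        row = []
        for ix in range(n):
            x = -extent + 2.0 * extent * ix / (n - 1)
            row.append(sum(u_pair(math.hypot(x - px, y - py))
                           for px, py in dimer))
        grid.append(row)
    return grid

def u_toy(r):
    """Hard core plus a narrow attractive well -- illustrative only."""
    if r < 1.0:
        return 100.0
    if r < 1.1:
        return -1.0
    return 0.0

grid = landscape(u_toy)
# end-attachment site near (1.5, 0): bonded to the near dimer particle only;
# middle-attachment site near (0, 0.9): bonded to both particles at once,
# so a bare attractive well (no barrier) favors compact middle-attachment
```

With this toy well, the middle site is lower in energy than the end site, mirroring the generic argument that compact packing wins unless a repulsive barrier penalizes it.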
Note that the preceding preference for end-attachment over middle-attachment persists as particles are added to the chain; that is, there is no energetic mechanism for controlling the length of the chain inherent to the potential itself. While the above analysis is limited in that potentially important effects (e.g., the impact of surrounding aggregates on the energetics, the role of entropy, etc.) are omitted, this simplified model lends insights into how the length scales of the attractive well and the repulsive barrier might influence self-assembly. In particular, the position of the repulsive barrier controls the size of the compact clusters, in keeping with prior work.~\cite{RyanIC} Furthermore, the relatively narrow repulsive barrier in the string-forming potential promotes end-attachment. \begin{figure} \includegraphics[width=3.37in,keepaspectratio]{Figure4.pdf} \caption{Two-dimensional potential energy landscape around a dimer as viewed by a test particle using the optimized potential for (a) $N_{\text{tgt}} = 4$ compact clusters at $\eta$ = 0.025 and (b) chainlike clusters at $\eta = 0.15$. The $X$ and $Y$ axes (in units of $\sigma$) denote $x$ and $y$ coordinates of the test particle while the color bar is in units of $k_{B}T$.} \label{fgr:heatmap} \end{figure} \section{Universal Potential} \label{sec:UP} \begin{figure} \includegraphics[width=3.37in,keepaspectratio]{Figure5.pdf} \caption{(solid) Optimized potential for strings at $\eta = 0.05$. (dotted) The fit of Eqs.~\ref{eqn:Potn}--\ref{eqn:harmonic2} to the preceding potential after the potential has been truncated at $r = 2\sigma$ and shifted so that $U(r = 2\sigma)=0$.
The optimal parameters are $\beta \varepsilon_{R} = 6.76$, $\delta_{A}=0.5$, and $\delta_{R}=0.5$.} \label{fgr:cut_potn} \end{figure} Comparing the pair potentials optimized for compact clusters to those optimized for chains, the common features are an attractive well at short separations and an outer repulsive barrier with a shorter range than that characteristic of the Yukawa potential routinely used in SALR potentials. As noted in Sect.~\ref{subsec:IO_HeatMap}, the location of the repulsive barrier (which is controlled by the widths of the attractive well and the repulsive barrier) is one of the key parameters in promoting either end or middle attachment. Motivated by these basic features shown in Figs.~\ref{fgr:RE_cluster} and~\ref{fgr:RE_string_potn}, we propose a simple and tunable pair potential form that favors various equilibrium structures, from disordered monomeric fluid to thin particle strings to compact particle clusters. Specifically, we consider the sum of a steep WCA repulsion at short distances and two half-harmonic potentials that mimic the attractive well and the repulsive barrier. \begin{equation} \label{eqn:Potn} U(r) = \Phi_{\text{wca}}(r)+\Phi_{\text{hp}_{1}}(r)+\Phi_{\text{hp}_{2}}(r) \end{equation} The three sub-potentials are defined as \begin{equation} \label{eqn:WCA1} \begin{array}{ll} \Phi_{\text{wca}}(r) = \left\{ \begin{array}{ll} 4 {\varepsilon}_{\text{wca}}\bigg[ \bigg(\dfrac{\sigma}{r} \bigg)^{2\alpha} - \bigg( \dfrac{\sigma}{r} \bigg)^{\alpha} \bigg] + {\varepsilon}_{\text{wca}} & r \leq \sigma \\ [15pt] 0 & r > \sigma \end{array} \right. \end{array} \end{equation} \begin{equation} \label{eqn:harmonic1} \begin{array}{ll} \Phi_{\text{hp}_{1}}(r) = \left\{ \begin{array}{ll} 0 & r < \sigma \\ [15pt] \varepsilon_{\text{R}} \bigg[ 1 - \bigg(\dfrac{r - w_{1}}{\delta_{A}}\bigg)^2 \bigg] & \sigma \leq r \leq w_{1} \\ [15pt] 0 & r > w_{1} \\ [15pt] \end{array} \right.
\end{array} \end{equation} \begin{equation} \label{eqn:harmonic2} \begin{array}{ll} \Phi_{\text{hp}_{2}}(r) = \left\{ \begin{array}{ll} 0 & r < w_{1} \\ [15pt] \varepsilon_{\text{R}} \bigg[ 1 - \bigg(\dfrac{r - w_{1}}{\delta_{R}}\bigg)^2 \bigg] & w_{1} \leq r \leq w_{2} \\ [15pt] 0 & r > w_{2} \\ [15pt] \end{array} \right. \end{array} \end{equation} where $\beta \varepsilon_{\text{wca}} = 1.5$, $\alpha = 12$, $w_{1} = \sigma + \delta_{A}$, and $w_{2} = \sigma + \delta_{A} + \delta_{R}$. The remaining adjustable parameters ($\beta \varepsilon_{R}, \delta_{A}, \delta_{R}$) are determined by fitting the above form to the optimized potential that promotes self-assembly of strings at $\eta = 0.05$. This particular potential is used as the reference because strings are generally more challenging to self-assemble than compact clusters from an isotropic potential, and $\eta = 0.05$ is the packing fraction at which the designed potential resulted in strings with the smallest fractal dimension. To simplify the reference, the optimized potential is truncated beyond $r = 2\sigma$ and vertically shifted so that $U(r = 2\sigma)=0$. As shown in Fig.~\ref{fgr:cut_potn}, the resulting fit approximates the short-ranged ($r \leq 2\sigma$) reference potential well with $\beta \varepsilon_{R} = 6.76$, $\delta_{A}=0.5$, and $\delta_{R}=0.5$. To avoid discontinuities in the force profile, the above potential is weakly smoothed using a successive two-point averaging scheme where, beyond $r = \sigma$, $U(r_i)$ is twice replaced by the average of $U(r_{i-1})$ and $U(r_{i+1})$.
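A tabulated implementation of the model potential and the smoothing step takes only a few lines. The sketch below uses the fitted parameters quoted above; the grid spacing and range are our own choices, and for brevity the entire table (not only $r > \sigma$) is smoothed:

```python
def u_model(r, eps_wca=1.5, alpha=12, eps_r=6.76, d_a=0.5, d_r=0.5, sigma=1.0):
    """Model potential: steep WCA core plus two half-harmonics (well + barrier)."""
    w1 = sigma + d_a
    w2 = sigma + d_a + d_r
    u = 0.0
    if r <= sigma:                      # generalized WCA core, exponent alpha
        sra = (sigma / r) ** alpha
        u += 4.0 * eps_wca * (sra ** 2 - sra) + eps_wca
    if sigma <= r <= w1:                # rising half-harmonic (0 at sigma)
        u += eps_r * (1.0 - ((r - w1) / d_a) ** 2)
    elif w1 < r <= w2:                  # falling half-harmonic (0 at w2)
        u += eps_r * (1.0 - ((r - w1) / d_r) ** 2)
    return u

def smooth_twice(table):
    """Two passes of two-point averaging to soften force discontinuities."""
    for _ in range(2):
        table = ([table[0]] +
                 [0.5 * (table[i - 1] + table[i + 1])
                  for i in range(1, len(table) - 1)] +
                 [table[-1]])
    return table

dr = 0.001
rs = [0.8 + dr * i for i in range(1501)]   # tabulate from 0.8 to 2.3 sigma
u_tab = smooth_twice([u_model(r) for r in rs])
# contact value u(sigma) = eps_wca = 1.5; barrier peak u(w1) = eps_r = 6.76
```

A table of this kind can then be supplied to the simulation engine as a tabulated pair interaction.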
\subsection{Description of Morphologies} \label{subsec:morphologies} In this section, we explore the effects of tuning the ranges of attraction ($\delta_{A}$) and repulsion ($\delta_{R}$) in the model pair potential introduced in Sect.~\ref{sec:UP}, while holding $\beta \varepsilon_{R}=6.76$ constant at a packing fraction of $\eta = 0.15$ (i.e., the lowest value of $\eta$ for which we observed the percolated string morphologies described below). Based on the discussion of Sect.~\ref{subsec:IO_HeatMap}, we expect that modifying $\delta_{A}$ and $\delta_{R}$ will bias the potential toward favoring assembly of either strings or compact clusters. Fig.~\ref{fgr:Snapshots}a shows four representative potentials where $\delta_{A} = 0.2$ and $\delta_{R} = 0.5, 0.7, 1.0, 2.0$. For the family of potentials where only $\delta_{R}$ is varied while $\delta_{A} = 0.2$ is constant, we observe four broad classes of structures in molecular dynamics simulations, shown in Fig.~\ref{fgr:Snapshots}b-e and with corresponding CSDs and nearest neighbour distributions shown in Figs.~\ref{fgr:CSD}a and b, respectively. \begin{figure} \includegraphics[width=3.37in,keepaspectratio]{Figure6.pdf} \caption{(a) Proposed model potential given by Eq.~\ref{eqn:Potn} for $\delta_{R}=0.5, 0.7, 1.0, 2.0$ at fixed $\delta_A=0.2$ and $\beta\varepsilon_{R} = 6.76$. Snapshots of assembled structures obtained via molecular dynamics simulations using the potentials shown in panel a, including (b) short strings ($\delta_{R} = 0.5$), (c) percolated chains ($\delta_{R} = 0.7$), (d) crystalline clusters ($\delta_{R} = 1.0$), and (e) Bernal spirals ($\delta_{R} = 2.0$).} \label{fgr:Snapshots} \end{figure} \begin{enumerate} \item \textbf{Monomers (M).} A fluid of particles that remain in a dispersed state results when $\delta_{A} = 0.2$ and $\delta_{R} = 0.3$. Quantitatively, a state is defined to be monomeric if the fraction of monomers (cluster size $n = 1$) in the CSD exceeds 50\%.
The CSD for this state point (Fig.~\ref{fgr:CSD}a) shows that 22\% of the aggregates are dimers, while 65\% are monomers. Correspondingly, the nearest neighbour histogram (Fig.~\ref{fgr:CSD}b) shows that a significant majority of the particles have zero or one nearest neighbour. \item \textbf{Strings (S and SP).} Single-stranded stringlike particle assemblies are obtained at $\{\delta_{A}, \delta_{R}\}$ values of $\{0.2, 0.5\}$ and $\{0.2, 0.7\}$. We distinguish between shorter stringlike objects (S) and percolated networks of strings (SP) on the basis of the percolation analysis described in Sect.~\ref{subsec:StructuralAnalysis}. Snapshots of short strings ($\delta_{R}=0.5$) and percolated chains ($\delta_{R}=0.7$) are depicted in Figs.~\ref{fgr:Snapshots}b and c, respectively. Stringlike structures, regardless of whether percolated, have predominantly two bonds per particle. The primary difference upon transitioning from shorter strings to a percolated porous network is the growth in the number of junctions, imparting greater connectivity to the branches. Thus, the corresponding nearest neighbour distribution for the percolated network shows a higher value for $N_{nn} = 3$ as compared to that of strings; see Fig.~\ref{fgr:CSD}b. For short strings, we confirm that the aggregates are elongated on the basis of visual inspection and the $d_f$. As with the pair potential optimized for strings at $\eta=0.15$ (Sect.~\ref{subsec:IO_Strings}), for $\{\delta_{A}, \delta_{R}\} = \{0.2, 0.5\}$ we observe two distinct regimes of $d_f$ as a function of aggregate size $n$: for $n \leq 11$, $d_f=1.35$, and for larger aggregates $d_f=1.9$. More generally, we identify structures as stringlike on the basis of both a peak in $P(N_{nn})$ at $N_{nn}=2$ and a $d_f$ in the range of $1-1.5$ for small ($n \lesssim 10$) objects and a $d_f$ around $1.9-2.0$ for the longer chains.
Compared to the results presented in Sect.~\ref{subsec:IO_Strings}, there are markedly fewer compact aggregates in coexistence with the strings: 1\% of aggregates for the short strings and 0.5\% of aggregates for the percolated strings are compact. However, in keeping with the inversely designed potentials, there is no apparent size-specificity for the strings. The CSDs in Fig.~\ref{fgr:CSD}a show a monotonically decreasing probability with aggregate size for the unpercolated structures. To the best of our knowledge, this is the first demonstration of the self-assembly and stabilization of spherical particles into a three-dimensional open, porous network of \textit{single-stranded} chains via an isotropic potential at nonzero temperature. \item \textbf{Clusters (C).} Well-defined compact clusters are observed (Fig.~\ref{fgr:Snapshots}d) using the proposed potential at $\{\delta_{A}, \delta_{R}\}=\{0.2, 1.0\}$. Clusters are identified by a sharp maximum in the CSD at a cluster size $n > 1$ and by a fractal dimension of $\lesssim$ 3 owing to their compact and isotropic shape. At this state point, the clusters are tetramers; see the prominent peaks in $P(n)$ at $n=4$ and in $P(N_{nn})$ at $N_{nn}=3$ in Fig.~\ref{fgr:CSD}a,b. The corresponding $d_f$ is 2.5. Instead of a fluid of clusters as studied in Sect.~\ref{subsec:IO_Clusters}, the self-assembled clusters crystallize onto a lattice under these conditions. \item \textbf{Cylindrical Spirals (CS).} Elongated, cylindrical spirals of colloids are observed at $\{\delta_{A}, \delta_{R}\}=\{0.2, 2.0\}$. These are multi-stranded, percolated networks of anisotropic aggregates. A special case of these structures with three helical chains is commonly referred to as Bernal spirals,~\cite{Bartlett, Wales, OneD, Bolhuis} in which particles have six nearest neighbours; a representative snapshot is shown in Fig.~\ref{fgr:Snapshots}e. Accordingly, Bernal spirals are identified as percolated structures with a peak in $P(N_{nn})$ at $N_{nn}=6$.
\end{enumerate} \begin{figure} \includegraphics[width=3.37in,keepaspectratio]{Figure7.pdf} \caption{Using the model potential of Eq.~\ref{eqn:Potn} at $\eta = 0.15$, $\delta_A = 0.2$, and different values of $\delta_R$, simulated (a) fraction of clusters containing $n$ particles, $P(n)$, and (b) distribution of the average number of nearest neighbours of each particle.} \label{fgr:CSD} \end{figure} \begin{figure} \includegraphics[width=3.37in,keepaspectratio]{Figure8.pdf} \caption{Morphological phase diagram for the model potential of Eq.~\ref{eqn:Potn} as a function of $\delta_{A}$ and $\delta_{R}$ at $\eta = 0.15$ and $\beta\varepsilon_{R} = 6.76$. The structures observed include monomers M (square), short strings S (diamonds), percolated strings SP (circles), crystalline clusters C (triangles), and Bernal spirals CS (stars) discussed in the text. The shaded region depicts phase space where end-to-end joining of monomers, needed for ``thin'' strings, is favored over more compact aggregate packings.} \label{fgr:PhaseDiagram} \end{figure} \subsection{Phase Diagram} \label{subsec:phase_diagram} In Fig.~\ref{fgr:PhaseDiagram}, we further explore how the morphologies identified above emerge in this model as a function of $\delta_R$ and $\delta_A$. For the conditions studied, note that the quantity $\delta_{A} + \delta_{R}$ appears to be the primary determinant of which self-assembled structures are observed. When $\delta_{A} + \delta_{R} \lesssim 0.5$, the particles form a fluid of well-dispersed monomers (M). For progressively larger $\delta_{A} + \delta_{R}$, short single-stranded strings (S) of particles form, followed by interconnected percolating networks of strings (SP). For sufficiently low attractive ranges ($\delta_A \leq 0.2$), the physical bonds are labile and percolated strings are fluidic, continually breaking and reforming flexible uniaxial structures.
However, for $\delta_A > \delta_R$, the increased attraction impedes dissociation of the particles, especially at the junctions. As a result, though the chains still fluctuate in local order, larger length-scale motions are suppressed. When $\delta_R + \delta_A \gtrsim 1$, the space-spanning stringy particle network morphs into a crystalline arrangement of compact particle clusters (C). Consistent with Fig.~\ref{fgr:RE_cluster}a, the clusters grow with increasing $\delta_A$. For $\delta_A \lesssim 0.6$, the clusters are tetramers, but for larger values of $\delta_A$, aggregation numbers of $5-7$ are observed. The corresponding $d_f$ values range from $2.5-2.6$. Increasing $\delta_R + \delta_A$ further compels the spherical clusters to coalesce, eventually resulting in kinetically arrested percolated networks of cylindrical structures (CS), of which the Bernal spiral is a special case indicated in Fig.~\ref{fgr:PhaseDiagram}. As detailed above, the interplay between the length scales of the attraction and repulsion in this model pair potential results in a rich variety of self-assembled structures. The analysis in Sect.~\ref{subsec:IO_HeatMap} suggested that comparing the energy for a test particle to bond to either end of a dimer versus the mid-point is a helpful, though simplistic, predictor of whether the clusters formed by a given potential will be stringy or compact, respectively. Here, we also find that this analysis helps to understand why $\delta_R + \delta_A$ is an important parameter in determining the observed morphologies. Specifically, we compare the energetics of end-attachment ($U_{\text{end}} \equiv U(r=\sigma) + U(r=2\sigma)$) and middle-attachment ($U_{\text{mid}} \equiv U(r=\sigma) + U(r=\sigma)$) of a test particle to an isolated dimer. Regions of phase space where end-attachment is preferred ($U_{\text{end}} \leq U_{\text{mid}}$) are shaded in Fig.~\ref{fgr:PhaseDiagram}.
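Written out using these definitions, the comparison reduces to a single term, since one contact contribution $U(r=\sigma)$ cancels:
$$U_{\text{end}}-U_{\text{mid}}=U(r=2\sigma)-U(r=\sigma),$$
so end-attachment is preferred precisely when the pair potential at the second-neighbour distance $2\sigma$ lies at or below its value at contact. If, as the parameterization suggests, the attractive well and repulsive barrier together extend out to a distance of order $\sigma(1+\delta_{A}+\delta_{R})$, then the point $r=2\sigma$ first falls inside the repulsive barrier when $\delta_{A}+\delta_{R}\approx 1$, consistent with the crossover between stringlike and compact morphologies in Fig.~\ref{fgr:PhaseDiagram}.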
Note that when $\delta_{A}+\delta_{R} \lesssim 1.0$, end-attachment to a dimer is preferred; when $\delta_{A}+\delta_{R}$ is larger, middle-attachment is favored. Interestingly, $\delta_{A}+\delta_{R} \approx 1.0$ also approximately corresponds to the crossover between percolated string networks and compact clusters observed in the simulations. Thus, though based on a simplistic energetic analysis, we can gain insights into how the length scales of the short-range attractions and longer-range repulsions in a pair potential can favor the formation of stringlike versus compact self-assembled structures. \section{Conclusions} We used an inverse design strategy based on RE optimization to determine and study isotropic potentials capable of driving one-component systems of particles to self-assemble into compact versus linear stringlike clusters. Simulations using particles interacting via the optimized potentials demonstrated spontaneous formation of the targeted morphologies, though successful design of specific aggregation numbers was only achievable for compact clusters, a limitation that might be expected for isotropic potentials. The simplicity of the optimized potentials for these structures is remarkable given that prior computational efforts to arrive at stringlike clusters in three dimensions have employed directional bonding or anisotropy of the colloidal building blocks to control assembly. Motivated by the RE optimized potentials, a universal potential with a simple functional form is proposed that is capable of assembling a rich variety of complex architectures: monomeric fluid, fluid of short chain-like structures, percolated networks of strings, crystalline assemblies of compact clusters, and percolated thick cylindrical structures including Bernal spirals. 
The proposed model potential is a combination of a short-range attraction at contact, which can be realized by polymer-mediated depletion in chemistry-matched systems (for smaller values of $\delta_{A}$), and a medium-range repulsive barrier that approximately mimics that of suspensions of uncharged brush-grafted nanoparticles.~\cite{Denton} Polymer depletants that are responsive to external stimuli (e.g., pH,~\cite{pH_polymer} temperature,~\cite{thermo_polymer} light,~\cite{photo_polymer} and other fields~\cite{field_polymer}) represent another interesting avenue to tune $\delta_{A}$ to switch between different morphologies. More generally, these results provide qualitative insights into the rich morphological phase diagrams that can potentially be realized in colloidal systems with (approximately) isotropic interactions comprising competing attractive and repulsive components. \label{sec:conclusions} \section*{Acknowledgments} The authors thank Sanket Kandulkar and Michael P. Howard for valuable discussions and feedback. This research was primarily supported by the National Science Foundation through the Center for Dynamics and Control of Materials: an NSF MRSEC under Cooperative Agreement No. DMR-1720595 as well as the Welch Foundation (F-1696). We acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources.
\section{Introduction} A group $G$ is said to have restricted centralizers if for each $g$ in $G$ the centralizer $C_G(g)$ either is finite or has finite index in $G$. This notion was introduced by Shalev in \cite{shalev}, where he showed that a profinite group with restricted centralizers is finite-by-abelian-by-finite. Note that a finite-by-abelian profinite group is necessarily abelian-by-finite, so Shalev's theorem essentially states that a profinite group with restricted centralizers is abelian-by-finite. In the present article we handle profinite groups with restricted centralizers of word-values. Given a word $w$ and a group $G$, we denote by $G_w$ the set of all values of $w$ in $G$ and by $w(G)$ the subgroup generated by $G_w$. In the case where $G$ is a profinite group, $w(G)$ denotes the subgroup topologically generated by $G_w$. Recall that multilinear commutator words are words which are obtained by nesting commutators, but always using different variables. Such words are also known under the name of outer commutator words and are precisely the words that can be written in the form of multilinear Lie monomials. The main purpose of this paper is to prove the following theorem. \begin{theorem}\label{main} Let $w$ be a multilinear commutator word and $G$ a profinite group in which all centralizers of $w$-values are either finite or open. Then $w(G)$ is abelian-by-finite. \end{theorem} From the above theorem we can deduce the following results. \begin{corollary}\label{openT} Under the hypothesis of Theorem \ref{main}, the group $G$ has an open subgroup $T$ such that $w(T)$ is abelian. In particular $G$ is soluble-by-finite. \end{corollary} \begin{corollary}\label{profinite-finite} Let $w$ be a multilinear commutator word and $G$ a profinite group in which every nontrivial $w$-value has finite centralizer. Then either $w(G)=1$ or $G$ is finite. \end{corollary} The proof of Theorem \ref{main} is fairly complicated.
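For instance, the simplest multilinear commutator words, in two, three and four variables respectively, are
$$[x_1,x_2],\qquad [[x_1,x_2],x_3],\qquad [[x_1,x_2],[x_3,x_4]],$$
whereas words such as $[[x_1,x_2],x_1]$ or $x_1^2$ are not multilinear commutators, since a variable occurs more than once. For $w=[x_1,x_2]$ the set $G_w$ consists of all commutators of $G$ and $w(G)=G'$ is the commutator subgroup.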
We will now briefly describe some of the tools employed in the proof. Recall that a group $G$ is an FC-group if the centralizer $C_G(g)$ has finite index in $G$ for each $g\in G$. Equivalently, $G$ is an FC-group if each conjugacy class $g^G$ is finite. A group $G$ is a BFC-group if all conjugacy classes in $G$ are finite and have bounded size. A famous theorem of B. H. Neumann says that the commutator subgroup of a BFC-group is finite \cite{bhn}. Shalev used this to show that a profinite FC-group has finite commutator subgroup \cite{shalev}. In Section \ref{sec:FC} we generalize Shalev's result by showing that if $w$ is a multilinear commutator word and $G$ is a profinite group in which all $w$-values are FC-elements, then $w(G)$ has finite commutator subgroup. In fact, we establish a much stronger result involving the marginal subgroup introduced by P. Hall (see Section \ref{sec:FC} for details). The results of Section \ref{sec:FC} enable us to reduce Theorem \ref{main} to the case where all $w$-values have finite order. A famous result by Zelmanov says that periodic profinite groups are locally finite \cite{z:periodic}. Recall that a group is said to locally have some property if all its finitely generated subgroups have that property. There is a conjecture stating that for any word $w$ and any profinite group $G$ in which all $w$-values have finite order, the verbal subgroup $w(G)$ is locally finite. The conjecture is known to be correct in a number of particular cases (see \cite{Sh01,KS, DMS-2015}). In Section \ref{sec:pro-p} we obtain another result in this direction. Namely, let $p$ be a prime, $w$ a multilinear commutator word and $G$ a profinite group in which all $w$-values have finite $p$-power order. We prove that the abstract subgroup generated by all $w$-values is locally finite. The proof of the above result relies on the techniques created by Zelmanov in his solution of the Restricted Burnside Problem \cite{Z}. 
While the result falls short of proving that $w(G)$ is locally finite, it will be shown to be sufficient for the purposes of the present paper. Indeed, in Section \ref{sec:locfin} we prove that if a profinite group $G$ satisfies the hypotheses of Theorem \ref{main} and has all $w$-values of finite order, then $w(G)$ is locally finite. This is achieved by combining results of previous sections with the ones obtained in \cite{KS} and \cite{DMS-2015}. In Section \ref{sec:final} we finalize the proof of Theorem \ref{main}. At this stage, without loss of generality, we can assume that $w(G)$ is locally finite and at least one $w$-value has finite centralizer. With these assumptions, a whole range of tools (in particular, those using the classification of finite simple groups) becomes available. We appeal to Wilson's theorem on the structure of compact torsion groups, which implies that in our situation $w(G)$ has a finite series of closed characteristic subgroups in which each factor either is a pro-$p$ group for some prime $p$ or is isomorphic (as a topological group) to a Cartesian product of finite simple groups. Recall that Ore's famous conjecture, stating that every element of a nonabelian finite simple group is a commutator, was proved in \cite{lost}. It follows that for each multilinear commutator word $w$ every element of a nonabelian finite simple group is a $w$-value. If a group $K$ is isomorphic to a Cartesian product of nonabelian finite simple groups and has restricted centralizers of $w$-values, then actually all centralizers of elements in $K$ are either finite or of finite index and so, by Shalev's theorem \cite{shalev}, $K$ is finite. We use this observation to conclude that under our assumptions the verbal subgroup is (locally soluble)-by-finite. Finally, an application of the results on FC-groups obtained in Section \ref{sec:FC} completes the proof of Theorem \ref{main}.
The next section contains a collection of mostly well-known auxiliary lemmas which are used throughout the paper. In Section \ref{sec:comb} we describe combinatorial techniques developed in \cite{GuMa, DMS-2015,DMS-revised} for handling multilinear commutator words. We also prove some new lemmas which are necessary for the purposes of the present article. Throughout the paper, unless explicitly stated otherwise, subgroups of profinite groups are assumed closed. \section{Auxiliary lemmas} Multilinear commutator words are words which are obtained by nesting commutators, but always using different variables. More formally, the word $w(x) = x$ in one variable is a multilinear commutator; if $u$ and $v$ are multilinear commutators involving different variables then the word $w=[u,v]$ is a multilinear commutator, and all multilinear commutators are obtained in this way. An important family of multilinear commutator words is formed by the so-called derived words $\delta_k$, on $2^k$ variables, defined recursively by $$\delta_0=x_1,\qquad \delta_k=[\delta_{k-1}(x_1,\ldots,x_{2^{k-1}}),\delta_{k-1}(x_{2^{k-1}+1},\ldots,x_{2^k})].$$ Of course $\delta_k(G)=G^{(k)}$ is the $k$-th term of the derived series of $G$. We recall the following well-known result (see for example \cite[Lemma 4.1]{Sh00}). \begin{lemma}\label{lem:delta_k} Let $G$ be a group and let $w$ be a multilinear commutator word on $n$ variables. Then each $\delta_n$-value is a $w$-value. \end{lemma} The following is Lemma 4.2 in \cite{Sh00}. \begin{lemma}\label{lem:4.2} Let $w$ be a multilinear commutator word and $G$ a soluble group in which all $w$-values have finite order. Then the verbal subgroup $w(G)$ is locally finite. \end{lemma} If $x$ is an element of a group $G$, we write $x^G$ for the conjugacy class of $x$ in $G$. More generally, if $S$ is a subset of $G$, we write $S^G$ for the set of conjugates of elements of $S$.
On the other hand, if $K$ is a subgroup of $G$, then $K^G$ denotes the normal closure of $K$ in $G$, that is, the subgroup generated by all conjugates of $K$ in $G$, with the usual convention that if $G$ is a topological group then $K^G$ is a closed subgroup. Recall that if $G$ is a group, $a\in G$ and $H$ is a subgroup of $G$, then $[H,a]$ denotes the subgroup of $G$ generated by all commutators of the form $[h,a]$, where $h\in H$. It is well-known that $[H, a]$ is normalized by $a$ and $H$. We will denote by $\Delta(G)$ the set of FC-elements of $G$, i.e. $$\Delta(G)=\{ x\in G \mid |x^G| < \infty\}.$$ Obviously $\Delta(G)$ is a normal subgroup of $G$. Note that if $G$ is a profinite group, $\Delta(G)$ need not be closed. \begin{lemma} \label{2.3b} Let $G$ be a group. For every $x\in \Delta(G)$ the subgroup $[\Delta(G),x]^G$ is finite. \end{lemma} \begin{proof} Let $\Delta=\Delta(G)$. Note that $\Delta'$ is locally finite (see \cite[Section 14.5]{rob}). The subgroup $[\Delta,x]$ is generated by finitely many commutators $[y,x]$ with $y \in \Delta$, since $[y,x]=(x^{-1})^yx$ depends only on the conjugate $x^y$ and $x$ has only finitely many conjugates. Hence $[\Delta,x]$ is finite. Further, each commutator $[y,x]$ is an FC-element and so $C_G([\Delta,x])$ has finite index in $G$. Consequently, $[\Delta,x]^G$ is a product of finitely many conjugates of $[\Delta,x]$. The conjugates of $[\Delta,x]$ normalize each other, so $[\Delta,x]^G$ is finite. \end{proof} \begin{lemma}\label{114} Let $G$ be a locally nilpotent group containing an element with finite centralizer. Suppose that $G$ is residually finite. Then $G$ is finite. \end{lemma} \begin{proof} Choose $x\in G$ such that $C_G(x)$ is finite. Let $N$ be a normal subgroup of finite index such that $N\cap C_G(x)=1$. Assume that $N\neq1$ and let $1\neq y\in N$. The subgroup $\langle x,y\rangle$ is nilpotent and so the center of $\langle x,y\rangle$ has nontrivial intersection with $N$.
This is a contradiction since $N\cap C_G(x)=1$. \end{proof} Lemma 1.6.1 in \cite{Kh} states that if $G$ is a finite group, $N$ is a normal subgroup of $G$ and $x$ an element of $G$, then $|C_{G/N}(xN)| \le |C_G(x)|$. We will need a version of this lemma for locally finite groups. \begin{lemma}\label{KK} Let $G$ be a locally finite group and $x$ an element of $G$ such that $C_G(x)$ is finite of order $m$. If $N$ is a normal subgroup of $G$, then $|C_{G/N}(xN)| \le m$. \end{lemma} \begin{proof} Arguing by contradiction, assume that $C_{G/N}(xN)$ contains $m+1$ pairwise distinct elements $b_1N, \dots , b_{m+1}N$. Let $K=\langle x, b_1, \dots , b_{m+1} \rangle$ and $N_0=N \cap K$. Note that $K$ is a finite group and $C_{K/N_0} (x N_0)$ contains the $m+1$ distinct elements $b_1N_0, \dots , b_{m+1}N_0$. This contradicts Lemma 1.6.1 in \cite{Kh}. \end{proof} \begin{lemma}\label{sol1} Let $d,r,s$ be positive integers. Let $G$ be a soluble group of derived length $d$ generated by a set $X$ such that every element in $X$ has finite order dividing $r$ and has at most $s$ conjugates in $G$. Then $G$ has finite exponent bounded by a function of $d,r,s$. \end{lemma} \begin{proof} The proof is by induction on the derived length of $G$. If $G$ is abelian then $G$ has exponent dividing $r$. Note that $G'$ is generated by all conjugates of the set $\{[y,z]|y,z\in X\}$. As $y,z\in X$ have at most $s$ conjugates in $G$ it follows that $[y,z]$ has at most $s^2$ conjugates. Note that the center of $\langle y,z\rangle$ coincides with $C_{\langle y,z\rangle}(y)\cap C_{\langle y,z\rangle}(z)$ so it has index at most $s^2$, thus the order of the derived subgroup of $\langle y,z\rangle$ is bounded by a function of $s$ by Schur's theorem \cite[10.1.4]{rob}. By induction, the exponent of $G'$ is finite and bounded by a function of $d,r,s$. As $G/G'$ has exponent at most $r$, the result follows. \end{proof} Throughout the paper, we will use without explicit references the following result. 
\begin{lemma}\label{abelian-by-finite} Let $G$ be a finite-by-abelian profinite group. Then $G$ is central-by-finite. \end{lemma} \begin{proof} Let $T$ be a finite normal subgroup of $G$ such that $G/T$ is abelian, and let $N$ be an open normal subgroup of $G$ such that $N\cap T=1$; such a subgroup exists because $T$ is finite and the intersection of the open normal subgroups of $G$ is trivial. Since $G'\le T$, we have $[N,G]\le N\cap G'\le N\cap T=1$, and so $N$ is central in $G$. \end{proof} \section{Combinatorics of commutators}\label{sec:comb} We will need some machinery concerning combinatorics of commutators, so we now recall some notation from the paper \cite{DMS-revised}. Throughout this section, $w=w(x_1,\dots,x_n)$ will be a fixed multilinear commutator word. If $A_1,\dots,A_n$ are subsets of a group $G$, we write $$\mathcal{X}_w(A_1,\dots,A_n)$$ to denote the set of all $w$-values $w(a_1,\dots,a_n)$ with $a_i\in A_i$. Moreover, we write $w(A_1, \dots , A_n)$ for the subgroup $\langle\mathcal{X}_w(A_1,\dots,A_n)\rangle$. Note that if every $A_i$ is a normal subgroup of $G$, then $w(A_1, \dots , A_n)$ is normal in $G$. Let $I$ be a subset of $\{1,\dots,n\}$. Suppose that we have a family $A_{i_1}, \dots , A_{i_s}$ of subsets of $G$ with indices running over $I$ and another family $B_{l_1}, \dots , B_{l_t}$ of subsets with indices running over $\{1, \dots ,n \} \setminus I$. We write $$w_I(A_i ; B_l)$$ for $w(X_1, \dots , X_n)$, where $X_k=A_k$ if $k \in I$, and $X_k=B_k$ otherwise. On the other hand, whenever $a_i\in A_i$ for $i\in I$ and $b_l\in B_l$ for $l\in \{1,\dots,n\}\setminus I$, the symbol $w_I(a_i;b_l)$ stands for the element $w(x_1, \dots , x_n)$, where $x_k=a_k$ if $k \in I$, and $x_k=b_k$ otherwise. The following lemmas are Lemma 2.4, Lemma 2.5 and Lemma 4.1 in \cite{DMS-revised}. \begin{lemma}\label{2.1-conjugates} Let $w=w(x_1, \dots, x_n)$ be a multilinear commutator word. Assume that $H$ is a normal subgroup of a group $G$. Let $g_1, \dots , g_n \in G$, $h \in H$ and fix $s \in \{1, \dots, n\}$.
Then there exist $y_j \in g_j^H$, for $j=1,\dots, n$, such that \begin{eqnarray*} w_{\{s\}}(g_sh; g_l)=w(y_1, \dots,y_n) w_{\{s\}}(h; g_l). \end{eqnarray*} \end{lemma} \begin{lemma}\label{2.2-bis} Let $G$ be a group and let $w=w(x_1, \dots, x_n)$ be a multilinear commutator word. Assume that $M, A_1,\dots,A_n$ are normal subgroups of $G$ such that for some elements $a_i\in A_i$, the equality $$w(a_1(A_1\cap M), \dots , a_n(A_n\cap M))=1$$ holds. Then for any subset $I$ of $\{1,\dots,n\}$ we have $$w_I(A_i\cap M ; a_l (A_l\cap M))=1.$$ \end{lemma} \begin{lemma}\label{M2} Let $G$ be a group and let $w=w(x_1, \dots, x_n)$ be a multilinear commutator word. Let $A_1,\dots,A_n$ and $M$ be normal subgroups of $G$. Let $I$ be a subset of $\{1, \dots ,n \}$. Assume that \[ w_J (A_i; A_l\cap M)=1\] for every proper subset $J$ of $I$. Suppose we are given elements $g_i \in A_i$ for $i \in I$ and elements $h_k \in A_k\cap M$ for $k \in \{1, \dots, n\}$. Then we have \[w_I(g_ih_i; h_l)=w_I(g_i;h_l).\] \end{lemma} \begin{lemma}\label{comb1} Let $w=w(x_1, \dots, x_n)$ be a multilinear commutator word. Assume that $T$ is a normal subgroup of a group $G$ and $a_1, \dots , a_n$ are elements of $G$ such that every element in $\mathcal{X}_w(a_1T,\dots,a_nT)$ has at most $m$ conjugates in $G$. Then every element in $T_w$ has at most $m^{2^n}$ conjugates in $G$. \end{lemma} \begin{proof} We will first prove the following statement: \smallskip {\noindent ($*$) Assume that for some $ g_1, \dots , g_n \in G$ every element in the set $\mathcal{X}_w(g_1T,\dots,g_nT)$ has at most $t$ conjugates in $G$, and let $s \in \{1, \dots, n\}$. Then every element of the form $w_{\{s\}}(h_s; g_lh_l)$, where $h_1,\dots,h_n\in T$, has at most $t^2$ conjugates.} \vskip8pt Choose an element $z=w_{\{s\}}(h_s; g_lh_l)$ as above. 
By Lemma \ref{2.1-conjugates} \begin{eqnarray*} w_{\{s\}}(g_sh_s; g_lh_l)=w(y_1, \dots,y_n) w_{\{s\}}(h_s; g_lh_l), \end{eqnarray*} where $y_j \in (g_jh_j)^T\subseteq g_jT$, for $j=1,\dots, n$. As both $w_{\{s\}}(g_sh_s; g_lh_l)$ and $w(y_1, \dots,y_n)$ lie in $\mathcal{X}_w(g_1T,\dots,g_nT)$, they have at most $t$ conjugates in $G$. Thus $$z=w(y_1, \dots,y_n)^{-1}w_{\{s\}}(g_sh_s; g_lh_l)$$ has at most $t^2$ conjugates in $G$. This proves ($*$). We will now prove that every element in $$\mathcal{X}_w(T,\dots,T,a_{i}T,\dots,a_nT)$$ has at most $m^{2^i}$ conjugates, by induction on $i$. The lemma will follow by taking $i=n$. If $i=1$ the statement is true by the hypotheses. So assume that $i\ge 2$ and every element in $\mathcal{X}_w(T,\dots,T,a_{i-1}T,\dots,a_nT)$ has at most $m^{2^{i-1}}$ conjugates. By applying ($*$) with $g_1=\dots=g_{i-1}=1$, $t=m^{2^{i-1}}$ and $s=i$ we get the result. \end{proof} \begin{lemma}\label{comb2} Let $w=w(x_1, \dots, x_n)$ be a multilinear commutator word. Assume that $H$ is a normal subgroup of a group $G$. Then there exists a positive integer $t_n$, depending only on $n$, such that for every $g_1, \dots , g_n \in G$ and $h_1, \dots , h_n \in H$, the $w$-value $w(g_1h_1,\dots,g_nh_n)$ can be written in the form $w(g_1h_1,\dots,g_nh_n)=ah$, where $a$ is a product of at most $t_n$ conjugates of elements of $\{g_1^{\pm 1}, \dots , g_n^{\pm 1}\}$ and $h\in H_w$. \end{lemma} \begin{proof} The proof is by induction on the number $n$ of variables appearing in $w$. If $n=1$ then $w=x$ and the result is true. If $n>1$, then $w$ has the form $w=[u,v]$, where $u=u(x_1,\dots,x_r)$ and $v=v(x_{r+1},\dots,x_n)$ are multilinear commutator words. By induction, $u(g_1h_1,\dots,g_rh_r)=a_1c_1$ and $v(g_{r+1}h_{r+1},\dots,g_nh_n)=a_2c_2$, where $a_1$ (resp. $a_2$) is a product of at most $t_r$ (resp. $t_{n-r}$) conjugates of elements of $S=\{g_1^{\pm 1}, \dots , g_n^{\pm 1}\}$, $c_1\in H_u$ and $c_2\in H_v$. By the standard commutator identities we have $$w(g_1h_1,\dots,g_nh_n)=[a_1c_1,a_2c_2]=[a_1,a_2c_2]^{c_1}[c_1,a_2c_2]=$$ $$([a_1,c_2][a_1,a_2]^{c_2})^{c_1}[c_1,c_2][c_1,a_2]^{c_2}=$$ $$[a_1,c_2]^{c_1}[a_1,a_2]^{c_2c_1}[c_1,a_2]^{c_2}[c_1,c_2]^{[c_1,a_2]^{c_2}},$$ where $[a_1,c_2]=a_1^{-1}a_1^{c_2}$ and $[a_1,a_2]=a_1^{-1}a_1^{a_2}$ are products of at most $2t_r$ conjugates of elements of $S$, $[c_1,a_2]=(a_2^{-1})^{c_1}a_2$ is a product of at most $2t_{n-r}$ conjugates of elements of $S$, and $[c_1,c_2]^{[c_1,a_2]^{c_2}}\in H_w$. So the result follows by taking $t_n$ to be the maximum of the set $\{4t_r+2t_{n-r}\mid r=1,\dots,n-1\}$. \end{proof} \section{Profinite groups in which $w$-values are FC-elements}\label{sec:FC} The famous theorem of B. H. Neumann says that the commutator subgroup of a BFC-group is finite \cite{bhn}. This was recently extended in \cite{DMS-BFC} as follows. Let $w$ be a multilinear commutator word and $G$ a group in which $|x^G|\le m$ for every $w$-value $x$. Then the derived subgroup of $w(G)$ is finite of order bounded by a function of $m$ and $w$. The case where $w=[x,y]$ was handled in \cite{dieshu}. In the present article we require a profinite (non-quantitative) version of the above result. We show that if $G$ is a profinite group in which all $w$-values are FC-elements, then the derived subgroup of $w(G)$ is finite. In fact we establish a stronger result, which uses the concept of marginal subgroup. Let $G$ be a group and $w=w(x_1,\dots,x_n)$ a word. The marginal subgroup $w^*(G)$ of $G$ corresponding to the word $w$ is defined as the set of all $x \in G$ such that $$w(g_1,\dots, x g_i,\dots,g_n)= w(g_1,\dots, g_i x ,\dots,g_n)=w(g_1,\dots,g_i,\dots,g_n)$$ for all $g_1,\dots,g_n \in G$ and $1 \le i \le n$. It is well known that $w^*(G)$ is a characteristic subgroup of $G$ and that $[w^*(G), w(G)]=1$. Note that marginal subgroups in profinite groups are closed. Let $S$ be a subset of a group $G$.
Define the $w^*$-residual of $S$ in $G$ to be the intersection of all normal subgroups $N$ such that $SN/N$ is contained in the marginal subgroup $w^*(G/N)$. For multilinear commutator words the $w^*$-residual of a normal subgroup has the following characterization. \begin{lemma}\label{ts} Let $w$ be a multilinear commutator word, $G$ a group and $N$ a normal subgroup of $G$. Then the $w^*$-residual of $N$ in $G$ is the subgroup generated by the elements $w(g_1, \dots, g_n)$ where at least one of $g_1, \dots, g_n$ belongs to $N$. \end{lemma} This follows from \cite[Theorem 2.3]{TS}. For the reader's convenience, we will give here a proof in the spirit of Section \ref{sec:comb}. \begin{proof} Let $N_i=\langle w(g_1, \dots, g_n)\mid g_1,\dots,g_n\in G{\textrm{ and }}g_i\in N\rangle$ and let $R=N_1N_2\dots N_n$. Clearly, if $M$ is a normal subgroup of $G$ such that $NM/M$ is contained in $w^*(G/M)$, then $N_i\le M$ for every $i=1,\dots, n$. Therefore $R$ is contained in the $w^*$-residual of $N$. On the other hand, it follows from Lemma \ref{M2} that if $N_i=1$, then $$w(g_1,\dots,g_ih,\dots,g_n)=w(g_1,\dots,g_i,\dots,g_n)$$ for every $g_1,\dots,g_n\in G$ and every $h$ in $N$. Thus we have $$w(g_1,\dots,g_ih,\dots,g_n)R=w(g_1,\dots,g_i,\dots,g_n)R$$ for every $i=1,\dots, n$, for every $g_1,\dots,g_n\in G$ and every $h\in N$. So $N/R$ is contained in $w^*(G/R)$. This implies the result. \end{proof} It follows from Lemma \ref{ts} that if $w$ is a multilinear commutator word and $N$ is a normal subgroup of a group $G$ which does not contain nontrivial $w$-values, then $N$ is contained in $w^*(G)$ and, in particular, it centralizes $w(G)$. Indeed, in this case, by Lemma \ref{ts}, the $w^*$-residual of $N$ in $G$ is trivial. A word $w$ is concise if whenever $G$ is a group such that the set $G_w$ is finite, it follows that also $w(G)$ is finite. Conciseness of multilinear commutators was proved by J.\,C.\,R. Wilson in \cite{jwilson} (see also \cite{GuMa}).
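To illustrate Lemma \ref{ts} in the simplest case, take $w=[x_1,x_2]$. Then $w^*(G)=Z(G)$ is the centre of $G$, $w(G)=G'$, and the $w^*$-residual of a normal subgroup $N$ of $G$ is
$$\langle [g_1,g_2] \mid g_1\in N \ \text{or} \ g_2\in N\rangle=[N,G],$$
which is indeed the smallest normal subgroup modulo which $N$ becomes central.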
\begin{lemma}\label{concise} Let $w$ be a multilinear commutator word, $G$ a profinite group and $N$ an open normal subgroup of $G$. Then the $w^*$-residual of $N$ is open in $w(G)$. \end{lemma} \begin{proof} Let $K$ be the $w^*$-residual of $N$. As $N/K$ is contained in $w^*(G/K)$ and it has finite index in $G/K$, we deduce that the set of $w$-values of $G/K$ is finite. It follows from the above result of Wilson that $w(G/K)$ is finite, as desired. \end{proof} As above, $\Delta(G)$ denotes the set of FC-elements of $G$. In what follows we will denote by $H$ the topological closure of $\Delta(G)$ in a profinite group $G$. The goal of this section is to prove the following theorem. \begin{theorem}\label{genN} Let $w$ be a multilinear commutator word, $G$ a profinite group and $T$ a normal subgroup of $G$ such that every $w$-value of $G$ contained in $T$ is an FC-element. Then the $w^*$-residual of $T$ has finite commutator subgroup. \end{theorem} It is straightforward that the $w^*$-residual of $G$ is precisely $w(G)$. Thus Theorem \ref{genN} has the following consequence. \begin{corollary}\label{profinite-FC} Let $w$ be a multilinear commutator word and $G$ a profinite group in which every $w$-value is an FC-element. Then $w(G)$ has finite commutator subgroup. \end{corollary} The key result of the remaining part of this section is the next proposition, from which Theorem \ref{genN} will be deduced. \begin{proposition}\label{X} Let $w=w(x_1,\dots,x_n)$ be a multilinear commutator word, $G$ a profinite group and $H$ the topological closure of $\Delta(G)$ in $G$. Assume that $A_1,\dots, A_n$ are normal subgroups of $G$ with the property that $$\mathcal{X}_w(A_1,\dots,A_n) \subseteq \Delta(G).$$ Then $[H , w(A_1,\dots,A_n)]$ is finite. \end{proposition} The following lemma can be seen as a development related to Lemma 2.4 in \cite{dieshu} and Lemma 4.5 in \cite{wie}.
\begin{lemma} \label{basic-light} Assume the hypotheses of Proposition \ref{X}, with $A_1,\dots,A_n$ being normal subgroups of $G$ with the property that $\mathcal{X}_w(A_1,\dots,A_n) \subseteq \Delta(G)$. Let $M$ be an open normal subgroup of $G$ and $a_i\in A_i$ for $i=1,\dots,n$. Then there exist elements $\tilde a_i\in a_i (A_i\cap M)$ and an open normal subgroup $\tilde M$ of $M$, such that the order of $$[H,w(\tilde a_1 (A_1\cap \tilde M),\dots,\tilde a_n (A_n\cap \tilde M))]^G$$ is finite. \end{lemma} \begin{proof} Throughout the proof, whenever $K$ is a subgroup of $G$ we write $K_i$ for $A_i\cap K$. For each natural number $j$ consider the set $\Delta_j$ of elements $g \in G$ such that $|G:C_G(g)| \le j$. Note that the sets $\Delta_j$ are closed (see for instance \cite[Lemma 5]{LP}). Consider the sets $$C_j=\{(y_1,\dots,y_n) \mid y_i\in a_iM_i {\textrm{ and }} w(y_1,\dots,y_n) \in \Delta_j\}.$$ Each set $C_j$ is closed, being the inverse image in $a_1 M_1 \times \cdots \times a_n M_n$ of the closed set $\Delta_j$ under the continuous map $(g_1, \dots , g_n) \mapsto w(g_1, \dots , g_n)$. Moreover the union of the sets $C_j$ is the whole of $a_1 M_1 \times \cdots \times a_n M_n$. By the Baire category theorem (cf. \cite[p.\ 200]{Ke}) at least one of the sets $C_j$ has nonempty interior. Hence, there exist a natural number $m$, elements $z_i\in a_i M_i$ and an open normal subgroup $Z$ of $G$ such that $$\mathcal{X}_w(z_1 Z_1,\dots,z_nZ_n)\subseteq \Delta_{m}.$$ By replacing $Z$ with $Z\cap M$, if necessary, we can assume that $Z\le M$. Choose in $\mathcal{X}_w(z_1Z_1,\dots,z_n Z_n)$ an element $a=w(\tilde a_1,\dots,\tilde a_n)$ such that the number of conjugates of $a$ in $H$ is maximal among the elements of $\mathcal{X}_w(z_1 Z_1,\dots,z_n Z_n)$, that is, $|a^H|\ge |g^H|$ for any $g\in \mathcal{X}_w(z_1Z_1,\dots,z_n Z_n)$.
Since $\Delta(G)$ is dense in $H$, we can choose a right transversal $b_1,\dots, b_r$ of $C_H(a)$ in $H$ consisting of FC-elements. Thus $a^H = \{a^{b_i} \mid i = 1, \dots, r\}$, where $a^{b_i}\ne a^{b_j}$ if $i\ne j$. Let $\tilde M$ be the intersection of $Z$ and all $G$-conjugates of $C_G ( b_1 ,\dots, b_r )$: $$\tilde M =\left(\bigcap_{g\in G} C_G ( b_1 ,\dots, b_r )^g \right) \cap Z$$ and note that $\tilde M$ is open in $G$. Consider the element $w(\tilde a_1v_1,\dots,\tilde a_nv_n)$ where $v_i \in \tilde M_i$ for $i=1,\dots,n$. As $w(\tilde a_1v_1,\dots,\tilde a_nv_n) \tilde M=a \tilde M$ in the quotient group $G/\tilde M$, we have $$w(\tilde a_1v_1,\dots,\tilde a_nv_n)=va,$$ for some $v\in \tilde M\le C_G ( b_1 ,\dots, b_r )$. It follows that $(va)^{b_i} = va^{b_i}$ for each $i =1,\dots, r$. Therefore the elements $va^{b_i}$ form the conjugacy class $(va)^H$ because they are all different and their number is the allowed maximum. So, for an arbitrary element $h\in H$ there exists $b\in\{b_1 ,\dots, b_r\}$ such that $(va)^h= va^b$ and hence $v^h a^h = va^b$. Therefore $[h, v] = v^{-h}v=a^h a^{-b}$ and so $[h, v]^a =a^{-1} a^h a^{-b} a = [a,h][b,a] \in [H,a].$ Thus $[H,v]^a \le [H,a]$ and $$[H, va]=[H,a] [H,v]^a \le [H, a].$$ Therefore $[H,w(\tilde a_1 \tilde M_1 ,\dots,\tilde a_n \tilde M_n)]\le [H,a]$. Lemma \ref{2.3b} states that the abstract group $[\Delta(G),a]^G$ has finite order and thus the same holds for $[H,a]^G$. The result follows. \end{proof} For the reader's convenience, the most technical part of the proof of Proposition \ref{X} is isolated in the following proposition.
\begin{proposition}\label{inductive-step} Assume the hypotheses of Proposition \ref{X}, with $A_1,\dots, A_n$ being normal subgroups of $G$ such that $\mathcal{X}_w(A_1,\dots,A_n) \subseteq \Delta(G)$. Let $I$ be a nonempty subset of $\{1,\dots,n\}$ and assume that there exist a normal subgroup $U$ of $G$ of finite order and an open normal subgroup $M$ of $G$ such that \[[H, w_J (A_i; A_l\cap M)]\le U \quad \textrm{for every}\ J \subsetneq I.\] Then there exist a finite normal subgroup $U_I$ of $G$ containing $U$ and an open normal subgroup $M_I$ of $G$ contained in $M$ such that \[ [H,w_I (A_i; A_l\cap M_I)]\le U_I.\] \end{proposition} \begin{proof} For each $i=1,\dots,n$ consider a right transversal $C_i$ of $A_i\cap M$ in $A_i$, and let $\Omega$ be the set of $n$-tuples $\underline{c}=(c_1, \dots , c_n)$ where $c_r \in C_r$ if $r\in I$ and $c_r=1$ otherwise. Note that the set $\Omega$ is finite, since $C_r$ is finite for every $r$. For any $n$-tuple $\underline{c}=(c_1, \dots , c_n) \in \Omega$, by Lemma \ref{basic-light}, there exist elements $d_i\in c_i(A_i\cap M)$ and an open normal subgroup $M_{\underline c}$ of $G$ such that the order of $$[H,w(d_1 (A_1\cap M_{\underline c}),\dots,d_n (A_n\cap M_{\underline c}))]^G$$ is finite. Let \begin{eqnarray*} M_I&=&M \cap \bigg( \bigcap_{\underline{c}\in \Omega}M_{\underline c}\bigg),\\ U_I&=& U \, \prod_{\underline{c}\in \Omega}[H,w(d_1 (A_1\cap M_{\underline c}),\dots,d_n (A_n\cap M_{\underline c}))]^G. \end{eqnarray*} As $\Omega$ is finite, it follows that $M_I$ is open in $G$ and $U_I$ has finite order. Let $Z/U_I$ be the centralizer of $HU_I/U_I$ in the quotient group $G/U_I$ and let $\bar G=G/Z$. We will use the bar notation to denote images of elements or subgroups in the quotient group $\bar G$. Let us consider an arbitrary generator $w_I(k_i;h_l)$ of $w_I (A_i; A_l\cap M_I)$, where $k_i \in A_i$ and $h_l \in A_l\cap M_I$.
Let $\underline{c}=(c_1, \dots , c_n) \in \Omega$ be the $n$-tuple such that $$k_i\in c_{i}(A_i \cap M)$$ if $i\in I$ and $c_i=1$ otherwise. Let $d_1,\dots,d_n$ be the elements as above, corresponding to the $n$-tuple $\underline{c}$. Then, by definition of $U_I$, $$[H,w(d_1 (A_1\cap M_I),\dots,d_n (A_n\cap M_I))]\le U_I,$$ that is, $$\overline{w(d_1 (A_1\cap M_I),\dots,d_n (A_n\cap M_I))}=1$$ in the quotient group $\bar G=G/Z$. We deduce from Lemma \ref{2.2-bis} that \begin{equation}\label{step} \overline{w_I(d_i (A_i\cap M_I);(A_l\cap M_I))}=1. \end{equation} Moreover, as $c_{i}(A_i\cap M)=d_i(A_i\cap M)$, we have that $k_i=d_iv_i$ for some $v_i\in A_i\cap M$. It also follows from our assumptions that $$\overline{w_J (A_i; A_l\cap M)}=1$$ for every proper subset $J$ of $I$. Thus we can apply Lemma \ref{M2} and obtain that $$w_I (\overline k_i;\overline h_l)=w_I (\overline d_i\overline v_i;\overline h_l)= w_I (\overline d_i;\overline h_l)=1,$$ where in the last equality we have used (\ref{step}). Since $w_I(k_i;h_l)$ was an arbitrary generator of $ w_I (A_i; A_l\cap M_I)$, it follows that $$\overline{w_I (A_i; A_l\cap M_I)}=1,$$ that is, \[ [H,w_I (A_i; A_l\cap M_I)]\le U_I,\] as desired. \end{proof} \begin{proof}[Proof of Proposition \ref{X}.] Recall that $w=w(x_1,\dots,x_n)$ is a multilinear commutator word, $G$ is a profinite group, $H$ is the closure of $\Delta(G)$ and $A_1,\dots, A_n$ are normal subgroups of $G$ with the property that $$ \mathcal{X}_w(A_1,\dots,A_n) \subseteq \Delta(G).$$ We want to prove that $[H , w(A_1,\dots,A_n) ]$ is finite. We will prove that for every $s=0,\dots,n$ there exist a finite normal subgroup $U_s$ of $G$ and an open normal subgroup $M_s$ of $G$ such that whenever $I$ is a subset of $\{1,\dots,n\}$ of size at most $s$ we have \[ [H,w_I (A_i; A_l\cap M_s)]\le U_s.\] Once this is done, the proposition will follow by taking $s=n$. Assume that $s=0$.
We apply Lemma \ref{basic-light} with $M=G$ and $a_i=1$ for every $i=1,\dots,n$. Thus there exist $\tilde a_1,\dots, \tilde a_n \in G$ and an open normal subgroup $M_0$ of $G$, such that the order of $$U_0=[H,w( \tilde a_1 (A_1\cap M_0),\dots , \tilde a_n (A_n\cap M_0))]^G$$ is finite. Let $Z/U_0$ be the centralizer of $HU_0/U_0$ in the quotient group $G/U_0$ and let $\bar G=G/Z$. We have that $$\overline{w( \tilde a_1 (A_1\cap M_0),\dots , \tilde a_n(A_n\cap M_0))}=1,$$ so it follows from Lemma \ref{2.2-bis} that $$\overline{w( A_1\cap M_0 ,\dots , A_n\cap M_0)}=1,$$ that is, $[H,w( A_1\cap M_0 ,\dots ,A_n\cap M_0)]\le U_0$. This proves the proposition in the case where $s=0$. Now assume that $s\ge 1$. Choose $I\subseteq\{1,\dots,n\}$ with $|I|=s$. By induction, the hypotheses of Proposition \ref{inductive-step} are satisfied with $U=U_{s-1}$ and $M=M_{s-1}$, so there exist a finite normal subgroup $U_I$ of $G$ containing $U_{s-1}$ and an open normal subgroup $M_I$ of $G$ contained in $M_{s-1}$ such that \[ [H,w_I (A_i; A_l\cap M_I)]\le U_I.\] Let $$M_s=\bigcap_{|I|=s}M_I, \quad U_s=\prod_{|I|=s}U_I,$$ where the intersection (resp. the product) ranges over all subsets $I$ of $\{1,\dots,n\}$ of size $s$. As there are only finitely many choices for $I$, it follows that $U_s$ has finite order and $M_s$ is open in $G$. Note that $M_s\le M_{s-1}$ and $U_{s-1}\le U_s$. Therefore \[ [H,w_I (A_i; A_l\cap M_s)]\le U_s\] for every $I\subseteq\{1,\dots,n\}$ with $|I|\le s$. This completes the induction and the proof of the proposition. \end{proof} \begin{proof}[Proof of Theorem \ref{genN}.] Let $w=w(x_1,\dots,x_n)$ be a multilinear commutator word, $G$ a profinite group and $T$ a normal subgroup of $G$. For $i=1, \dots, n$, let $X_i$ be the set of $w$-values $w(g_1,\dots,g_n)$ such that $g_i$ belongs to $T$. Obviously $X_i \subseteq T$ and therefore $X_i \subseteq \Delta(G)$ for every $i$.
It follows from Proposition \ref{X} that $[H, \langle X_i \rangle]$ is finite for every $i$. By Lemma \ref{ts}, the $w^*$-residual of $T$ is the subgroup $N$ generated by the set $X= X_1 \cup \dots \cup X_n $. Thus $[H, N]= \prod_{i=1}^{n}[H,\langle X_i \rangle]$ is finite. Finally, note that $N\le H$ and so $N'\le [H,N]$ is also finite. \end{proof} \begin{corollary}\label{infinite} Let $w$ be a multilinear commutator word and let $G$ be a profinite group with restricted centralizers of $w$-values. If $G$ has a $w$-value of infinite order, then $w(G)$ is abelian-by-finite. \end{corollary} \begin{proof} Let $x$ be a $w$-value of $G$ of infinite order. As $C_G(x)$ is open, it contains an open normal subgroup $C$ of $G$. Let $K$ be the $w^*$-residual of $C$ in $G$. Since all $w$-values contained in $C$ have infinite centralizers, we apply Theorem \ref{genN} and conclude that $K'$ is finite. Being finite-by-abelian, $K$ is also abelian-by-finite. It follows from Lemma \ref{concise} that $K$ has finite index in $w(G)$ and so $w(G)$ is abelian-by-finite. \end{proof} \section{Pronilpotent groups with restricted centralizers of $w$-values}\label{sec:pro-p} In the present section we use the techniques created by Zelmanov to deduce a theorem about pronilpotent groups with restricted centralizers of $w$-values (see Theorem \ref{pro-p}). A combination of this result with Corollary \ref{infinite} yields a proof of Theorem \ref{main} for pronilpotent groups. For the reader's convenience we collect some definitions and facts on Lie algebras associated with groups (see \cite{S-Lie} or \cite{Z} for further information). Let $L$ be a Lie algebra over a field. We use the left normed notation; thus if $l_1,\dots,l_n$ are elements of $L$ then $$[l_1,\dots,l_n]=[\dots[[l_1,l_2],l_3],\dots,l_n].$$ An element $y\in L$ is called ad-nilpotent if $ad\, y$ is nilpotent, i.e. there exists a positive integer $n$ such that $[x,{}_ny]=0$ for all $x\in L$. 
If $n$ is the least integer with the above property, then we say that $y$ is ad-nilpotent of index $n$. Let $X$ be any subset of $L$. By a commutator in elements of $X$ we mean any element of $L$ that can be obtained from elements of $X$ by repeatedly taking commutators, with an arbitrary system of brackets; here the elements of $X$ themselves are viewed as commutators of weight 1. Denote by $F$ the free Lie algebra over the same field as $L$ on countably many free generators $x_1,x_2,\dots$. Let $f=f(x_1,\dots,x_n)$ be a nonzero element of $F$. The algebra $L$ is said to satisfy the identity $f\equiv 0$ if $f(a_1,\dots,a_n)=0$ for any $a_1,\dots,a_n\in L$. In this case we say that $L$ is $PI$. We are now in a position to quote a theorem of Zelmanov \cite{Z,ze3} which has numerous important applications to group theory. A detailed proof of this result recently appeared in \cite{ze4}. \begin{theorem} \label{zzz} Let $L$ be a Lie algebra generated by finitely many elements $a_1, \dots,a_m$ such that all commutators in $a_1, \dots,a_m$ are ad-nilpotent. If $L$ is $PI$, then it is nilpotent. \end{theorem} Let $G$ be a group. Recall that the lower central word $[x_1,\ldots,x_k]$ is usually denoted by $\gamma_{k}$. The corresponding verbal subgroup $\gamma_k(G)$ is the familiar $k$th term of the lower central series of the group $G$. Given a prime $p$, a Lie algebra can be associated with the group $G$ as follows. We denote by $$D_i=D_i(G)= \prod_{jp^k \ge i} \left(\gamma_j(G)\right)^{p^k}$$ the $i$th dimension subgroup of $G$ in characteristic $p$ (see for example \cite[Chap. 8]{hb}). These subgroups form a central series of $G$ known as the Zassenhaus-Jennings-Lazard series. Set $L(G)=\bigoplus D_i/D_{i+1}$. Then $L(G)$ can naturally be viewed as a Lie algebra over the field $\mathbb F_p$ with $p$ elements. For an element $x\in D_i\setminus D_{i+1}$ we denote by $\tilde x$ the element $xD_{i+1}\in L(G)$.
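To illustrate these notions in the smallest nontrivial case (the example plays no role in what follows), let $G$ be a cyclic group of order $p^2$ with generator $g$. Since $G$ is abelian, $\gamma_j(G)=1$ for all $j\ge 2$, and the defining formula gives $D_i=G^{p^k}$, where $k$ is the least nonnegative integer with $p^k\ge i$. Hence $$D_1=G,\qquad D_2=\dots=D_p=G^p,\qquad D_i=1 \ \textrm{ for } i>p,$$ so that $L(G)=G/G^p\oplus G^p$ has exactly two nonzero homogeneous components, each of dimension $1$ over $\mathbb F_p$, concentrated in degrees $1$ and $p$. The Lie bracket is trivial, so every element of $L(G)$ is ad-nilpotent of index $1$, and both sides of the formula in Lemma \ref{Laz} below vanish.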
\begin{lemma}[Lazard, \cite{la}]\label{Laz} For any $x\in G$ we have $(ad\,\tilde{x})^p=ad\,(\tilde{x^p})$. \end{lemma} The next proposition follows from the proof of the main theorem in the paper of Wilson and Zelmanov \cite{WZ}. \begin{proposition}\label{prop:WZ} Let $G$ be a group satisfying a coset identity. Then $L(G)$ is $PI$. \end{proposition} Let $L_p(G)$ be the subalgebra of $L(G)$ generated by $D_1/D_2$. Often, important information about the group $G$ can be deduced from nilpotency of the Lie algebra $L_p(G)$. \begin{proposition}\cite[Corollary 2.14]{S-Lie}\label{prop:2.14} Let $G$ be a group generated by elements $a_1, a_2, \dots , a_m$ such that every $\gamma_k$-value in $a_1, a_2, \dots , a_m$ has finite order, for every $k$. Assume that $L_p(G)$ is nilpotent. Then the series $\{D_i\}$ becomes stationary after finitely many steps. \end{proposition} Let $P$ be a Sylow subgroup of a finite group $G$. An immediate corollary of the Focal Subgroup Theorem \cite[Theorem 7.3.4]{Gor} is that $G'\cap P$ is generated by commutators. A weaker version of this fact for multilinear commutator words was proved in \cite[Theorem A]{AFS}. \begin{proposition}\label{focal} Let $G$ be a finite group and $P$ a Sylow subgroup of $G$. If $w$ is a multilinear commutator word, then $w(G) \cap P$ is generated by powers of $w$-values. \end{proposition} \begin{proposition}\label{prop:abstract} Let $p$ be a prime, $w$ a multilinear commutator word and $G$ a profinite group in which all $w$-values have finite $p$-power order. Let $K$ be the abstract subgroup of $G$ generated by all $w$-values. Then $K$ is a locally finite $p$-group. \end{proposition} \begin{proof} It follows from Proposition \ref{focal} that $w(G)$ is a pro-$p$ group. Indeed if $Q$ is a Sylow $q$-subgroup of $w(G)$, then the image of $Q$ in any finite continuous image of $G$ is generated by powers of $w$-values, which are $p$-elements, hence $Q=1$ unless $q=p$. 
By Lemma \ref{lem:delta_k} there exists an integer $k$ such that each $\delta_k$-value is a $w$-value. It is sufficient to prove that the abstract subgroup $R$ generated by all $\delta_k$-values is locally finite. Indeed, the abstract group $G/R$ is a soluble group in which all $w$-values have finite order, hence $w(G/R)$ is locally finite by Lemma \ref{lem:4.2}. Let $X$ be the set of $\delta_k$-values of $G$. Every finitely generated subgroup of $R$ is contained in a subgroup generated by a finite subset of $X$. So we choose finitely many elements $a_1, \dots, a_s$ in $X$ and consider the subgroup $H$ topologically generated by $a_1, \dots, a_s$. It is sufficient to prove that $H$ is finite. Note that $H$ is a pro-$p$ group, since it is a subgroup of $w(G)$. For every positive integer $t$, consider the set $$S_t=\{ (h_1, \dots, h_{2^k}) \mid h_i\in H \ {\textrm{and}}\ \delta_k(h_1, \dots, h_{2^k})^{p^t}=1 \}.$$ These sets are closed and their union is the whole Cartesian product of $2^k$ copies of $H$. By the Baire category theorem at least one of the sets $S_t$ has nonempty interior. Hence, there exist a natural number $m$, some elements $y_i\in H$ and a normal open subgroup $Z$ of $H$ such that $$\delta_k(y_1 Z,\dots,y_{2^k}Z)^{p^m}=1.$$ In particular $H$ satisfies a coset identity. Let $L=L_p(H)$ be the Lie algebra associated with the Zassenhaus-Jennings-Lazard series $\{ D_i \}$ of $H$. Then $L$ is generated by $\tilde{a}_i=a_i D_2$ for $i=1, \dots , s$. Let $b$ be any Lie commutator in $\tilde{a}_1, \dots, \tilde{a}_s$ and let $c$ be the group commutator in $a_1, \dots, a_s$ having the same system of brackets as $b$. Since $X$ is commutator-closed, $c$ is a $\delta_k$-value and so it has finite order. By Lemma \ref{Laz} this implies that $b$ is ad-nilpotent. As $H$ satisfies a coset identity, it follows from Proposition \ref{prop:WZ} that $L$ satisfies some nontrivial polynomial identity. By Theorem \ref{zzz} we conclude that $L$ is nilpotent.
As every $\gamma_k$-value in $a_1, \dots, a_s$ has finite order, Proposition \ref{prop:2.14} shows that the series $\{D_i\} $ has only finitely many nontrivial terms. Since $H$ is a pro-$p$ group, it follows that the intersection of all $D_i$'s is trivial. Taking into account that each $D_i$ has finite index in $H$, we deduce that $H$ is finite. This proves that $R$ is locally finite and the proposition follows. \end{proof} \begin{theorem}\label{pro-p} Let $w$ be a multilinear commutator word and let $G$ be a pronilpotent group with restricted centralizers of $w$-values in which every $w$-value has finite order. Then the derived subgroup of $w(G)$ is finite. \end{theorem} \begin{proof} First assume that $G$ is a pro-$p$ group. Let $K$ be the abstract subgroup of $G$ generated by all $w$-values. By Proposition \ref{prop:abstract} $K$ is a locally finite $p$-group. If a $w$-value of $G$ has finite centralizer, then $K$ is finite by Lemma \ref{114}. Since $K$ is dense in $w(G)$, we conclude that $w(G)$ is finite. Therefore we can assume that every $w$-value in $G$ is an FC-element and so the result follows from Corollary \ref{profinite-FC}. When $G$ is pronilpotent, it is the Cartesian product of its Sylow subgroups. Let $\mathcal P$ be the set of primes $p$ such that $w(P) \neq 1$ where $P$ is the Sylow $p$-subgroup of $G$. If $\mathcal P$ is infinite, then $G$ has a $w$-value of infinite order, against our assumption. Thus $\mathcal P$ is finite. If $P$ is a Sylow $p$-subgroup of $G$, then the derived subgroup of $w(P)$ is finite by what we proved above. Therefore the derived subgroup of $w(G)=\prod_{p \in \mathcal P} w(P)$ is finite, as desired. \end{proof} \section{Local finiteness of $w(G)$ }\label{sec:locfin} The goal of the present section is to show that if the hypotheses of Theorem \ref{main} hold and all $w$-values have finite order, then $w(G)$ is locally finite. 
There is a long-standing conjecture stating that each torsion profinite group has finite exponent (cf. Hewitt and Ross \cite{HR}). The conjecture can be easily proved for soluble groups (cf. \cite[Lemma 4.3.7]{ribes-zal}). In \cite{DMS-2015} this was extended as follows. \begin{proposition}\cite[Theorem 3]{DMS-2015}\label{2015} Let $w$ be a multilinear commutator word and $G$ a soluble-by-finite profinite group in which all $w$-values have finite order. Then $w(G)$ is locally finite and has finite exponent. \end{proposition} We remark that the above result does not follow from Lemma \ref{lem:4.2} and its proof is significantly more complicated. Given a word $w$ and a subgroup $P$ of a profinite group $G$, we denote by $W(P)$ the closed subgroup generated by all elements of $P$ that are conjugate in $G$ to elements of $P_w$: $$W(P)= \langle {P_{w}}^G\cap P\rangle.$$ Let $\mathcal{Y}_w$ be the class of all profinite groups $G$ in which all $w$-values have finite order and the subgroup $W(P)$ is periodic for any Sylow subgroup $P$ of $G$. The following theorem was implicitly established in \cite{KS}. We will now reproduce the proof. \begin{theorem}\label{prop-KS} Let $w$ be a multilinear commutator word and let $G$ be a profinite group in the class $\mathcal{Y}_w$. Then $w(G)$ is locally finite. \end{theorem} \begin{proof} Recall that finite groups of odd order are soluble by the Feit-Thompson theorem \cite{FT}. Combining this with \cite[Theorem 1.5]{KS} (applied with $p = 2$), we deduce that $G$ has a finite series of closed characteristic subgroups \begin{equation}\label{5.1} G = G_0 \ge G_1 \ge \cdots \ge G_s = 1 \end{equation} in which each factor either is prosoluble or is isomorphic to a Cartesian product of nonabelian finite simple groups. There cannot be infinitely many nonisomorphic nonabelian finite simple groups in a factor of the second kind, since this would give a $w$-value of infinite order.
Indeed, by a result of Jones \cite{J}, any infinite family of finite simple groups generates the variety of all groups; therefore, the orders of $w$-values cannot be bounded on such an infinite family. Thus, we can assume in addition that each nonprosoluble factor in (\ref{5.1}) is isomorphic to a Cartesian product of isomorphic nonabelian finite simple groups. We use induction on $s$. If $s = 0$, then $G=1$ and the result follows. Let $s \ge 1$. By induction, $w(G_1)$ is locally finite. Passing to the quotient $G/w(G_1)$, we can assume that $G_1$ is soluble. If $G/G_1$ is isomorphic to a Cartesian product of isomorphic nonabelian finite simple groups, then $G/G_1$ is locally finite and the result follows from \cite[Lemma 5.6]{KS}. If $G/G_1$ is prosoluble, then so is $G$, and then by \cite[Proposition 5.12]{KS} $G$ has a series of finite length with pronilpotent quotients. In this case, $w(G)$ is locally finite by \cite[Lemma 5.7]{KS}, as required. \end{proof} \begin{proposition}\label{locfin} Let $w$ be a multilinear commutator word and let $G$ be a profinite group with restricted centralizers of $w$-values. Assume that every $w$-value has finite order. Then $w(G)$ is locally finite. \end{proposition} \begin{proof} By Lemma \ref{lem:delta_k} there exists an integer $k$ such that each $\delta_k$-value is a $w$-value. Set $u=\delta_{2k}$. Let us show that $G\in \mathcal{Y}_u$, that is, $$U(P)=\langle P_{u}^G\cap P\rangle$$ is periodic for every Sylow subgroup $P$ of $G$. Let $P$ be a Sylow subgroup of $G$. It follows from Theorem \ref{pro-p} that $w(P)'$ is a finite $p$-group, so $w(P)$ is soluble. In view of Lemma \ref{lem:delta_k} we have $P^{(k)}\le w(P)$ and so $P$ is soluble. By Proposition \ref{2015}, $P^{(k)}$ is locally finite and has finite exponent. In particular $P^{(k)}$ is locally nilpotent. If $P$ is finite, then $U(P)$ is also finite, so we can assume that $P$ is infinite.
If some element $x\in P_{\delta_k}$ has finite centralizer, we get a contradiction: on the one hand $x^P$ is infinite, while on the other hand $x^P$ is contained in $P^{(k)}$, which is finite by Lemma \ref{114}. Thus we can assume that the centralizer of each element in $P_{\delta_k}$ is infinite. As $G$ has restricted centralizers of $w$-values and every $\delta_k$-value is also a $w$-value, it follows that each element in $P_{\delta_k}$ has centralizer of finite index in $G$. Consider the sets $$C_j=\{(y_1,\dots,y_{2^k}) \mid y_i\in P \ {\textrm{and}}\ |\delta_k(y_1,\dots,y_{2^k})^G|\le j\}.$$ Note that each set $C_j$ is closed. Moreover their union is the whole Cartesian product of $2^k$ copies of $P$. By the Baire category theorem at least one of the sets $C_j$ has nonempty interior. Hence, there exist a natural number $m$, some elements $a_i\in P$ and an open normal subgroup $T$ of $P$ such that $$\mathcal{X}_{\delta_k}(a_1T,\dots,a_{2^k}T)\subseteq C_{m}.$$ We deduce from Lemma \ref{comb1} that there exists a positive integer $m_1$ such that each element in $T_{\delta_k}$ has at most $m_1$ conjugates. Let $T_0=T\cap P^{(k)}$. As $P^{(k)}$ is topologically generated by $P_{\delta_k}$, we can choose a right transversal $b_1,\dots, b_r$ of $T_0$ in $P^{(k)}$ consisting of finite products of elements in $P_{\delta_k}$. Of course $b_1,\dots, b_r$ are FC-elements, and thus there exists a positive integer $m_2$ such that each $b_i$ has at most $m_2$ conjugates. Let $x\in P_{u}$. We have $$x=\delta_k(c_1,\dots,c_{2^k}),$$ where $c_i\in P_{\delta_k}$ for $i=1,\dots,2^k$. Now each $c_i$ is of the form $c_i=g_ih_i$ where $g_i\in\{b_1,\dots,b_r\}$ and $h_i\in T_0$. It follows from Lemma \ref{comb2} that $x=ah$, where $a$ is a product of at most $t_{2^k}$ conjugates of elements in $\{b_1^{\pm 1},\dots,b_r^{\pm 1}\}$ and $h\in T_{\delta_k}$.
As each $b_i$ has at most $m_2$ conjugates and $h$ has at most $m_1$ conjugates, it follows that $x$ has at most $m_3$ conjugates for some positive integer $m_3$ which does not depend on $x$. So each $x\in P_{u}$ has order dividing $e$, where $e$ is the exponent of $P^{(k)}$, and has at most $m_3$ conjugates. Recall that $U(P)=\langle P_{u}^G\cap P\rangle$. It follows from Lemma \ref{sol1} that $U(P)$ has finite exponent. This proves that $G\in\mathcal{Y}_u$. We deduce from Theorem \ref{prop-KS} that $G^{(2k)}$ is locally finite. Thus we can pass to the quotient group $G/G^{(2k)}$ and assume that $G^{(2k)}=1$. Now the result follows from Proposition \ref{2015}. \end{proof} \section{Proof of Theorem \ref{main}}\label{sec:final} We recall that the Hirsch-Plotkin radical of an (abstract) group is defined as the maximal normal locally nilpotent subgroup. In a profinite group the Hirsch-Plotkin radical need not be closed. However, in the particular case where the profinite group is locally finite, the Hirsch-Plotkin radical is closed. Indeed the closure of an abstract locally nilpotent subgroup is pronilpotent in any profinite group, and so it is locally nilpotent if the group is locally finite. An important result about profinite torsion groups is the following theorem due to J. S. Wilson. \begin{theorem}\cite[Theorem 1]{Will:torsion}\label{thm:wil} Let $G$ be a compact Hausdorff torsion group. Then $G$ has a finite series \[ 1=G_0 \le G_1 \le \dots \le G_s \le G_{s+1}=G \] of closed characteristic subgroups, in which each factor $G_{i+1}/G_{i}$ either is a pro-$p$ group for some prime $p$ or is isomorphic (as a topological group) to a Cartesian product of finite simple groups. \end{theorem} In particular, a profinite locally soluble torsion group has a finite series of characteristic subgroups in which each factor is a pro-$p$ group for some prime $p$.
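Before embarking on the proof, we illustrate the statement of Theorem \ref{main} with a simple example, which is not needed in what follows. Take $w=[x_1,x_2]$ and let $G=A\rtimes\langle t\rangle$, where $A$ is the additive group of $2$-adic integers and $t$ is an element of order $2$ acting on $A$ by inversion. Every $w$-value lies in $A$; writing $A$ additively, $[a,t]=-2a$ for all $a\in A$. If $a\in A$ is nontrivial, then $a\ne -a$ (since $A$ has no $2$-torsion), so no element of the coset $At$ centralizes $a$ and $C_G(a)=A$ is open. Hence $G$ has restricted centralizers of $w$-values, and every nontrivial $w$-value has infinite order. In accordance with Theorem \ref{main} (and with Corollary \ref{infinite}), the verbal subgroup $w(G)=2A$ is abelian.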
\begin{proof}[Proof of Theorem \ref{main}] Recall that $w$ is a multilinear commutator word and $G$ a profinite group with restricted centralizers of $w$-values. We want to prove that $w(G)$ is abelian-by-finite. If $G$ has a $w$-value of infinite order, then by Corollary \ref{infinite} the subgroup $w(G)$ is abelian-by-finite. So we can assume that every $w$-value has finite order. It follows from Proposition \ref{locfin} that $w(G)$ is locally finite. By Theorem \ref{thm:wil}, $w(G)$ has a finite series of characteristic subgroups \[ 1=A_0 \le A_1 \le \dots \le A_s \le A_{s+1}=w(G) \] in which each factor either is a pro-$p$ group for some prime $p$ or is isomorphic to a Cartesian product of finite simple groups. Let $A/B$ be a factor in the series which is isomorphic to a Cartesian product of finite simple groups. Recall that the famous Ore's conjecture, stating that every element of a nonabelian finite simple group is a commutator, was proved in \cite{lost}. It follows that every element of a nonabelian finite simple group is a $w$-value, therefore every element in $A/B$ is a $w$-value. We deduce from Lemma \ref{KK} that $A/B$ is a profinite group with restricted centralizers. By Shalev's result \cite{shalev}, $A/B$ is abelian-by-finite and therefore finite. Since all non-pronilpotent factors in the above series are finite, we derive that $w(G)$ is prosoluble-by-finite. Moreover $w(G)$ has an open characteristic subgroup $K$, which in turn has a finite characteristic series $$1= F_0 \le F_1 \le F_2 \le \dots \le F_r \le F_{r+1} = K $$ where $F_{i+1}/F_i$ is the Hirsch-Plotkin radical of $K/F_i$, for every $i$. Alternatively, the existence of such a subgroup $K$ could be shown using theorems of Hartley \cite{H} and Dade \cite{Dade}. Let $j$ be the maximal index such that all $w$-values contained in $F_j$ are FC-elements. If $j=r+1$, then by Corollary \ref{profinite-FC} we conclude that $w(G)$ is finite-by-abelian, hence abelian-by-finite. 
So assume now that $j \le r$. Then there exists a $w$-value whose centralizer in $G$ is finite. As $w(G)$ is locally finite, Lemma \ref{KK} guarantees that $F_{j+1}/F_j$ has an element with finite centralizer. Thus $F_{j+1}/F_j$ satisfies the hypothesis of Lemma \ref{114}, hence it is finite. Since $F_{j+1}/F_j$ is the Hirsch-Plotkin radical of $K/F_j$, it contains its centralizer in $K/F_j$. Taking into account that $F_{j+1}/F_j$ is finite, we conclude that its centralizer in $K$ has finite index. Therefore $F_{j+1} $ has finite index in $K$. We deduce that $F_j$ has finite index in $w(G)$. Let $T$ be the $w^*$-residual of $F_j$. Since every $w$-value in $F_j$ is an FC-element, we can apply Theorem \ref{genN} and we obtain that $T'$ is finite. Hence, $T$ is abelian-by-finite. Note that $F_j/T$ is contained in $w^*(G/T)$, hence it centralizes $w(G/T)$. By Lemma \ref{KK} the verbal subgroup $w(G/T)$ has an element with finite centralizer, so we deduce that $F_j/T$ is finite. Thus $T$ is open in $w(G)$ and we conclude that $w(G)$ is abelian-by-finite, as desired. \end{proof} In the sequel, we will use the fact that an abelian-by-finite group contains a characteristic abelian subgroup of finite index (see \cite[Ch. 12, Lemma 1.2]{Passman} or \cite[Lemma 21.1.4]{Ka}). \begin{proof}[Proof of Corollary \ref{openT}] Recall that $w$ is a multilinear commutator word and $G$ a profinite group in which centralizers of $w$-values are either finite or open. It follows from Theorem \ref{main} that $w(G)$ is abelian-by-finite. In particular $w(G)$ has an open characteristic abelian subgroup $N$. As $w(G)/N$ is finite, there exists an open normal subgroup $T$ of $G$ containing $N$, such that $T/N$ intersects $w(G)/N$ trivially. Since $w(T) \le T \cap w(G) \le N$, we conclude that $w(T)$ is abelian, as desired. The solubility of $T$ is immediate from Lemma \ref{lem:delta_k}.
\end{proof} \begin{proof}[Proof of Corollary \ref{profinite-finite}] Recall that $w$ is a multilinear commutator word and $G$ a profinite group in which every $w$-value has finite centralizer. Assume that $w(G) \neq 1$. It follows from Theorem \ref{main} that $w(G)$ is abelian-by-finite. In particular, $w(G)$ has an open characteristic abelian subgroup $N$. If $N$ contains a nontrivial $w$-value, then $N$ is finite, by assumption. Therefore we can assume that $N \cap G_w=1$. It follows from the remark following Lemma \ref{ts} that $N$ is contained in $w^*(G)$. Since the marginal subgroup centralizes $w(G)$, we deduce that $N$ is finite. This proves that $w(G)$ is finite. Hence, $C_G(w(G))$ has finite index in $G$. Moreover, $C_G(w(G))$ is contained in the centralizer of a nontrivial $w$-value, so it is finite. Therefore $C_G(w(G))$ is both finite and of finite index, which proves that $G$ is finite. \end{proof} As a final remark, we point out that in \cite{shalev} Shalev actually proved that if $G$ is a profinite group with restricted centralizers then $\Delta (G)$ has finite index in $G$ and finite commutator subgroup. Our proof of Theorem \ref{main} implies that if $w$ is a multilinear commutator word and $G$ a profinite group with restricted centralizers of $w$-values, then the closed subgroup generated by $G_w \cap \Delta(G)$ has finite index in $w(G)$ and finite commutator subgroup. % % % \section*{Acknowledgements} The third author was partially supported by FAPDF and CNPq.
!!  Copyright (C)  Stichting Deltares, 2012-2016.
!!
!!  This program is free software: you can redistribute it and/or modify
!!  it under the terms of the GNU General Public License version 3,
!!  as published by the Free Software Foundation.
!!
!!  This program is distributed in the hope that it will be useful,
!!  but WITHOUT ANY WARRANTY; without even the implied warranty of
!!  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
!!  GNU General Public License for more details.
!!
!!  You should have received a copy of the GNU General Public License
!!  along with this program.  If not, see <http://www.gnu.org/licenses/>.
!!
!!  contact: delft3d.support@deltares.nl
!!  Stichting Deltares
!!  P.O. Box 177
!!  2600 MH Delft, The Netherlands
!!
!!  All indications and logos of, and references to registered trademarks
!!  of Stichting Deltares remain the property of Stichting Deltares. All
!!  rights reserved.

module putget_mod
!
!     Generic module for putget operations on NEFIS files
!
!     module declarations
!
!     data definition module(s)
!
      use precision_part      ! single and double precision
      use timers
!
!     module procedure(s)
!
      use noextspaces_mod     ! explicit interface for subroutine calls
!
      implicit none           ! force explicit typing
!
      interface putget
         module procedure putget_int      ! for reading/writing integer (scalar)
         module procedure putget_int1D    ! for reading/writing integer 1D arrays
         module procedure putget_int2D    ! for reading/writing integer 2D arrays
         module procedure putget_real     ! for reading/writing real (scalar)
         module procedure putget_real1D   ! for reading/writing real 1D arrays
         module procedure putget_real2D   ! for reading/writing real 2D arrays
         module procedure putget_char     ! for reading/writing character (scalar)
         module procedure putget_char1D   ! for reading/writing character arrays
      end interface
!
!     module data
!
      integer(ip), parameter :: start=1 ,stopp=2 ,incr=3
      integer(ip), parameter :: no_groups = 5
!
      integer(ip)                        :: buflen,elmndm
      integer(ip)                        :: inef ,ierror, jnef
      integer(ip)                        :: lelmnr
      integer(ip)                        :: nnef
      integer(ip)                        :: igr
!
      integer(ip)                        :: datlen
      integer(ip)                        :: deflen
      integer(ip),dimension(10)          :: usrord
      integer(ip),dimension(3,no_groups) :: uindex
      integer(ip),dimension(5)           :: elmdim
!
      character(len=   1)                :: access
      character(len=   1)                :: coding
      character(len=   8)                :: elmtap
      character(len=  16)                :: elmqta,elmant
      character(len=  64)                :: elmdas
      character(len= 256)                :: datnam
      character(len= 256)                :: defnam
      character(len=1023)                :: errstr
!
!-external functions
!
      integer(ip) :: clsnef, credat, crenef, defcel, defelm, &
                     defgrp, getelt, inqelm, neferr, putelt, &
                     putels, getels
      external    :: clsnef, credat, crenef, defcel, defelm, &
                     defgrp, getelt, inqelm, neferr, putelt, &
                     putels, getels
!
      save fd_nef
      integer(ip) :: fd_nef = -1

contains
!---------------------------------------------------------------------------
!     putget_int : specific procedure for reading/writing an INTEGER
!---------------------------------------------------------------------------
      subroutine putget_int (filnam   ,grpnam    ,nelems    ,elt_names , &
                             elt_dims ,elt_types ,elt_bytes ,elmnam    , &
                             celidt   ,wrilog    ,error     ,ibuffr    )
!
      integer(ip),dimension( :, :)  :: elt_dims
      integer(ip),dimension( :)     :: elt_bytes
      integer(ip)                   :: celidt,nelems,error
!
      integer(ip)                   :: ibuffr
      integer(ip),dimension(1)      :: buffr        ! array is placed on stack
      character(len=*),dimension(:) :: elt_names,elt_types
      character(len=*)              :: elmnam,filnam,grpnam
!
      logical                       :: wrilog
      integer(4) ithndl             ! handle to time this subroutine
      data ithndl / 0 /
      if ( timon ) call timstrt( "putget_int", ithndl )
!
      buffr(1) = ibuffr
!
!-----initialization
!
      coding = 'n'
      elmndm = 5
      do igr=1,no_groups
         usrord(igr)       = 1
         uindex(start,igr) = celidt
         uindex(stopp,igr) = celidt
         uindex(incr ,igr) = 1
      end do
!
!-----aggregate file names
!
      datnam = trim(filnam) // '.dat'
      call noextspaces(datnam,datlen)
      defnam = trim(filnam) // '.def'
      call noextspaces(defnam,deflen)
!
!-----write or read data from nefis files
!
      if (wrilog) then
         access = 'u'
      else
         access = 'r'
      endif
!
      error = crenef (fd_nef, datnam(1:datlen), defnam(1:deflen), &
                      coding, access)
      if (error /= 0 .and. .not.wrilog) then
         error = -211
         goto 9999
      endif
      if ( error /= 0 ) goto 9999
      if (wrilog) then
         error = putelt(fd_nef,grpnam,elmnam, &
                        uindex,1     ,buffr )
      else
         jnef=0
  123    continue
         jnef=jnef+1
         if (elmnam == elt_names(jnef)) goto 124
         goto 123
  124    continue
         buflen = elt_bytes(jnef)              ! size single precision integer
         do inef= 1, elt_dims(1,jnef)
            buflen = buflen*elt_dims(inef+1,jnef)
         enddo
         error = getelt(fd_nef,grpnam,elmnam,uindex,usrord,buflen,buffr)
         if (error /= 0) goto 9999
      endif
!
!-----error:
!     writing: most likely error non existing group, so define it
!     reading: error, no error expected
!
      if ( error /= 0 .and. wrilog ) then
!        create elements
         do 110 lelmnr=1,nelems
            error = defelm(fd_nef           ,elt_names( lelmnr), &
                           elt_types(lelmnr),elt_bytes( lelmnr), &
                           ' '              ,' '               , &
                           ' '              ,elt_dims(1,lelmnr), &
                           elt_dims(2,lelmnr) )
!           most likely error: element already exists
            error = 0
  110    continue
!        create cells
         error = defcel(fd_nef,grpnam,nelems,elt_names)
         if ( error /= 0 ) goto 9999
!        create group on definition file
         error = defgrp(fd_nef,grpnam,grpnam,1,0,1)
         if ( error /= 0 ) goto 9999
!        create group on data file
         error = credat(fd_nef,grpnam,grpnam)
         if ( error /= 0 ) goto 9999
!        try again to write data
         error = putelt(fd_nef,grpnam,elmnam, &
                        uindex,1     ,buffr )
         if ( error /= 0 ) goto 9999
      endif
!
!     no error when reading elements
!
      if (error == 0 .and. .not.wrilog) then
         error = inqelm(fd_nef,elmnam,elmtap,buflen, &
                        elmqta,elmant,elmdas,elmndm,elmdim)
         if (error /= 0) goto 9999
         lelmnr = 0
         do 210 nnef = 1,nelems
            if (elmnam == elt_names(nnef)) then
               lelmnr = nnef
               goto 220
            endif
  210    continue
  220    continue
         if (lelmnr == 0) goto 9999            ! element not found
!
         do 230 inef = 1,elmndm
!
!----------compare local and global dimensions, not equal
!          => new error number and exit
!
            if (elmdim(inef) /= elt_dims(1+inef,lelmnr)) then
               error = -15025
               goto 9999
            endif
  230    continue
      endif
      goto 10000
!
 9999 continue
      if (error /= 0) then
         ierror = neferr(1, errstr)
         write(*,*) trim(errstr)
      endif
10000 continue
      ierror = clsnef( fd_nef )
!
      if ( timon ) call timstop ( ithndl )
      return
      end subroutine putget_int
!-----------------------------------------------------------------------------
!     putget_int1D: specific procedure for reading/writing 1D integer arrays
!-----------------------------------------------------------------------------
      subroutine putget_int1D(filnam   ,grpnam    ,nelems    ,elt_names , &
                              elt_dims ,elt_types ,elt_bytes ,elmnam    , &
                              celidt   ,wrilog    ,error     ,buffr     )
      implicit none
!
      integer(ip),dimension(:,:)    :: elt_dims
      integer(ip),dimension( :)     :: elt_bytes
      integer(ip)                   :: celidt,nelems,error
!
      integer(ip),dimension(:)      :: buffr
      character(len=*),dimension(:) :: elt_names,elt_types
      character(len=*)              :: elmnam,filnam,grpnam
!
      logical                       :: wrilog
!
      save fd_nef
      integer :: fd_nef = -1
      integer(4) ithndl             ! handle to time this subroutine
      data ithndl / 0 /
      if ( timon ) call timstrt( "putget_int1D", ithndl )
!
!-----initialization
!
      coding = 'n'
      elmndm = 5
      do igr=1,no_groups
         usrord(igr)       = 1
         uindex(start,igr) = celidt
         uindex(stopp,igr) = celidt
         uindex(incr ,igr) = 1
      end do
!
!-----aggregate file names
!
      datnam = trim(filnam) // '.dat'
      call noextspaces(datnam,datlen)
      defnam = trim(filnam) // '.def'
      call noextspaces(defnam,deflen)
!
!-----write or read data from nefis files
!
      if (wrilog) then
         access = 'u'
      else
         access = 'r'
      endif
!
      error = crenef (fd_nef, datnam(1:datlen), defnam(1:deflen), &
                      coding, access)
      if (error /= 0 .and. .not.wrilog) then
         error = -211
         goto 9999
      endif
      if ( error /= 0 ) goto 9999
      if (wrilog) then
         error = putelt(fd_nef,grpnam,elmnam, &
                        uindex,1     ,buffr )
      else
         jnef=0
  123    continue
         jnef=jnef+1
         if (elmnam == elt_names(jnef)) goto 124
         goto 123
  124    continue
         buflen = elt_bytes(jnef)              ! size single precision integer
         do inef= 1, elt_dims(1,jnef)
            buflen = buflen*elt_dims(inef+1,jnef)
         enddo
         error = getelt(fd_nef,grpnam,elmnam,uindex,usrord,buflen,buffr)
         if (error /= 0) goto 9999
      endif
!
!-----error:
!     writing: most likely error non existing group, so define it
!     reading: error, no error expected
!
      if ( error /= 0 .and. wrilog ) then
!        create elements
         do 110 lelmnr=1,nelems
            error = defelm(fd_nef           ,elt_names( lelmnr), &
                           elt_types(lelmnr),elt_bytes( lelmnr), &
                           ' '              ,' '               , &
                           ' '              ,elt_dims(1,lelmnr), &
                           elt_dims(2,lelmnr) )
!           most likely error: element already exists
            error = 0
  110    continue
!        create cells
         error = defcel(fd_nef,grpnam,nelems,elt_names)
         if ( error /= 0 ) goto 9999
!        create group on definition file
         error = defgrp(fd_nef,grpnam,grpnam,1,0,1)
         if ( error /= 0 ) goto 9999
!        create group on data file
         error = credat(fd_nef,grpnam,grpnam)
         if ( error /= 0 ) goto 9999
!        try again to write data
         error = putelt(fd_nef,grpnam,elmnam, &
                        uindex,1     ,buffr )
         if ( error /= 0 ) goto 9999
      endif
!
!     no error when reading elements
!
      if (error == 0 .and. .not.wrilog) then
         error = inqelm(fd_nef,elmnam,elmtap,buflen, &
                        elmqta,elmant,elmdas,elmndm,elmdim)
         if (error /= 0) goto 9999
         lelmnr = 0
         do 210 nnef = 1,nelems
            if (elmnam == elt_names(nnef)) then
               lelmnr = nnef
               goto 220
            endif
  210    continue
  220    continue
         if (lelmnr == 0) goto 9999            ! element not found
!
         do 230 inef = 1,elmndm
!
!----------compare local and global dimensions, not equal
!          => new error number and exit
!
            if (elmdim(inef) /= elt_dims(1+inef,lelmnr)) then
               error = -15025
               goto 9999
            endif
  230    continue
      endif
      goto 10000
!
 9999 continue
      if (error /= 0) then
         ierror = neferr(1, errstr)
         write(*,*) trim(errstr)
      endif
10000 continue
      ierror = clsnef( fd_nef )
!
      if ( timon ) call timstop ( ithndl )
      return
      end subroutine putget_int1D
!-----------------------------------------------------------------------------
!
putget_int2D: specific procedure for reading/ writing 2D integer arrays !----------------------------------------------------------------------------- subroutine putget_int2D(filnam ,grpnam ,nelems ,elt_names , & elt_dims ,elt_types ,elt_bytes ,elmnam , & celidt ,wrilog ,error ,ibuffr ) implicit none ! integer(ip),dimension(:,:) :: elt_dims integer(ip),dimension(:) :: elt_bytes integer(ip) :: celidt,nelems,error ! integer(ip),dimension(:,:) :: ibuffr integer(ip),dimension(:), allocatable :: buffr ! to prevent that the array is placed on the stack character(len=*),dimension(:) :: elt_names,elt_types character(len=*) :: elmnam,filnam,grpnam ! logical :: wrilog ! save fd_nef integer :: fd_nef = -1 integer :: k integer(4) ithndl ! handle to time this subroutine data ithndl / 0 / if ( timon ) call timstrt( "putget_int2D", ithndl ) ! ! First transform 2D into 1D work array ! allocate(buffr(size(ibuffr))) ! ! buffr = reshape(ibuffr,(/size(ibuffr)/)) ! done on the stack k = 0 do jnef = 1, size(ibuffr,2) do inef = 1, size(ibuffr,1) k = k+1 buffr(k) = ibuffr(inef,jnef) enddo enddo ! !-----initialization ! coding = 'n' elmndm = 5 do igr=1,no_groups usrord(igr) = 1 uindex(start,igr) = celidt uindex(stopp,igr) = celidt uindex(incr,igr) = 1 end do ! !-----aggregate file names ! datnam = trim(filnam) // '.dat' call noextspaces(datnam,datlen) defnam = trim(filnam) // '.def' call noextspaces(defnam,deflen) ! !-----write or read data from nefis files ! if (wrilog) then access = 'u' else access = 'r' endif ! error = crenef (fd_nef, datnam(1:datlen), defnam(1:deflen), & coding, access) if (error /= 0 .and. .not.wrilog) then error = -211 goto 9999 endif if ( error /= 0 ) goto 9999 if (wrilog) then error = putelt(fd_nef,grpnam,elmnam, & uindex,1 ,buffr ) else jnef=0 123 continue jnef=jnef+1 if (elmnam == elt_names(jnef)) goto 124 goto 123 124 continue buflen = elt_bytes(jnef) ! 
size single precision integer do inef= 1, elt_dims(1,jnef) buflen = buflen*elt_dims(inef+1,jnef) enddo error = getelt(fd_nef,grpnam,elmnam,uindex,usrord,buflen,buffr) if (error /= 0) goto 9999 endif ! !-----error: ! writing: most likely error non existing group, so define it ! reading: error, no error expected ! if ( error /= 0 .and. wrilog ) then ! create elements do 110 lelmnr=1,nelems error = defelm(fd_nef ,elt_names( lelmnr), & elt_types(lelmnr),elt_bytes( lelmnr), & ' ' ,' ', & ' ' ,elt_dims(1,lelmnr), & elt_dims(2,lelmnr) ) ! most likely error, element already exist error = 0 110 continue ! create cells error = defcel(fd_nef,grpnam,nelems,elt_names) if ( error /= 0 ) goto 9999 ! create group on definition file error = defgrp(fd_nef,grpnam,grpnam,1,0,1) if ( error /= 0 ) goto 9999 ! create group on data file error = credat(fd_nef,grpnam,grpnam) if ( error /= 0 ) goto 9999 ! try again to write data error = putelt(fd_nef,grpnam,elmnam, & uindex,1 ,buffr ) if ( error /= 0 ) goto 9999 endif ! ! no error when reading elements ! if (error == 0 .and. .not.wrilog) then error = inqelm(fd_nef,elmnam,elmtap,buflen, & elmqta,elmant,elmdas,elmndm,elmdim) if (error /= 0) goto 9999 lelmnr = 0 do 210 nnef = 1,nelems if (elmnam == elt_names(nnef)) then lelmnr = nnef goto 220 endif 210 continue 220 continue if (lelmnr /= 0) goto 9999 ! do 230 inef = 1,elmndm ! !----------compare local and global dimensions, not equal ! => new error number and exit ! if (elmdim(inef) /= elt_dims(1+inef,lelmnr)) then error = -15025 goto 9999 endif 230 continue endif goto 10000 ! 9999 continue if (error /= 0) then ierror = neferr(1, errstr) write(*,*) trim(errstr) endif 10000 continue ierror = clsnef( fd_nef ) ! if ( timon ) call timstop ( ithndl ) return end subroutine putget_int2D !--------------------------------------------------------------------------- ! 
putget_real : specific procedure for reading/ writing a REAL !--------------------------------------------------------------------------- subroutine putget_real & (filnam ,grpnam ,nelems ,elt_names , & elt_dims ,elt_types ,elt_bytes ,elmnam , & celidt ,wrilog ,error ,rbuffr ) implicit none ! ! integer(ip),dimension(:,:) :: elt_dims integer(ip),dimension(:) :: elt_bytes integer(ip) :: celidt,nelems,error ! real(sp) :: rbuffr real(sp),dimension(1) :: buffr ! array is placed on stack character(len=*),dimension(:) :: elt_names , elt_types character(len=*) :: elmnam,filnam,grpnam ! logical :: wrilog ! save fd_nef integer :: fd_nef = -1 integer(4) ithndl ! handle to time this subroutine data ithndl / 0 / if ( timon ) call timstrt( "putget_real", ithndl ) ! buffr(1) = rbuffr ! !-----initialization ! coding = 'n' elmndm = 5 do igr=1,no_groups usrord(igr) = 1 uindex(start,igr) = celidt uindex(stopp,igr) = celidt uindex(incr,igr) = 1 end do ! !-----aggregate file names ! datnam = trim(filnam) // '.dat' call noextspaces(datnam,datlen) defnam = trim(filnam) // '.def' call noextspaces(defnam,deflen) ! !-----write or read data from nefis files ! if (wrilog) then access = 'u' else access = 'r' endif ! error = crenef (fd_nef, datnam(1:datlen), defnam(1:deflen), & coding, access) if (error /= 0 .and. .not.wrilog) then error = -211 goto 9999 endif if ( error /= 0 ) goto 9999 if (wrilog) then error = putelt(fd_nef,grpnam,elmnam, & uindex,1 ,buffr ) else jnef=0 123 continue jnef=jnef+1 if (elmnam == elt_names(jnef)) goto 124 goto 123 124 continue buflen = elt_bytes(jnef) ! size single precision integer do inef= 1, elt_dims(1,jnef) buflen = buflen*elt_dims(inef+1,jnef) enddo error = getelt(fd_nef,grpnam,elmnam,uindex,usrord,buflen,buffr) if (error /= 0) goto 9999 endif ! !-----error: ! writing: most likely error non existing group, so define it ! reading: error, no error expected ! if ( error /= 0 .and. wrilog ) then ! 
create elements do 110 lelmnr=1,nelems error = defelm(fd_nef ,elt_names( lelmnr), & elt_types(lelmnr),elt_bytes( lelmnr), & ' ' ,' ', & ' ' ,elt_dims(1,lelmnr), & elt_dims(2,lelmnr) ) ! most likely error, element already exist error = 0 110 continue ! create cells error = defcel(fd_nef,grpnam,nelems,elt_names) if ( error /= 0 ) goto 9999 ! create group on definition file error = defgrp(fd_nef,grpnam,grpnam,1,0,1) if ( error /= 0 ) goto 9999 ! create group on data file error = credat(fd_nef,grpnam,grpnam) if ( error /= 0 ) goto 9999 ! try again to write data error = putelt(fd_nef,grpnam,elmnam, & uindex,1 ,buffr ) if ( error /= 0 ) goto 9999 endif ! ! no error when reading elements ! if (error == 0 .and. .not.wrilog) then error = inqelm(fd_nef,elmnam,elmtap,buflen, & elmqta,elmant,elmdas,elmndm,elmdim) if (error /= 0) goto 9999 lelmnr = 0 do 210 nnef = 1,nelems if (elmnam == elt_names(nnef)) then lelmnr = nnef goto 220 endif 210 continue 220 continue if (lelmnr /= 0) goto 9999 ! do 230 inef = 1,elmndm ! !----------compare local and global dimensions, not equal ! => new error number and exit ! if (elmdim(inef) /= elt_dims(1+inef,lelmnr)) then error = -15025 goto 9999 endif 230 continue endif goto 10000 ! 9999 continue if (error /= 0) then ierror = neferr(1, errstr) write(*,*) trim(errstr) endif 10000 continue ierror = clsnef( fd_nef ) ! if ( timon ) call timstop ( ithndl ) return end subroutine putget_real !--------------------------------------------------------------------------- ! putget_real1D: specific procedure for reading/ writing 1D real arrays !--------------------------------------------------------------------------- subroutine putget_real1D & (filnam ,grpnam ,nelems ,elt_names , & elt_dims ,elt_types ,elt_bytes ,elmnam , & celidt ,wrilog ,error ,buffr ) implicit none ! ! integer(ip),dimension(:,:) :: elt_dims integer(ip),dimension(:) :: elt_bytes integer(ip) :: celidt,nelems,error ! 
real(sp),dimension(:) :: buffr character(len=*),dimension(:) :: elt_names , elt_types character(len=*) :: elmnam,filnam,grpnam ! logical :: wrilog ! save fd_nef integer :: fd_nef = -1 integer(4) ithndl ! handle to time this subroutine data ithndl / 0 / if ( timon ) call timstrt( "putget_real1D", ithndl ) ! !-----initialization ! coding = 'n' elmndm = 5 do igr=1,no_groups usrord(igr) = 1 uindex(start,igr) = celidt uindex(stopp,igr) = celidt uindex(incr,igr) = 1 end do ! !-----aggregate file names ! datnam = trim(filnam) // '.dat' call noextspaces(datnam,datlen) defnam = trim(filnam) // '.def' call noextspaces(defnam,deflen) ! !-----write or read data from nefis files ! if (wrilog) then access = 'u' else access = 'r' endif ! error = crenef (fd_nef, datnam(1:datlen), defnam(1:deflen), & coding, access) if (error /= 0 .and. .not.wrilog) then error = -211 goto 9999 endif if ( error /= 0 ) goto 9999 if (wrilog) then error = putelt(fd_nef,grpnam,elmnam, & uindex,1 ,buffr ) else jnef=0 123 continue jnef=jnef+1 if (elmnam == elt_names(jnef)) goto 124 goto 123 124 continue buflen = elt_bytes(jnef) ! size single precision integer do inef= 1, elt_dims(1,jnef) buflen = buflen*elt_dims(inef+1,jnef) enddo error = getelt(fd_nef,grpnam,elmnam,uindex,usrord,buflen,buffr) if (error /= 0) goto 9999 endif ! !-----error: ! writing: most likely error non existing group, so define it ! reading: error, no error expected ! if ( error /= 0 .and. wrilog ) then ! create elements do 110 lelmnr=1,nelems error = defelm(fd_nef ,elt_names( lelmnr), & elt_types(lelmnr),elt_bytes( lelmnr), & ' ' ,' ', & ' ' ,elt_dims(1,lelmnr), & elt_dims(2,lelmnr) ) ! most likely error, element already exist error = 0 110 continue ! create cells error = defcel(fd_nef,grpnam,nelems,elt_names) if ( error /= 0 ) goto 9999 ! create group on definition file error = defgrp(fd_nef,grpnam,grpnam,1,0,1) if ( error /= 0 ) goto 9999 ! create group on data file error = credat(fd_nef,grpnam,grpnam) if ( error /= 0 ) goto 9999 ! 
try again to write data error = putelt(fd_nef,grpnam,elmnam, & uindex,1 ,buffr ) if ( error /= 0 ) goto 9999 endif ! ! no error when reading elements ! if (error == 0 .and. .not.wrilog) then error = inqelm(fd_nef,elmnam,elmtap,buflen, & elmqta,elmant,elmdas,elmndm,elmdim) if (error /= 0) goto 9999 lelmnr = 0 do 210 nnef = 1,nelems if (elmnam == elt_names(nnef)) then lelmnr = nnef goto 220 endif 210 continue 220 continue if (lelmnr /= 0) goto 9999 ! do 230 inef = 1,elmndm ! !----------compare local and global dimensions, not equal ! => new error number and exit ! if (elmdim(inef) /= elt_dims(1+inef,lelmnr)) then error = -15025 goto 9999 endif 230 continue endif goto 10000 ! 9999 continue if (error /= 0) then ierror = neferr(1, errstr) write(*,*) trim(errstr) endif 10000 continue ierror = clsnef( fd_nef ) ! if ( timon ) call timstop ( ithndl ) return end subroutine putget_real1D !--------------------------------------------------------------------------- ! putget_real1D: specific procedure for reading/ writing 2D real arrays !--------------------------------------------------------------------------- subroutine putget_real2D & (filnam ,grpnam ,nelems ,elt_names , & elt_dims ,elt_types ,elt_bytes ,elmnam , & celidt ,wrilog ,error ,rbuffr ) implicit none ! ! integer(ip),dimension(:,:) :: elt_dims integer(ip),dimension(:) :: elt_bytes integer(ip) :: celidt,nelems,error ! real(sp),dimension(:,:) :: rbuffr real(sp),dimension(:), allocatable :: buffr ! to prevent that the array is placed on the stack character(len=*),dimension(:) :: elt_names , elt_types character(len=*) :: elmnam,filnam,grpnam ! logical :: wrilog ! save fd_nef integer :: fd_nef = -1 integer :: k integer(4) ithndl ! handle to time this subroutine data ithndl / 0 / if ( timon ) call timstrt( "putget_real2D", ithndl ) ! ! First transform 2D into 1D work array ! allocate(buffr(size(rbuffr))) ! buffr = reshape(rbuffr,(/size(rbuffr)/)) ! 
done on the stack k=0 do jnef = 1, size(rbuffr,2) do inef = 1, size(rbuffr,1) k = k+1 buffr(k) = rbuffr(inef,jnef) enddo enddo ! !-----initialization ! coding = 'n' elmndm = 5 do igr=1,no_groups usrord(igr) = 1 uindex(start,igr) = celidt uindex(stopp,igr) = celidt uindex(incr,igr) = 1 end do ! !-----aggregate file names ! datnam = trim(filnam) // '.dat' call noextspaces(datnam,datlen) defnam = trim(filnam) // '.def' call noextspaces(defnam,deflen) ! !-----write or read data from nefis files ! if (wrilog) then access = 'u' else access = 'r' endif ! error = crenef (fd_nef, datnam(1:datlen), defnam(1:deflen), & coding, access) if (error /= 0 .and. .not.wrilog) then error = -211 goto 9999 endif if ( error /= 0 ) goto 9999 if (wrilog) then error = putelt(fd_nef,grpnam,elmnam, & uindex,1 ,buffr ) else jnef=0 123 continue jnef=jnef+1 if (elmnam == elt_names(jnef)) goto 124 goto 123 124 continue buflen = elt_bytes(jnef) ! size single precision integer do inef= 1, elt_dims(1,jnef) buflen = buflen*elt_dims(inef+1,jnef) enddo error = getelt(fd_nef,grpnam,elmnam,uindex,usrord,buflen,buffr) if (error /= 0) goto 9999 endif ! !-----error: ! writing: most likely error non existing group, so define it ! reading: error, no error expected ! if ( error /= 0 .and. wrilog ) then ! create elements do 110 lelmnr=1,nelems error = defelm(fd_nef ,elt_names( lelmnr), & elt_types(lelmnr),elt_bytes( lelmnr), & ' ' ,' ', & ' ' ,elt_dims(1,lelmnr), & elt_dims(2,lelmnr) ) ! most likely error, element already exist error = 0 110 continue ! create cells error = defcel(fd_nef,grpnam,nelems,elt_names) if ( error /= 0 ) goto 9999 ! create group on definition file error = defgrp(fd_nef,grpnam,grpnam,1,0,1) if ( error /= 0 ) goto 9999 ! create group on data file error = credat(fd_nef,grpnam,grpnam) if ( error /= 0 ) goto 9999 ! try again to write data error = putelt(fd_nef,grpnam,elmnam, & uindex,1 ,buffr ) if ( error /= 0 ) goto 9999 endif ! ! no error when reading elements ! if (error == 0 .and. 
.not.wrilog) then error = inqelm(fd_nef,elmnam,elmtap,buflen, & elmqta,elmant,elmdas,elmndm,elmdim) if (error /= 0) goto 9999 lelmnr = 0 do 210 nnef = 1,nelems if (elmnam == elt_names(nnef)) then lelmnr = nnef goto 220 endif 210 continue 220 continue if (lelmnr /= 0) goto 9999 ! do 230 inef = 1,elmndm ! !----------compare local and global dimensions, not equal ! => new error number and exit ! if (elmdim(inef) /= elt_dims(1+inef,lelmnr)) then error = -15025 goto 9999 endif 230 continue endif goto 10000 ! 9999 continue if (error /= 0) then ierror = neferr(1, errstr) write(*,*) trim(errstr) endif 10000 continue ierror = clsnef( fd_nef ) ! if ( timon ) call timstop ( ithndl ) return end subroutine putget_real2D !--------------------------------------------------------------------------- ! putget_char: specific procedure for reading/writing a CHARACTER !--------------------------------------------------------------------------- subroutine putget_char & (filnam ,grpnam ,nelems ,elt_names , & elt_dims ,elt_types ,elt_bytes ,elmnam , & celidt ,wrilog ,error ,cbuffr ) implicit none ! integer(ip),dimension(:,:) :: elt_dims integer(ip),dimension(:) :: elt_bytes integer(ip) :: celidt,nelems,error ! character(len=*) :: cbuffr character(len=len(cbuffr)),dimension(1) :: buffr ! array is placed on stack character(len=*),dimension(:) :: elt_names,elt_types character(len=*) :: elmnam,filnam,grpnam ! logical :: wrilog ! save fd_nef integer :: fd_nef = -1 integer(4) ithndl ! handle to time this subroutine data ithndl / 0 / if ( timon ) call timstrt( "putget_char", ithndl ) ! buffr(1) = cbuffr ! !-----initialization ! coding = 'n' elmndm = 5 do igr=1,no_groups usrord(igr) = 1 uindex(start,igr) = celidt uindex(stopp,igr) = celidt uindex(incr,igr) = 1 end do ! !-----aggregate file names ! datnam = trim(filnam) // '.dat' call noextspaces(datnam,datlen) defnam = trim(filnam) // '.def' call noextspaces(defnam,deflen) ! !-----write or read data from nefis files ! 
if (wrilog) then access = 'u' else access = 'r' endif ! error = crenef (fd_nef, datnam(1:datlen), defnam(1:deflen), & coding, access) if (error /= 0 .and. .not.wrilog) then error = -211 goto 9999 endif if ( error /= 0 ) goto 9999 if (wrilog) then error = putels(fd_nef,grpnam,elmnam, & uindex,1 ,buffr ) else jnef=0 123 continue jnef=jnef+1 if (elmnam == elt_names(jnef)) goto 124 goto 123 124 continue buflen = elt_bytes(jnef) ! size single precision integer do inef= 1, elt_dims(1,jnef) buflen = buflen*elt_dims(inef+1,jnef) enddo error = getels(fd_nef,grpnam,elmnam, & uindex,usrord ,buflen,buffr ) if (error /= 0) goto 9999 endif ! !-----error: ! writing: most likely error non existing group, so define it ! reading: error, no error expected ! if ( error /= 0 .and. wrilog ) then ! create elements do 110 lelmnr=1,nelems error = defelm(fd_nef ,elt_names( lelmnr), & elt_types(lelmnr),elt_bytes( lelmnr), & ' ' ,' ', & ' ' ,elt_dims(1,lelmnr), & elt_dims(2,lelmnr) ) ! most likely error, element already exist error = 0 110 continue ! create cells error = defcel(fd_nef,grpnam,nelems,elt_names) if ( error /= 0 ) goto 9999 ! create group on definition file error = defgrp(fd_nef,grpnam,grpnam,1,0,1) if ( error /= 0 ) goto 9999 ! create group on data file error = credat(fd_nef,grpnam,grpnam) if ( error /= 0 ) goto 9999 ! try again to write data error = putels(fd_nef,grpnam,elmnam, & uindex,1 ,buffr ) if ( error /= 0 ) goto 9999 endif ! ! no error when reading elements ! if (error == 0 .and. .not.wrilog) then error = inqelm(fd_nef,elmnam,elmtap,buflen, & elmqta,elmant,elmdas,elmndm,elmdim) if (error /= 0) goto 9999 lelmnr = 0 do 210 nnef = 1,nelems if (elmnam == elt_names(nnef)) then lelmnr = nnef goto 220 endif 210 continue 220 continue if (lelmnr /= 0) goto 9999 ! do 230 inef = 1,elmndm ! !----------compare local and global dimensions, not equal ! => new error number and exit ! 
if (elmdim(inef) /= elt_dims(1+inef,lelmnr)) then error = -15025 goto 9999 endif 230 continue endif goto 10000 ! 9999 continue if (error /= 0) ierror = neferr(1, errstr) 10000 continue ierror = clsnef( fd_nef ) ! if ( timon ) call timstop ( ithndl ) return end subroutine putget_char !--------------------------------------------------------------------------- ! putget_char1D: specific procedure for reading/writing a CHARACTER array !--------------------------------------------------------------------------- subroutine putget_char1D & (filnam ,grpnam ,nelems ,elt_names , & elt_dims ,elt_types ,elt_bytes ,elmnam , & celidt ,wrilog ,error ,buffr ) implicit none ! integer(ip),dimension(:,:) :: elt_dims integer(ip),dimension(:) :: elt_bytes integer(ip) :: celidt,nelems,error ! character(len=*),dimension(:) :: buffr character(len=*),dimension(:) :: elt_names,elt_types character(len=*) :: elmnam,filnam,grpnam ! logical :: wrilog ! save fd_nef integer :: fd_nef = -1 integer(4) ithndl ! handle to time this subroutine data ithndl / 0 / if ( timon ) call timstrt( "putget_char1D", ithndl ) ! !-----initialization ! coding = 'n' elmndm = 5 do igr=1,no_groups usrord(igr) = 1 uindex(start,igr) = celidt uindex(stopp,igr) = celidt uindex(incr,igr) = 1 end do ! !-----aggregate file names ! datnam = trim(filnam) // '.dat' call noextspaces(datnam,datlen) defnam = trim(filnam) // '.def' call noextspaces(defnam,deflen) ! !-----write or read data from nefis files ! if (wrilog) then access = 'u' else access = 'r' endif ! error = crenef (fd_nef, datnam(1:datlen), defnam(1:deflen), & coding, access) if (error /= 0 .and. .not.wrilog) then error = -211 goto 9999 endif if ( error /= 0 ) goto 9999 if (wrilog) then error = putels(fd_nef,grpnam,elmnam, & uindex,1 ,buffr ) else jnef=0 123 continue jnef=jnef+1 if (elmnam == elt_names(jnef)) goto 124 goto 123 124 continue buflen = elt_bytes(jnef) ! 
size single precision integer do inef= 1, elt_dims(1,jnef) buflen = buflen*elt_dims(inef+1,jnef) enddo error = getels(fd_nef,grpnam,elmnam, & uindex,usrord ,buflen,buffr ) if (error /= 0) goto 9999 endif ! !-----error: ! writing: most likely error non existing group, so define it ! reading: error, no error expected ! if ( error /= 0 .and. wrilog ) then ! create elements do 110 lelmnr=1,nelems error = defelm(fd_nef ,elt_names( lelmnr), & elt_types(lelmnr),elt_bytes( lelmnr), & ' ' ,' ', & ' ' ,elt_dims(1,lelmnr), & elt_dims(2,lelmnr) ) ! most likely error, element already exist error = 0 110 continue ! create cells error = defcel(fd_nef,grpnam,nelems,elt_names) if ( error /= 0 ) goto 9999 ! create group on definition file error = defgrp(fd_nef,grpnam,grpnam,1,0,1) if ( error /= 0 ) goto 9999 ! create group on data file error = credat(fd_nef,grpnam,grpnam) if ( error /= 0 ) goto 9999 ! try again to write data error = putels(fd_nef,grpnam,elmnam, & uindex,1 ,buffr ) if ( error /= 0 ) goto 9999 endif ! ! no error when reading elements ! if (error == 0 .and. .not.wrilog) then error = inqelm(fd_nef,elmnam,elmtap,buflen, & elmqta,elmant,elmdas,elmndm,elmdim) if (error /= 0) goto 9999 lelmnr = 0 do 210 nnef = 1,nelems if (elmnam == elt_names(nnef)) then lelmnr = nnef goto 220 endif 210 continue 220 continue if (lelmnr /= 0) goto 9999 ! do 230 inef = 1,elmndm ! !----------compare local and global dimensions, not equal ! => new error number and exit ! if (elmdim(inef) /= elt_dims(1+inef,lelmnr)) then error = -15025 goto 9999 endif 230 continue endif goto 10000 ! 9999 continue if (error /= 0) ierror = neferr(1, errstr) 10000 continue ierror = clsnef( fd_nef ) ! if ( timon ) call timstop ( ithndl ) return end subroutine putget_char1D end module putget_mod
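Every read branch in the module above sizes its transfer buffer the same way — the element's byte size times the product of its dimension extents — and the 2D variants (putget_int2D, putget_real2D) copy the array into a 1D work buffer in Fortran (column-major) order before calling putelt. A minimal Python sketch of both calculations (illustrative only; the function names are hypothetical and not part of the NEFIS API):

```python
def buffer_length(elt_bytes, elt_dims):
    """Buffer size in bytes. elt_dims[0] holds the number of dimensions
    and elt_dims[1:] the extents, mirroring the elt_dims(:, j) layout
    used in the putget_* read branches."""
    buflen = elt_bytes
    for i in range(elt_dims[0]):
        buflen *= elt_dims[1 + i]
    return buflen

def flatten_column_major(a):
    """2D -> 1D copy in Fortran (column-major) order, as the explicit
    double loop in putget_int2D / putget_real2D does."""
    rows, cols = len(a), len(a[0])
    return [a[i][j] for j in range(cols) for i in range(rows)]

print(buffer_length(4, [2, 3, 5]))             # 4 bytes * 3 * 5 = 60
print(flatten_column_major([[1, 2], [3, 4]]))  # columns first: [1, 3, 2, 4]
```

The explicit copy loop in the Fortran version exists only to avoid a stack-allocated temporary from RESHAPE; the element order it produces is identical.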
function res = checkfisx(fis)
%CHECKFISX Checks the fuzzy inference system properties for legal values.
%
%   See checkfis for syntax and explanation.
%
%   It is completely based on MATLAB's checkfis, with some modifications
%   so that it is compatible with the extended fuzzy rule structure.
%
%   Compare also the code of this function with the code of the
%   original checkfis.

%   By Konstantin A. Sidelnikov, 2009.

for i = 1 : getfisx(fis, 'numrules')
    if isempty(fis.rule(i).weight)
        error('weight of rule %d is empty.', i);
    end
    if isempty(fis.rule(i).antecedent)
        error('antecedent of rule %d is empty.', i);
    end
    if isempty(fis.rule(i).consequent)
        error('consequent of rule %d is empty.', i);
    end
    if isempty(fis.rule(i).connection)
        error('connection of rule %d is empty.', i);
    end
end

for i = 1 : getfisx(fis, 'numinputs')
    if isempty(fis.input(i).name)
        error('name of input %d is empty.', i);
    end
    if isempty(fis.input(i).range)
        error('range of input %d is empty.', i);
    end
end

for i = 1 : getfisx(fis, 'numoutputs')
    if isempty(fis.output(i).name)
        error('name of output %d is empty.', i);
    end
    if isempty(fis.output(i).range)
        error('range of output %d is empty.', i);
    end
end

res = 1;
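The pattern in checkfisx — walk each substructure and fail fast on the first empty required field — generalizes beyond FIS objects. A hedged Python sketch of the same validation loop (the dict layout here is an assumption for illustration, not the actual FIS structure):

```python
def check_required(records, kind, fields):
    """Raise ValueError on the first record whose required field is
    empty or missing; return True if every record passes."""
    for i, rec in enumerate(records, start=1):
        for f in fields:
            if not rec.get(f):
                raise ValueError(f"{f} of {kind} {i} is empty.")
    return True

rules = [{"weight": 1.0, "antecedent": [1], "consequent": [2], "connection": 1}]
print(check_required(rules, "rule",
                     ["weight", "antecedent", "consequent", "connection"]))
```

As in the MATLAB original, reporting the 1-based index of the offending record makes the error actionable without a debugger.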
function spm_dem_search_trajectory(DEM)
% plots visual search in extrinsic and intrinsic coordinates
% FORMAT spm_dem_search_trajectory(DEM)
%
% DEM - {DEM} structures from visual search simulations
%
% hidden causes and states
%==========================================================================
% x    - hidden states:
%   o(1) - oculomotor angle
%   o(2) - oculomotor angle
%   x(1) - relative amplitude of visual hypothesis 1
%   x(2) - relative amplitude of visual hypothesis 2
%   x(3) - ...
%
% v    - hidden causes
%
% g    - sensations:
%   g(1) - oculomotor angle (proprioception - x)
%   g(2) - oculomotor angle (proprioception - y)
%   g(3) - retinal input - channel 1
%   g(4) - retinal input - channel 2
%   g(5) - ...
%__________________________________________________________________________
% Copyright (C) 2008 Wellcome Trust Centre for Neuroimaging

% Karl Friston
% $Id: spm_dem_search_trajectory.m 4595 2011-12-19 13:06:22Z karl $

% Preliminaries
%--------------------------------------------------------------------------
clf, global STIM
N  = length(DEM);
S  = spm_read_vols(STIM.U);

% Stimulus
%==========================================================================
Dx = STIM.U.dim(1)/2;
Dy = STIM.U.dim(2)/2;
a  = [];
q  = [];
c  = [];

subplot(2,2,1); hold off
image((S + 1)*32), axis image, hold on

for i = 1:N

    % i-th saccade - position
    %----------------------------------------------------------------------
    pU = DEM{i}.pU.x{1}(1:2,:)*16;
    T  = length(pU);

    % eye movements in extrinsic coordinates
    %======================================================================
    subplot(2,2,1)
    plot(pU(2,:) + Dy,pU(1,:) + Dx,'r-','LineWidth',2)
    plot(pU(2,T) + Dy,pU(1,T) + Dx,'ro','MarkerSize',16)

    % Free energy
    %======================================================================
    F(i) = DEM{i}.F;

end

% Free energy
%--------------------------------------------------------------------------
subplot(2,2,2)
plot(F)
title('Free-energy','FontSize',16)
xlabel('saccade')
axis square
function colors = custom_colors(startrgb, endrgb, k)
% Define cell array of custom colors in a specific range of the spectrum,
% to be used in visualization.
%
% :Usage:
% ::
%
%    colors = custom_colors(startrgb, endrgb, k [length])
%
% ..
%     Author and copyright information:
%
%     Copyright (C) 2016 Tor Wager
%
%     This program is free software: you can redistribute it and/or modify
%     it under the terms of the GNU General Public License as published by
%     the Free Software Foundation, either version 3 of the License, or
%     (at your option) any later version.
%
%     This program is distributed in the hope that it will be useful,
%     but WITHOUT ANY WARRANTY; without even the implied warranty of
%     MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
%     GNU General Public License for more details.
%
%     You should have received a copy of the GNU General Public License
%     along with this program.  If not, see <http://www.gnu.org/licenses/>.
% ..
%
% :Inputs:
%
%   **startrgb**
%        3-color RGB triplet, values 0-1, for starting color in range
%
%   **endrgb**
%        3-color RGB triplet, values 0-1, for ending color in range
%
%   **k**
%        how many colors you want, from 1 - 64. Colors will be sampled
%        evenly along the range you specify
%
% :Outputs:
%
%   **colors**
%        Cell array of k colors
%
% :Examples:
% ::
%
%    colors = custom_colors([1 0 0], [0 0 1], 5);  % five colors from red to blue
%
% :See also:
%   colormap_tor, bucknerlab_colors, scn_standard_colors

% define 64-color map in range you specify
% ---------------------------------------------
cm = colormap_tor(startrgb, endrgb);            % custom colormap

% turn it into a cell array and sample only the number desired
% ---------------------------------------------
whcolors = round(linspace(1, size(cm, 1), k));

colors = mat2cell(cm(whcolors, :), ones(k, 1));

end % function
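custom_colors relies on colormap_tor to build a gradient and then samples it evenly with linspace. The same effect — k colors linearly interpolated between two RGB endpoints — can be sketched directly in Python (an illustration of the sampling idea, not the colormap_tor implementation):

```python
def interp_colors(start_rgb, end_rgb, k):
    """Return k RGB triplets evenly spaced from start_rgb to end_rgb."""
    if k == 1:
        return [list(start_rgb)]
    colors = []
    for step in range(k):
        t = step / (k - 1)  # 0 at the start color, 1 at the end color
        colors.append([s + t * (e - s) for s, e in zip(start_rgb, end_rgb)])
    return colors

print(interp_colors([1, 0, 0], [0, 0, 1], 3))  # red, midpoint, blue
```

Sampling a precomputed 64-step map (as the MATLAB version does) and interpolating directly give the same colors when the underlying gradient is linear; the precomputed map simply caps the resolution at 64 steps.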
[STATEMENT] lemma (in Corps) ring_n_distinct_prime_divisors:"distinct_pds K n P \<Longrightarrow> Ring (Sr K {x. x\<in>carrier K \<and> (\<forall>j\<le> n. 0 \<le> ((\<nu>\<^bsub> K (P j)\<^esub>) x))})" [PROOF STATE] proof (prove) goal (1 subgoal): 1. distinct_pds K n P \<Longrightarrow> Ring (Sr K {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x}) [PROOF STEP] apply (simp add:distinct_pds_def) [PROOF STATE] proof (prove) goal (1 subgoal): 1. (\<forall>j\<le>n. P j \<in> Pds) \<and> (\<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m) \<Longrightarrow> Ring (Sr K {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x}) [PROOF STEP] apply (erule conjE) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m\<rbrakk> \<Longrightarrow> Ring (Sr K {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x}) [PROOF STEP] apply (cut_tac field_is_ring) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K\<rbrakk> \<Longrightarrow> Ring (Sr K {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x}) [PROOF STEP] apply (rule Ring.Sr_ring, assumption+) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K\<rbrakk> \<Longrightarrow> sr K {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x} [PROOF STEP] apply (subst sr_def) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K\<rbrakk> \<Longrightarrow> {x \<in> carrier K. 
\<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x} \<subseteq> carrier K \<and> 1\<^sub>r \<in> {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x} \<and> (\<forall>x\<in>{x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x}. \<forall>y\<in>{x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x}. x \<plusminus> -\<^sub>a y \<in> {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x} \<and> x \<cdot>\<^sub>r y \<in> {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x}) [PROOF STEP] apply (rule conjI) [PROOF STATE] proof (prove) goal (2 subgoals): 1. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K\<rbrakk> \<Longrightarrow> {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x} \<subseteq> carrier K 2. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K\<rbrakk> \<Longrightarrow> 1\<^sub>r \<in> {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x} \<and> (\<forall>x\<in>{x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x}. \<forall>y\<in>{x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x}. x \<plusminus> -\<^sub>a y \<in> {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x} \<and> x \<cdot>\<^sub>r y \<in> {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x}) [PROOF STEP] apply (rule subsetI) [PROOF STATE] proof (prove) goal (2 subgoals): 1. \<And>x. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K; x \<in> {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x}\<rbrakk> \<Longrightarrow> x \<in> carrier K 2. 
\<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K\<rbrakk> \<Longrightarrow> 1\<^sub>r \<in> {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x} \<and> (\<forall>x\<in>{x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x}. \<forall>y\<in>{x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x}. x \<plusminus> -\<^sub>a y \<in> {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x} \<and> x \<cdot>\<^sub>r y \<in> {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x}) [PROOF STEP] apply simp [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K\<rbrakk> \<Longrightarrow> 1\<^sub>r \<in> {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x} \<and> (\<forall>x\<in>{x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x}. \<forall>y\<in>{x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x}. x \<plusminus> -\<^sub>a y \<in> {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x} \<and> x \<cdot>\<^sub>r y \<in> {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x}) [PROOF STEP] apply (rule conjI) [PROOF STATE] proof (prove) goal (2 subgoals): 1. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K\<rbrakk> \<Longrightarrow> 1\<^sub>r \<in> {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x} 2. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K\<rbrakk> \<Longrightarrow> \<forall>x\<in>{x \<in> carrier K. \<forall>j\<le>n. 
0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x}. \<forall>y\<in>{x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x}. x \<plusminus> -\<^sub>a y \<in> {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x} \<and> x \<cdot>\<^sub>r y \<in> {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x} [PROOF STEP] apply (simp add:Ring.ring_one) [PROOF STATE] proof (prove) goal (2 subgoals): 1. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K\<rbrakk> \<Longrightarrow> \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) 1\<^sub>r 2. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K\<rbrakk> \<Longrightarrow> \<forall>x\<in>{x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x}. \<forall>y\<in>{x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x}. x \<plusminus> -\<^sub>a y \<in> {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x} \<and> x \<cdot>\<^sub>r y \<in> {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x} [PROOF STEP] apply (rule allI, rule impI) [PROOF STATE] proof (prove) goal (2 subgoals): 1. \<And>j. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K; j \<le> n\<rbrakk> \<Longrightarrow> 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) 1\<^sub>r 2. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K\<rbrakk> \<Longrightarrow> \<forall>x\<in>{x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x}. \<forall>y\<in>{x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x}. x \<plusminus> -\<^sub>a y \<in> {x \<in> carrier K. \<forall>j\<le>n. 
0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x} \<and> x \<cdot>\<^sub>r y \<in> {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x} [PROOF STEP] apply (cut_tac P = "P j" in representative_of_pd_valuation, simp, simp add:value_of_one) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K\<rbrakk> \<Longrightarrow> \<forall>x\<in>{x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x}. \<forall>y\<in>{x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x}. x \<plusminus> -\<^sub>a y \<in> {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x} \<and> x \<cdot>\<^sub>r y \<in> {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x} [PROOF STEP] apply (rule ballI)+ [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>x y. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K; x \<in> {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x}; y \<in> {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x}\<rbrakk> \<Longrightarrow> x \<plusminus> -\<^sub>a y \<in> {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x} \<and> x \<cdot>\<^sub>r y \<in> {x \<in> carrier K. \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x} [PROOF STEP] apply simp [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>x y. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K; x \<in> carrier K \<and> (\<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x); y \<in> carrier K \<and> (\<forall>j\<le>n. 
0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y)\<rbrakk> \<Longrightarrow> x \<plusminus> -\<^sub>a y \<in> carrier K \<and> (\<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<plusminus> -\<^sub>a y)) \<and> x \<cdot>\<^sub>r y \<in> carrier K \<and> (\<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<cdot>\<^sub>r y)) [PROOF STEP] apply (frule Ring.ring_is_ag[of "K"]) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>x y. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K; x \<in> carrier K \<and> (\<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x); y \<in> carrier K \<and> (\<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y); aGroup K\<rbrakk> \<Longrightarrow> x \<plusminus> -\<^sub>a y \<in> carrier K \<and> (\<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<plusminus> -\<^sub>a y)) \<and> x \<cdot>\<^sub>r y \<in> carrier K \<and> (\<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<cdot>\<^sub>r y)) [PROOF STEP] apply (erule conjE)+ [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>x y. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K; aGroup K; x \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x; y \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y\<rbrakk> \<Longrightarrow> x \<plusminus> -\<^sub>a y \<in> carrier K \<and> (\<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<plusminus> -\<^sub>a y)) \<and> x \<cdot>\<^sub>r y \<in> carrier K \<and> (\<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<cdot>\<^sub>r y)) [PROOF STEP] apply (frule_tac x = y in aGroup.ag_mOp_closed[of "K"], assumption+) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>x y. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. 
l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K; aGroup K; x \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x; y \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y; -\<^sub>a y \<in> carrier K\<rbrakk> \<Longrightarrow> x \<plusminus> -\<^sub>a y \<in> carrier K \<and> (\<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<plusminus> -\<^sub>a y)) \<and> x \<cdot>\<^sub>r y \<in> carrier K \<and> (\<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<cdot>\<^sub>r y)) [PROOF STEP] apply (frule_tac x = x and y = "-\<^sub>a y" in aGroup.ag_pOp_closed[of "K"], assumption+) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>x y. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K; aGroup K; x \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x; y \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y; -\<^sub>a y \<in> carrier K; x \<plusminus> -\<^sub>a y \<in> carrier K\<rbrakk> \<Longrightarrow> x \<plusminus> -\<^sub>a y \<in> carrier K \<and> (\<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<plusminus> -\<^sub>a y)) \<and> x \<cdot>\<^sub>r y \<in> carrier K \<and> (\<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<cdot>\<^sub>r y)) [PROOF STEP] apply simp [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>x y. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K; aGroup K; x \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x; y \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y; -\<^sub>a y \<in> carrier K; x \<plusminus> -\<^sub>a y \<in> carrier K\<rbrakk> \<Longrightarrow> (\<forall>j\<le>n. 
0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<plusminus> -\<^sub>a y)) \<and> x \<cdot>\<^sub>r y \<in> carrier K \<and> (\<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<cdot>\<^sub>r y)) [PROOF STEP] apply (rule conjI) [PROOF STATE] proof (prove) goal (2 subgoals): 1. \<And>x y. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K; aGroup K; x \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x; y \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y; -\<^sub>a y \<in> carrier K; x \<plusminus> -\<^sub>a y \<in> carrier K\<rbrakk> \<Longrightarrow> \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<plusminus> -\<^sub>a y) 2. \<And>x y. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K; aGroup K; x \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x; y \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y; -\<^sub>a y \<in> carrier K; x \<plusminus> -\<^sub>a y \<in> carrier K\<rbrakk> \<Longrightarrow> x \<cdot>\<^sub>r y \<in> carrier K \<and> (\<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<cdot>\<^sub>r y)) [PROOF STEP] apply (rule allI, rule impI) [PROOF STATE] proof (prove) goal (2 subgoals): 1. \<And>x y j. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K; aGroup K; x \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x; y \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y; -\<^sub>a y \<in> carrier K; x \<plusminus> -\<^sub>a y \<in> carrier K; j \<le> n\<rbrakk> \<Longrightarrow> 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<plusminus> -\<^sub>a y) 2. \<And>x y. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. 
l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K; aGroup K; x \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x; y \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y; -\<^sub>a y \<in> carrier K; x \<plusminus> -\<^sub>a y \<in> carrier K\<rbrakk> \<Longrightarrow> x \<cdot>\<^sub>r y \<in> carrier K \<and> (\<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<cdot>\<^sub>r y)) [PROOF STEP] apply (rotate_tac -4, frule_tac a = j in forall_spec, assumption, rotate_tac -3, drule_tac a = j in forall_spec, assumption) [PROOF STATE] proof (prove) goal (2 subgoals): 1. \<And>x y j. \<lbrakk>y \<in> carrier K; 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y; -\<^sub>a y \<in> carrier K; x \<plusminus> -\<^sub>a y \<in> carrier K; j \<le> n; \<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K; aGroup K; x \<in> carrier K; 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x\<rbrakk> \<Longrightarrow> 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<plusminus> -\<^sub>a y) 2. \<And>x y. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K; aGroup K; x \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x; y \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y; -\<^sub>a y \<in> carrier K; x \<plusminus> -\<^sub>a y \<in> carrier K\<rbrakk> \<Longrightarrow> x \<cdot>\<^sub>r y \<in> carrier K \<and> (\<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<cdot>\<^sub>r y)) [PROOF STEP] apply (cut_tac P = "P j" in representative_of_pd_valuation, simp) [PROOF STATE] proof (prove) goal (2 subgoals): 1. \<And>x y j. \<lbrakk>y \<in> carrier K; 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y; \<forall>j\<le>n. 
0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y; -\<^sub>a y \<in> carrier K; x \<plusminus> -\<^sub>a y \<in> carrier K; j \<le> n; \<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K; aGroup K; x \<in> carrier K; 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x; valuation K (\<nu>\<^bsub>K P j\<^esub>)\<rbrakk> \<Longrightarrow> 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<plusminus> -\<^sub>a y) 2. \<And>x y. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K; aGroup K; x \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x; y \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y; -\<^sub>a y \<in> carrier K; x \<plusminus> -\<^sub>a y \<in> carrier K\<rbrakk> \<Longrightarrow> x \<cdot>\<^sub>r y \<in> carrier K \<and> (\<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<cdot>\<^sub>r y)) [PROOF STEP] apply (frule_tac v = "\<nu>\<^bsub>K (P j)\<^esub>" and x = x and y = "-\<^sub>a y" in amin_le_plus, assumption+) [PROOF STATE] proof (prove) goal (2 subgoals): 1. \<And>x y j. \<lbrakk>y \<in> carrier K; 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y; -\<^sub>a y \<in> carrier K; x \<plusminus> -\<^sub>a y \<in> carrier K; j \<le> n; \<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K; aGroup K; x \<in> carrier K; 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x; valuation K (\<nu>\<^bsub>K P j\<^esub>); amin ((\<nu>\<^bsub>K P j\<^esub>) x) ((\<nu>\<^bsub>K P j\<^esub>) (-\<^sub>a y)) \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<plusminus> -\<^sub>a y)\<rbrakk> \<Longrightarrow> 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<plusminus> -\<^sub>a y) 2. \<And>x y. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. 
l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K; aGroup K; x \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x; y \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y; -\<^sub>a y \<in> carrier K; x \<plusminus> -\<^sub>a y \<in> carrier K\<rbrakk> \<Longrightarrow> x \<cdot>\<^sub>r y \<in> carrier K \<and> (\<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<cdot>\<^sub>r y)) [PROOF STEP] apply (simp add:val_minus_eq) [PROOF STATE] proof (prove) goal (2 subgoals): 1. \<And>x y j. \<lbrakk>y \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y; -\<^sub>a y \<in> carrier K; x \<plusminus> -\<^sub>a y \<in> carrier K; j \<le> n; \<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K; aGroup K; x \<in> carrier K; 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x; valuation K (\<nu>\<^bsub>K P j\<^esub>); amin ((\<nu>\<^bsub>K P j\<^esub>) x) ((\<nu>\<^bsub>K P j\<^esub>) y) \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<plusminus> -\<^sub>a y)\<rbrakk> \<Longrightarrow> 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<plusminus> -\<^sub>a y) 2. \<And>x y. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K; aGroup K; x \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x; y \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y; -\<^sub>a y \<in> carrier K; x \<plusminus> -\<^sub>a y \<in> carrier K\<rbrakk> \<Longrightarrow> x \<cdot>\<^sub>r y \<in> carrier K \<and> (\<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<cdot>\<^sub>r y)) [PROOF STEP] apply (frule_tac x = "(\<nu>\<^bsub>K (P j)\<^esub>) x" and y = "(\<nu>\<^bsub>K (P j)\<^esub>) y" in amin_ge1[of "0"]) [PROOF STATE] proof (prove) goal (3 subgoals): 1. \<And>x y j. \<lbrakk>y \<in> carrier K; \<forall>j\<le>n. 
0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y; -\<^sub>a y \<in> carrier K; x \<plusminus> -\<^sub>a y \<in> carrier K; j \<le> n; \<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K; aGroup K; x \<in> carrier K; 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x; valuation K (\<nu>\<^bsub>K P j\<^esub>); amin ((\<nu>\<^bsub>K P j\<^esub>) x) ((\<nu>\<^bsub>K P j\<^esub>) y) \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<plusminus> -\<^sub>a y)\<rbrakk> \<Longrightarrow> 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y 2. \<And>x y j. \<lbrakk>y \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y; -\<^sub>a y \<in> carrier K; x \<plusminus> -\<^sub>a y \<in> carrier K; j \<le> n; \<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K; aGroup K; x \<in> carrier K; 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x; valuation K (\<nu>\<^bsub>K P j\<^esub>); amin ((\<nu>\<^bsub>K P j\<^esub>) x) ((\<nu>\<^bsub>K P j\<^esub>) y) \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<plusminus> -\<^sub>a y); 0 \<le> amin ((\<nu>\<^bsub>K P j\<^esub>) x) ((\<nu>\<^bsub>K P j\<^esub>) y)\<rbrakk> \<Longrightarrow> 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<plusminus> -\<^sub>a y) 3. \<And>x y. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K; aGroup K; x \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x; y \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y; -\<^sub>a y \<in> carrier K; x \<plusminus> -\<^sub>a y \<in> carrier K\<rbrakk> \<Longrightarrow> x \<cdot>\<^sub>r y \<in> carrier K \<and> (\<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<cdot>\<^sub>r y)) [PROOF STEP] apply simp [PROOF STATE] proof (prove) goal (2 subgoals): 1. \<And>x y j. \<lbrakk>y \<in> carrier K; \<forall>j\<le>n. 
0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y; -\<^sub>a y \<in> carrier K; x \<plusminus> -\<^sub>a y \<in> carrier K; j \<le> n; \<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K; aGroup K; x \<in> carrier K; 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x; valuation K (\<nu>\<^bsub>K P j\<^esub>); amin ((\<nu>\<^bsub>K P j\<^esub>) x) ((\<nu>\<^bsub>K P j\<^esub>) y) \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<plusminus> -\<^sub>a y); 0 \<le> amin ((\<nu>\<^bsub>K P j\<^esub>) x) ((\<nu>\<^bsub>K P j\<^esub>) y)\<rbrakk> \<Longrightarrow> 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<plusminus> -\<^sub>a y) 2. \<And>x y. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K; aGroup K; x \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x; y \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y; -\<^sub>a y \<in> carrier K; x \<plusminus> -\<^sub>a y \<in> carrier K\<rbrakk> \<Longrightarrow> x \<cdot>\<^sub>r y \<in> carrier K \<and> (\<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<cdot>\<^sub>r y)) [PROOF STEP] apply (rule_tac j = "amin ((\<nu>\<^bsub>K (P j)\<^esub>) x) ((\<nu>\<^bsub>K (P j)\<^esub>) y)" and k = "(\<nu>\<^bsub>K (P j)\<^esub>) (x \<plusminus> -\<^sub>a y)" in ale_trans[of "0"], assumption+) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>x y. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K; aGroup K; x \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x; y \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y; -\<^sub>a y \<in> carrier K; x \<plusminus> -\<^sub>a y \<in> carrier K\<rbrakk> \<Longrightarrow> x \<cdot>\<^sub>r y \<in> carrier K \<and> (\<forall>j\<le>n. 
0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<cdot>\<^sub>r y)) [PROOF STEP] apply (simp add:Ring.ring_tOp_closed) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>x y. \<lbrakk>\<forall>j\<le>n. P j \<in> Pds; \<forall>l\<le>n. \<forall>m\<le>n. l \<noteq> m \<longrightarrow> P l \<noteq> P m; Ring K; aGroup K; x \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) x; y \<in> carrier K; \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) y; -\<^sub>a y \<in> carrier K; x \<plusminus> -\<^sub>a y \<in> carrier K\<rbrakk> \<Longrightarrow> \<forall>j\<le>n. 0 \<le> (\<nu>\<^bsub>K P j\<^esub>) (x \<cdot>\<^sub>r y) [PROOF STEP] apply (rule allI, rule impI, cut_tac P = "P j" in representative_of_pd_valuation, simp, subst val_t2p [where v="\<nu>\<^bsub>K P j\<^esub>"], assumption+, rule aadd_two_pos, simp+) [PROOF STATE] proof (prove) goal: No subgoals! [PROOF STEP] done
[STATEMENT] lemma fv_Cons[simp]: "fv (x # xs) = fv x \<union> fv xs" [PROOF STATE] proof (prove) goal (1 subgoal): 1. fv (x # xs) = fv x \<union> fv xs [PROOF STEP] by (auto simp add: fv_def supp_Cons)
[STATEMENT] lemma undefg_equiv: "(\<alpha>\<noteq>undefg) = (\<exists>g. \<alpha>=Agame g)" [PROOF STATE] proof (prove) goal (1 subgoal): 1. (\<alpha> \<noteq> undefg) = (\<exists>g. \<alpha> = Agame g) [PROOF STEP] by simp
[STATEMENT] lemma invar_delete: "invar t \<Longrightarrow> invar (delete xs t)" [PROOF STATE] proof (prove) goal (1 subgoal): 1. invar t \<Longrightarrow> invar (Trie_Map.delete xs t) [PROOF STEP] apply(induction xs t rule: delete.induct) [PROOF STATE] proof (prove) goal (2 subgoals): 1. \<And>b m. invar (trie_map.Nd b m) \<Longrightarrow> invar (Trie_Map.delete [] (trie_map.Nd b m)) 2. \<And>x xs b m. \<lbrakk>\<And>x2. \<lbrakk>lookup m x = Some x2; invar x2\<rbrakk> \<Longrightarrow> invar (Trie_Map.delete xs x2); invar (trie_map.Nd b m)\<rbrakk> \<Longrightarrow> invar (Trie_Map.delete (x # xs) (trie_map.Nd b m)) [PROOF STEP] apply(auto simp: M.map_specs split: option.split) [PROOF STATE] proof (prove) goal: No subgoals! [PROOF STEP] done
[STATEMENT] lemma "(a::int) * b + a * c = a * (b + c)" [PROOF STATE] proof (prove) goal (1 subgoal): 1. a * b + a * c = a * (b + c) [PROOF STEP] by ring
function ShotProfileLSEWEM(m::Array{AbstractString,1},m0::Array{AbstractString,1},d::Array{AbstractString,1},param=Dict())
    # least squares shot profile wave equation migration of isotropic 3C data.

    cost = get(param,"cost","cost.txt")  # cost output text file
    precon = get(param,"precon",false)   # flag for preconditioning by smoothing the image
    wd = join(["tmp_LSM_wd_",string(int(rand()*100000))])
    CalculateSampling(d[1],wd,param)
    param["wd"] = wd
    param["tmute"] = 0.
    param["vmute"] = 999999.
    if (precon == true)
        param["operators"] = [ApplyDataWeights SeisMute ShotProfileEWEM SeisSmoothGathers]
    else
        param["operators"] = [ApplyDataWeights SeisMute ShotProfileEWEM]
    end
    ConjugateGradients(m,m0,d,cost,param)
    SeisRemove(wd)
end
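`ConjugateGradients` above is handed a chain of operators and iterates toward the least-squares image; its implementation is not shown here. The core iteration such solvers perform is standard CGLS (conjugate gradients on the normal equations). A minimal NumPy sketch under that assumption — the name `cgls` and the dense-matrix stand-in for the operator chain are illustrative only, not the library's API:

```python
import numpy as np

def cgls(A, d, m0, niter=50, tol=1e-10):
    """CGLS: conjugate gradients on the normal equations A'A m = A'd,
    starting from m0 (the migration image plays this role above)."""
    m = m0.copy()
    r = d - A @ m          # data residual
    s = A.T @ r            # gradient of the least-squares objective
    p = s.copy()           # search direction
    gamma = s @ s
    if gamma < tol:        # m0 already solves the normal equations
        return m
    for _ in range(niter):
        q = A @ p
        alpha = gamma / (q @ q)
        m += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if gamma_new < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return m

# small well-conditioned system as a sanity check
A = np.array([[3.0, 1.0], [1.0, 2.0]])
x_true = np.array([1.0, 2.0])
m = cgls(A, A @ x_true, np.zeros(2))
```

In the Julia function, `A` is never a matrix: it is the composition of the listed operators (`ApplyDataWeights`, `SeisMute`, `ShotProfileEWEM`, …) applied forward and in adjoint, which is what makes the preconditioned and unpreconditioned variants just different operator lists.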
# Semi-Monocoque Theory

```python
from pint import UnitRegistry
import sympy
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
import sys
%matplotlib inline
from IPython.display import display
```

Import the **Section** class, which contains all calculations

```python
from Section import Section
```

Initialization of the **sympy** symbolic tool and **pint** for dimension analysis (not really implemented right now, as it is not directly compatible with sympy)

```python
ureg = UnitRegistry()
sympy.init_printing()
```

Define **sympy** parameters used for the geometric description of sections

```python
A, A0, t, t0, a, b, h, L = sympy.symbols('A A_0 t t_0 a b h L', positive=True)
```

We also define numerical values for each **symbol** in order to plot the scaled section and perform calculations

```python
values = [(A, 150 * ureg.millimeter**2),(A0, 250 * ureg.millimeter**2),(a, 80 * ureg.millimeter), \
          (b, 20 * ureg.millimeter),(h, 35 * ureg.millimeter),(L, 2000 * ureg.millimeter)]
datav = [(v[0],v[1].magnitude) for v in values]
```

# Triangular section

Define the graph describing the section:

1) **stringers** are **nodes** with parameters:
   - **x** coordinate
   - **y** coordinate
   - **Area**

2) **panels** are **oriented edges** with parameters:
   - **thickness**
   - **length**, which is automatically calculated

```python
stringers = {1:[(sympy.Integer(0),h),A],
             2:[(sympy.Integer(0),sympy.Integer(0)),A],
             3:[(a,sympy.Integer(0)),A]}

panels = {(1,2):t,
          (2,3):t,
          (3,1):t}
```

Define the section and perform the first calculations

```python
S1 = Section(stringers, panels)
```

```python
S1.cycles
```

## Plot of **S1** section in original reference frame

Define a dictionary of coordinates used by **Networkx** to plot the section as a directed graph. Note that arrows are actually just thicker stubs

```python
start_pos={ii: [float(S1.g.node[ii]['ip'][i].subs(datav)) for i in range(2)] for ii in S1.g.nodes() }
```

```python
plt.figure(figsize=(12,8),dpi=300)
nx.draw(S1.g,with_labels=True, arrows= True, pos=start_pos)
plt.arrow(0,0,20,0)
plt.arrow(0,0,0,20)
#plt.text(0,0, 'CG', fontsize=24)
plt.axis('equal')
plt.title("Section in starting reference Frame",fontsize=16);
```

Expression of **inertial properties** wrt the center of gravity with the original rotation

```python
S1.Ixx0, S1.Iyy0, S1.Ixy0, S1.α0
```

## Plot of **S1** section in inertial reference frame

The section is plotted wrt the **center of gravity** and rotated (if necessary) so that *x* and *y* are principal axes. The **Center of Gravity** and **Shear Center** are drawn

```python
positions={ii: [float(S1.g.node[ii]['pos'][i].subs(datav)) for i in range(2)] for ii in S1.g.nodes() }
```

```python
x_ct, y_ct = S1.ct.subs(datav)

plt.figure(figsize=(12,8),dpi=300)
nx.draw(S1.g,with_labels=True, pos=positions)
plt.plot([0],[0],'o',ms=12,label='CG')
plt.plot([x_ct],[y_ct],'^',ms=12, label='SC')
#plt.text(0,0, 'CG', fontsize=24)
#plt.text(x_ct,y_ct, 'SC', fontsize=24)
plt.legend(loc='lower right', shadow=True)
plt.axis('equal')
plt.title("Section in principal reference Frame",fontsize=16);
```

Expression of **inertial properties** in the *principal reference frame*

```python
sympy.simplify(S1.Ixx), sympy.simplify(S1.Iyy), sympy.simplify(S1.Ixy), sympy.simplify(S1.θ)
```

## **Shear center** expression

Expressions can be messy, so we evaluate them to numerical values

```python
sympy.N(S1.ct.subs(datav))
```

## Analysis of Loads

We define some symbols

```python
Tx, Ty, Nz, Mx, My, Mz, F, ry, rx, mz = sympy.symbols('T_x T_y N_z M_x M_y M_z F r_y r_x m_z')
```

```python
S1.set_loads(_Tx=0, _Ty=Ty, _Nz=0, _Mx=Mx, _My=0, _Mz=0)
#S1.compute_stringer_actions()
#S1.compute_panel_fluxes();
```

**Axial Loads**

```python
#S1.N
```

**Panel Fluxes**

```python
#S1.q
```

**Example 2**: _twisting moment_ in **z** direction

```python
S1.set_loads(_Tx=0, _Ty=0, _Nz=0, _Mx=0, _My=0, _Mz=Mz)
S1.compute_stringer_actions()
S1.compute_panel_fluxes();
```

**Axial Loads**

```python
S1.N
```

**Panel Fluxes** evaluated to numerical values

```python
{k:sympy.N(S1.q[k].subs(datav)) for k in S1.q }
```

## Torsional moment of Inertia

```python
S1.compute_Jt()
```

```python
sympy.N(S1.Jt.subs(datav))
```
#!/usr/bin/env python
# coding: utf-8

# In[3]:

import urllib.request
import xarray as xr
import numpy as np
from datetime import datetime, date, time, timedelta
import urllib
import requests
import json
import smtplib

# In[4]:

#not using this right now but consider putting instance here
def get_key(file_name):
    myvars = {}
    with open(file_name) as myfile:
        for line in myfile:
            name, var = line.partition("=")[::2]
            myvars[name.strip()] = str(var).rstrip()
    return myvars

file_key = "C:/Users/gentemann/Google Drive/f_drive/secret_keys/saildrone.txt"
saildrone_key = get_key(file_key)
file_key = "C:/Users/gentemann/Google Drive/f_drive/secret_keys/gmail_login.txt"
email_key = get_key(file_key)

# ## Use restful API to get USV locations

endtime = datetime.today().strftime('%Y-%m-%d')
starttime = (datetime.today() + timedelta(days=-5)).strftime('%Y-%m-%d')
#all_usv = ['1041','1033','1034','1035','1036','1037']
all_usv = ['1034','1035','1036','1037']

#get token
payload={'key': saildrone_key['key'], 'secret':saildrone_key['secret']}
headers={'Content-Type':'application/json', 'Accept':'application/json'}
url = 'https://developer-mission.saildrone.com/v1/auth'
res = requests.post(url, json=payload, headers=headers)
json_data = json.loads(res.text)

names=[]
inum_usv = len(all_usv)
ilen = 500 #len(usv_data['data'])
usv_lats = np.empty((ilen,inum_usv))*np.nan
usv_lons = np.empty((ilen,inum_usv))*np.nan
usv_time = np.empty((ilen,inum_usv))*np.nan
for iusv in range(inum_usv):
    str_usv = all_usv[iusv]
    url = 'https://developer-mission.saildrone.com/v1/timeseries/'+str_usv+'?data_set=vehicle&interval=5&start_date='+starttime+'&end_date='+endtime+'&order_by=desc&limit=500&offset=0'
    payload = {}
    headers = {'Accept':'application/json','authorization':json_data['token']}
    res = requests.get(url, json=payload, headers=headers)
    usv_data = json.loads(res.text)
    #print(usv_data.data)
    for i in range(ilen):
        usv_lons[i,iusv]=usv_data['data'][i]['gps_lng']
        usv_lats[i,iusv]=usv_data['data'][i]['gps_lat']
        usv_time[i,iusv]=usv_data['data'][i]['gps_time']
    names.append(str_usv)

xlons = xr.DataArray(usv_lons,coords={'time':usv_time[:,0],'trajectory':names},dims=('time','trajectory'))
xlats = xr.DataArray(usv_lats,coords={'time':usv_time[:,0],'trajectory':names},dims=('time','trajectory'))
ds_usv = xr.Dataset({'lon': xlons,'lat':xlats})

# In[5]:

msg_body=[]
for i in range(1):
    for j in range(inum_usv):
        dt = datetime.fromtimestamp(ds_usv.time[i].data)
        s = dt.strftime('%Y-%m-%d %H:%M:%S')
        msg = all_usv[j]+" "+s+" lon :{0:5.2f}, lat :{1:5.2f} ".format(ds_usv.lon[i,j].data,ds_usv.lat[i,j].data)
        msg_body.append(msg)

# In[6]:

sent_from = email_key['key']
to = ['cgentemann@gmail.com', 'andy.chiodi@noaa.gov']
subject = 'Daily Saildrone Position Update'
body = "\n".join(msg_body)  # join the per-drone lines so the body is a string, not a list repr

email_text = """\
From: %s
To: %s
Subject: %s

%s
""" % (sent_from, ", ".join(to), subject, body)

try:
    server = smtplib.SMTP_SSL('smtp.gmail.com', 465)
    server.ehlo()
    server.login(email_key['key'], email_key['secret'])
    server.sendmail(sent_from, to, email_text)
    server.close()
    print('Email sent!')
except:
    print('Something went wrong...')

# In[ ]:
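The per-drone status lines in the email body are built with a fixed-width format string. A minimal, self-contained sketch of just that formatting step — the helper name `format_position` and the sample coordinates are made up for illustration, not real drone data:

```python
from datetime import datetime

def format_position(usv_id, ts, lon, lat):
    """Format one drone's latest GPS fix the way the email body does:
    id, local timestamp, then lon/lat with two decimal places."""
    s = datetime.fromtimestamp(ts).strftime('%Y-%m-%d %H:%M:%S')
    return usv_id + " " + s + " lon :{0:5.2f}, lat :{1:5.2f} ".format(lon, lat)

# hypothetical fix: drone 1034 in the eastern Pacific
line = format_position('1034', 0, -150.1234, 37.5678)
```

Joining a list of such lines with `"\n".join(...)` is what keeps the email body readable; interpolating the list directly into the `%s` template would print Python's list repr instead.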
""" pyqtgraphPlotHelpers.py Routines to help use pyqtgraph and make cleaner plots as well as get plots read for publication. Intially copied from PlotHelpers.py for matplotlib. Modified to allow us to use a list of axes, and operate on all of those, or to use just one axis if that's all that is passed. Therefore, the first argument to these calls can either be a pyqtgraph axis object, or a list of axes objects. 2/10/2012 pbm. Created by Paul Manis on 2010-03-09. Copyright 2010-2019 Paul Manis Distributed under MIT/X11 license. See license.txt for more infofmation. """ from __future__ import print_function from __future__ import absolute_import import string stdFont = 'Arial' import scipy.stats import numpy as np import pyqtgraph as pg from PyQt5 import QtGui from . import talbotetalticks as ticks # logical tick formatting... """ Basic functions: """ def nice_plot(plotlist, spines=['left', 'bottom'], position=10, direction='inward', axesoff = False): """ Adjust a plot so that it looks nicer than the default matplotlib plot Also allow quick access to things we like to do for publication plots, including: using a calbar instead of an axes: calbar = [x0, y0, xs, ys] inserting a reference line (grey, 3pt dashed, 0.5pt, at refline = y position) Parameters ---------- plotlist : list a plot handle or list of plot handles to which the "niceplot" will be applied spines : list a list of which axes should have spines. 
        Not relevant for pyqtgraph.
    position : int
        not relevant for pyqtgraph
    direction : string
        need to implement for pyqtgraph
    axesoff : boolean
        flag that forces plots to turn axes off

    Returns
    -------
    Nothing
    """
    if not isinstance(plotlist, list):
        plotlist = [plotlist]
    for pl in plotlist:
        if axesoff is True:
            pl.hideAxis('bottom')
            pl.hideAxis('left')


def noaxes(plotlist, whichaxes='xy'):
    """
    Take away all the axis ticks and the lines.

    Parameters
    ----------
    plotlist : list
        list of plot handles
    whichaxes : string
        string describing which axes to remove: 'x', 'y', or 'xy' for both
    """
    if not isinstance(plotlist, list):
        plotlist = [plotlist]
    for pl in plotlist:
        if 'x' in whichaxes:
            pl.hideAxis('bottom')
        if 'y' in whichaxes:
            pl.hideAxis('left')


def setY(ax1, ax2):
    """
    Set the Y axis of all the plots in ax2 to be like ax1.

    Parameters
    ----------
    ax1 : pyqtgraph plot instance
    ax2 : list
        list of target plots that will have the axes properties of ax1
    """
    if type(ax1) is list:
        print('PlotHelpers: cannot use list as source to set Y axis')
        return
    if type(ax2) is not list:
        ax2 = [ax2]
    y = ax1.getAxis('left')
    refy = y.range  # get the current range
    for ax in ax2:
        ax.setRange(yRange=refy)


def setX(ax1, ax2):
    """
    Set the X axis of all the plots in ax2 to be like ax1.

    Parameters
    ----------
    ax1 : pyqtgraph plot instance
    ax2 : list
        list of target plots that will have the axes properties of ax1
    """
    if type(ax1) is list:
        print('PlotHelpers: cannot use list as source to set X axis')
        return
    if type(ax2) is not list:
        ax2 = [ax2]
    x = ax1.getAxis('bottom')
    refx = x.range
    for ax in ax2:
        ax.setRange(xRange=refx)


def labelPanels(axl, axlist=None, font='Arial', fontsize=18, weight='normal'):
    """
    Label the panels like a specific panel.

    Parameters
    ----------
    axl : dict or list
    axlist : list, optional
        list of labels to use for the axes, defaults to None
    font : str, optional
        Font to use for the labels, defaults to Arial
    fontsize : int, optional
        Font size in points for the labels, defaults to 18
    weight : str, optional
        Font weight to use, defaults to 'normal'
    """
    if type(axl) is dict:
        axt = [axl[x] for x in axl]
        axlist = list(axl.keys())
        axl = axt
    if type(axl) is not list:
        axl = [axl]
    if axlist is None:
        # assume we wish to go in sequence
        axlist = string.ascii_uppercase[:len(axl)]
    for i, ax in enumerate(axl):
        labelText = pg.TextItem(axlist[i])
        y = ax.getAxis('left').range
        x = ax.getAxis('bottom').range
        ax.addItem(labelText)
        labelText.setPos(x[0], y[1])


def listAxes(axd):
    """
    Make a list of the axes from the dictionary of axes.

    Parameters
    ----------
    axd : dict
        a dict of axes, whose values are returned in a list

    Returns
    -------
    list : a list of the axes
    """
    if type(axd) is not dict:
        if type(axd) is list:
            return axd
        else:
            print('listAxes expects dictionary or list; type not known (fix the code)')
            raise
    axl = [axd[x] for x in axd]
    return axl


def cleanAxes(axl):
    """
    """
    if type(axl) is not list:
        axl = [axl]
    # does nothing at the moment, as axes are already "clean"
    # for ax in axl:
    #     update_font(ax)


def formatTicks(axl, axis='xy', fmt='%d', font='Arial'):
    """
    Convert tick labels to integers.
    To do just one axis, set axis = 'x' or 'y'.
    Control the format with the formatting string.
    """
    if type(axl) is not list:
        axl = [axl]


def autoFormatTicks(axl, axis='xy', font='Arial'):
    if type(axl) is not list:
        axl = [axl]
    for ax in axl:
        if 'x' in axis:
            b = ax.getAxis('bottom')
            x0 = b.range
            # setFormatter(ax, x0, x1, axis='x')
        if 'y' in axis:
            l = ax.getAxis('left')
            y0 = l.range
            # setFormatter(ax, y0, y1, axis='y')


def setFormatter(ax, x0, x1, axis='x'):
    datarange = np.abs(x0 - x1)
    mdata = np.ceil(np.log10(datarange))
    # if mdata > 0 and mdata <= 4:
    #     majorFormatter = FormatStrFormatter('%d')
    # elif mdata > 4:
    #     majorFormatter = FormatStrFormatter('%e')
    # elif mdata <= 0 and mdata > -1:
    #     majorFormatter = FormatStrFormatter('%5.1f')
    # elif mdata < -1 and mdata > -3:
    #     majorFormatter = FormatStrFormatter('%6.3f')
    # else:
    #     majorFormatter = FormatStrFormatter('%e')
    # if axis == 'x':
    #     ax.xaxis.set_major_formatter(majorFormatter)
    # else:
    #     ax.yaxis.set_major_formatter(majorFormatter)


def update_font(axl, size=6, font=stdFont):
    pass
    # if type(axl) is not list:
    #     axl = [axl]
    # fontProperties = {'family': 'sans-serif', 'sans-serif': [font],
    #                   'weight': 'normal', 'size': size}
    # for ax in axl:
    #     for tick in ax.xaxis.get_major_ticks():
    #         tick.label1.set_family('sans-serif')
    #         tick.label1.set_fontname(stdFont)
    #         tick.label1.set_size(size)
    #     for tick in ax.yaxis.get_major_ticks():
    #         tick.label1.set_family('sans-serif')
    #         tick.label1.set_fontname(stdFont)
    #         tick.label1.set_size(size)
    #     ax.set_xticklabels(ax.get_xticks(), fontProperties)
    #     ax.set_yticklabels(ax.get_yticks(), fontProperties)
    #     ax.xaxis.set_smart_bounds(True)
    #     ax.yaxis.set_smart_bounds(True)
    #     ax.tick_params(axis='both', labelsize=9)


def lockPlot(axl, lims, ticks=None):
    """
    This routine forces the plot of invisible data to force the axes to take
    certain limits and to force the tick marks to appear.
    Call with the axis and lims = [x0, x1, y0, y1].
    """
    if type(axl) is not list:
        axl = [axl]
    plist = []
    for ax in axl:
        y = ax.getAxis('left')
        x = ax.getAxis('bottom')
        x.setRange(lims[0], lims[1])
        y.setRange(lims[2], lims[3])


def calbar(plotlist, calbar=None, axesoff=True, orient='left', unitNames=None):
    """
    Draw a calibration bar and label it up.
    The calibration bar is defined as: [x0, y0, xlen, ylen]

    Parameters
    ----------
    plotlist : list
        a plot item or a list of plot items for which a calbar will be applied
    calbar : list, optional
        a list with 4 elements, describing the calibration bar
        [xposition, yposition, xlength, ylength] in units of the data in
        the plot, defaults to None
    axesoff : boolean, optional
        Set true to turn off the standard axes, defaults to True
    orient : text, optional
        'left': put vertical part of the bar on the left
        'right': put the vertical part of the bar on the right
        defaults to 'left'
    unitNames : dict, optional
        a dictionary with the names of the units to append to the
        calibration bar lengths. Example: {'x': 'ms', 'y': 'nA'}
        defaults to None

    Returns
    -------
    Nothing
    """
    if type(plotlist) is not list:
        plotlist = [plotlist]
    for pl in plotlist:
        if axesoff is True:
            noaxes(pl)
        Vfmt = '%.0f'
        if calbar[2] < 1.0:
            Vfmt = '%.1f'
        Hfmt = '%.0f'
        if calbar[3] < 1.0:
            Hfmt = '%.1f'
        if unitNames is not None:
            Vfmt = Vfmt + ' ' + unitNames['x']
            Hfmt = Hfmt + ' ' + unitNames['y']
        Vtxt = pg.TextItem(Vfmt % calbar[2], anchor=(0.5, 0.5), color=pg.mkColor('k'))
        Htxt = pg.TextItem(Hfmt % calbar[3], anchor=(0.5, 0.5), color=pg.mkColor('k'))
        if calbar is not None:
            if orient == 'left':  # vertical part is on the left
                pl.plot([calbar[0], calbar[0], calbar[0] + calbar[2]],
                        [calbar[1] + calbar[3], calbar[1], calbar[1]],
                        pen=pg.mkPen('k'), linestyle='-', linewidth=1.5)
                ht = Htxt.setPos(calbar[0] + 0.05 * calbar[2], calbar[1] + 0.5 * calbar[3])
            elif orient == 'right':  # vertical part goes on the right
                pl.plot([calbar[0] + calbar[2], calbar[0] + calbar[2], calbar[0]],
                        [calbar[1] + calbar[3], calbar[1], calbar[1]],
                        pen=pg.mkPen('k'), linestyle='-', linewidth=1.5)
                ht = Htxt.setPos(calbar[0] + calbar[2] - 0.05 * calbar[2], calbar[1] + 0.5 * calbar[3])
            else:
                print("PlotHelpers.py: I did not understand orientation: %s" % (orient))
                print("plotting as if set to left...")
                pl.plot([calbar[0], calbar[0], calbar[0] + calbar[2]],
                        [calbar[1] + calbar[3], calbar[1], calbar[1]],
                        pen=pg.mkPen('k'), linestyle='-', linewidth=1.5)
                ht = Htxt.setPos(calbar[0] + 0.05 * calbar[2], calbar[1] + 0.5 * calbar[3])
            Htxt.setText(Hfmt % calbar[3])
            xc = float(calbar[0] + calbar[2] * 0.5)  # always centered, below the line
            yc = float(calbar[1] - 0.1 * calbar[3])
            vt = Vtxt.setPos(xc, yc)
            Vtxt.setText(Vfmt % calbar[2])
            pl.addItem(Htxt)
            pl.addItem(Vtxt)


def refline(axl, refline=None, color=[64, 64, 64], linestyle='--', linewidth=0.5, orient='horizontal'):
    """
    Draw a reference line at a particular level of the data on the y axis.

    Parameters
    ----------
    axl : list
        axis handle or list of axis handles
    refline : float, optional
        the position of the reference line, defaults to None
    color : list, optional
        the RGB color list for the line, in format [r,g,b], defaults to
        [64, 64, 64] (faint grey line)
    linestyle : str, optional
        defines the linestyle to be used: '--' for dashed, '.' for dotted,
        '-' for solid, '-.' for dash-dot, '-..' for dash-dot-dot, etc.
        defaults to '--' (dashed)
    linewidth : float, optional
        width of the line, defaults to 0.5
    """
    if type(axl) is not list:
        axl = [axl]
    if linestyle == '--':
        style = pg.QtCore.Qt.DashLine
    elif linestyle == '.':
        style = pg.QtCore.Qt.DotLine
    elif linestyle == '-':
        style = pg.QtCore.Qt.SolidLine
    elif linestyle == '-.':
        style = pg.QtCore.Qt.DashDotLine
    elif linestyle == '-..':
        style = pg.QtCore.Qt.DashDotDotLine
    else:
        style = pg.QtCore.Qt.SolidLine  # default is solid
    if orient == 'horizontal':
        for ax in axl:
            if refline is not None:
                x = ax.getAxis('bottom')
                xlims = x.range
                ax.plot(xlims, [refline, refline],
                        pen=pg.mkPen(color, width=linewidth, style=style))
    if orient == 'vertical':
        for ax in axl:
            if refline is not None:
                y = ax.getAxis('left')
                ylims = y.range
                ax.plot([refline, refline], [ylims[0] + 0.5, ylims[1] - 0.5],
                        pen=pg.mkPen(color, width=linewidth, style=style))


def tickStrings(values, scale=1, spacing=None, tickPlacesAdd=1):
    """Return the strings that should be placed next to ticks.

    This method is called when redrawing the axis and is a good method to
    override in subclasses.

    Parameters
    ----------
    values : array or list
        An array or list of tick values
    scale : float, optional
        a scaling factor (see below), defaults to 1
    spacing : float, optional
        spacing between ticks (this is required since, in some instances,
        there may be only one tick and thus no other way to determine the
        tick spacing). Defaults to None
    tickPlacesAdd : int, optional
        the number of decimal places to add to the ticks, default is 1

    Returns
    -------
    list : a list containing the tick strings

    The scale argument is used when the axis label is displaying units which
    may have an SI scaling prefix. When determining the text to display, use
    value*scale to correctly account for this prefix. For example, if the
    axis label's units are set to 'V', then a tick value of 0.001 might be
    accompanied by a scale value of 1000.
    This indicates that the label is displaying 'mV', and thus the tick
    should display 0.001 * 1000 = 1.

    Copied from pyqtgraph; we needed it here.
    """
    if spacing is None:
        spacing = np.mean(np.diff(values))
    places = max(0, np.ceil(-np.log10(spacing * scale))) + tickPlacesAdd
    strings = []
    for v in values:
        vs = v * scale
        if abs(vs) < .001 or abs(vs) >= 10000:
            vstr = "%g" % vs
        else:
            vstr = ("%%0.%df" % places) % vs
        strings.append(vstr)
    return strings


def crossAxes(axl, xyzero=[0., 0.], limits=[None, None, None, None], **kwds):
    """
    Make the plot(s) have crossed axes at the data points set by xyzero,
    and optionally set axes limits.

    Parameters
    ----------
    axl : pyqtgraph plot/axes instance or list
        the plot to modify
    xyzero : list
        A 2-element list for the placement of x=0 and y=0, defaults to [0., 0.]
    limits : list
        A 4-element list with the min and max limits of the axes,
        defaults to all None
    **kwds : keyword arguments to pass to make_crossedAxes
    """
    if type(axl) is not list:
        axl = [axl]
    for ax in axl:
        make_crossedAxes(ax, xyzero, limits, **kwds)


def make_crossedAxes(ax, xyzero=[0., 0.], limits=[None, None, None, None], ndec=3,
                     density=(1.0, 1.0), tickl=0.0125, insideMargin=0.05, pointSize=12,
                     tickPlacesAdd=(0, 0)):
    """
    Parameters
    ----------
    ax : pyqtgraph plot/axes instance
        the plot to modify
    xyzero : list
        A 2-element list for the placement of x=0 and y=0, defaults to [0., 0.]
    limits : list
        A 4-element list with the min and max limits of the axes,
        defaults to all None
    ndec : int
        Number of decimals (would be passed to talbotTicks if that was being called)
    density : tuple
        tick density (for talbotTicks), defaults to (1.0, 1.0)
    tickl : float
        Tick length, defaults to 0.0125
    insideMargin : float
        Inside margin space for plot, defaults to 0.05 (5%)
    pointSize : int
        point size for tick text, defaults to 12
    tickPlacesAdd : tuple
        number of decimal places to add in tickstrings for the ticks,
        pair for x and y axes, defaults to (0, 0)

    Returns
    -------
    Nothing
    """
    # get axis limits
    aleft = ax.getAxis('left')
    abottom = ax.getAxis('bottom')
    aleft.setPos(pg.Point(3., 0.))
    yRange = aleft.range
    xRange = abottom.range
    hl = pg.InfiniteLine(pos=xyzero[0], angle=90, pen=pg.mkPen('k'))
    ax.addItem(hl)
    vl = pg.InfiniteLine(pos=xyzero[1], angle=0, pen=pg.mkPen('k'))
    ax.addItem(vl)
    ax.hideAxis('bottom')
    ax.hideAxis('left')
    # now create substitute tick marks and labels, using Talbot et al algorithm
    xr = np.diff(xRange)[0]
    yr = np.diff(yRange)[0]
    xmin, xmax = (np.min(xRange) - xr * insideMargin, np.max(xRange) + xr * insideMargin)
    ymin, ymax = (np.min(yRange) - yr * insideMargin, np.max(yRange) + yr * insideMargin)
    xtick = ticks.Extended(density=density[0], figure=None, range=(xmin, xmax), axis='x')
    ytick = ticks.Extended(density=density[1], figure=None, range=(ymin, ymax), axis='y')
    xt = xtick()
    yt = ytick()
    ytk = yr * tickl
    xtk = xr * tickl
    y0 = xyzero[1]
    x0 = xyzero[0]
    tsx = tickStrings(xt, tickPlacesAdd=tickPlacesAdd[0])
    tsy = tickStrings(yt, tickPlacesAdd=tickPlacesAdd[1])
    for i, x in enumerate(xt):
        t = pg.PlotDataItem(x=x * np.ones(2), y=[y0 - ytk, y0 + ytk], pen=pg.mkPen('k'))
        ax.addItem(t)  # tick mark
        # put text in only if it does not overlap the opposite line
        if x == y0:
            continue
        txt = pg.TextItem(tsx[i], anchor=(0.5, 0), color=pg.mkColor('k'))
        txt.setFont(pg.QtGui.QFont('Arial', pointSize=pointSize))
        txt.setPos(pg.Point(x, y0 - ytk))
        ax.addItem(txt)
    for i, y in enumerate(yt):
        t = pg.PlotDataItem(x=np.array([x0 - xtk, x0 + xtk]), y=np.ones(2) * y, pen=pg.mkPen('k'))
        ax.addItem(t)
        if y == x0:
            continue
        txt = pg.TextItem(tsy[i], anchor=(1, 0.5), color=pg.mkColor('k'))
        txt.setFont(pg.QtGui.QFont('Arial', pointSize=pointSize))
        txt.setPos(pg.Point(x0 - xtk, y))
        ax.addItem(txt)


class polarPlot():
    """
    Create a polar plot, as a PlotItem for pyqtgraph.
    """
    def __init__(self, plot=None):
        """
        Instantiate a plot as a polar plot.

        Parameters
        ----------
        plot : pyqtgraph plotItem
            the plot that will be converted to a polar plot, defaults to None.
            If None, then a new PlotItem will be created, accessible as
            polarPlot.plotItem
        """
        if plot is None:
            self.plotItem = pg.PlotItem()  # create a plot item for the plot
        else:
            self.plotItem = plot
        self.plotItem.setAspectLocked()
        self.plotItem.hideAxis('bottom')
        self.plotItem.hideAxis('left')
        self.gridSet = False
        self.data = None
        self.rMax = None

    def setAxes(self, steps=4, rMax=None, makeGrid=True):
        """
        Make the polar plot axes.

        Parameters
        ----------
        steps : int, optional
            The number of radial steps for the grid, defaults to 4
        rMax : float, optional
            The maximum radius of the plot, defaults to None (rMax is then 1,
            or the maximum of the data if data have already been plotted)
        makeGrid : boolean, optional
            Whether the grid will actually be plotted or not, defaults to True
        """
        if makeGrid is False or self.gridSet:
            return
        if rMax is None:
            if self.data is None:
                rMax = 1.0
            else:
                rMax = np.max(self.data['y'])
        self.rMax = rMax
        # Add radial grid lines (theta markers)
        gridPen = pg.mkPen(width=0.55, color='k', style=pg.QtCore.Qt.DotLine)
        ringPen = pg.mkPen(width=0.75, color='k', style=pg.QtCore.Qt.SolidLine)
        for th in np.linspace(0., np.pi * 2, 8, endpoint=False):
            rx = np.cos(th) * rMax
            ry = np.sin(th) * rMax
            self.plotItem.plot(x=[0, rx], y=[0., ry], pen=gridPen)
            ang = th * 360. / (np.pi * 2)
            # anchor is odd: 0,0 is upper left corner, 1,1 is lower right corner
            if ang < 90.:
                x = 0.
                y = 0.5
            elif ang == 90.:
                x = 0.5
                y = 1
            elif ang < 180:
                x = 1.0
                y = 0.5
            elif ang == 180.:
                x = 1
                y = 0.5
            elif ang < 270:
                x = 1
                y = 0
            elif ang == 270.:
                x = 0.5
                y = 0
            elif ang < 360:
                x = 0
                y = 0
            ti = pg.TextItem("%d" % (int(ang)), color=pg.mkColor('k'), anchor=(x, y))
            self.plotItem.addItem(ti)
            ti.setPos(rx, ry)
        # add polar grid lines (r)
        for gr in np.linspace(rMax / steps, rMax, steps):
            circle = pg.QtGui.QGraphicsEllipseItem(-gr, -gr, gr * 2, gr * 2)
            if gr < rMax:
                circle.setPen(gridPen)
            else:
                circle.setPen(ringPen)
            self.plotItem.addItem(circle)
            ti = pg.TextItem("%d" % (int(gr)), color=pg.mkColor('k'), anchor=(1, 1))
            ti.setPos(gr, 0.)
            self.plotItem.addItem(ti)
        self.gridSet = True

    def plot(self, r, theta, vectors=False, arrowhead=True, normalize=False, sort=False, **kwds):
        """
        Put the data into a polar plot.

        Parameters
        ----------
        r : list or numpy array
            a list or array of radii
        theta : list or numpy array
            a list or array of angles (in radians) corresponding to the
            values in r
        vectors : boolean, optional
            vectors True means that the plot is composed of vectors to each
            point radiating from the origin, defaults to False
        arrowhead : boolean, optional
            arrowhead True plots arrowheads at the end of the vectors,
            defaults to True
        normalize : boolean, optional
            normalize forces the plot to be scaled to the max values in r,
            defaults to False
        sort : boolean, optional
            causes data r, theta to be sorted by theta, defaults to False
        **kwds are passed to the data plot call.
""" # sort r, theta by r rs = np.array(r) thetas = np.array(theta) if sort: indx = np.argsort(thetas) theta = thetas if not isinstance(indx, np.int64): for i, j in enumerate(indx): rs[i] = r[j] thetas[i] = theta[j] # Transform to cartesian and plot if normalize: rs = rs/np.max(rs) x = rs * np.cos(thetas) y = rs * np.sin(thetas) try: len(x) except: x = [x] y = [y] if vectors: # plot r,theta as lines from origin for i, xi in enumerate(x): # print x[i], y[i] if arrowhead: arrowAngle = -(thetas[i]*360/(2*np.pi)+180) # convert to degrees, and correct orientation arrow = pg.ArrowItem(angle=arrowAngle, tailLen=0, tailWidth=1.5, **kwds) arrow.setPos(x[i], y[i]) self.plotItem.addItem(arrow) self.plotItem.plot([0., x[i]], [0., y[i]], **kwds) else: self.plotItem.plot(x, y, **kwds) self.rMax = np.max(y) self.data = {'x': x, 'y': y} def hist(self, r, theta, binwidth=np.pi/6., normalize=False, density=False, mode='straight', **kwds): """ plot puts the data into a polar plot as a histogram of the number of observations within a wedge the plot will be converted to a polar graph Parameters ---------- r : list or numpy array a list or array of radii theta : list or numpy array a list or array of angles (in radians) corresponding to the values in r binwidth : bin width, in radians optional vectors True means that plot is composed of vectors to each point radiating from the origin, defaults to 30 degrees (np.pi/6) normalize : boolean, optional normalize forces the plot to be scaled to the max values in r, defaults to False density : boolean, optional plot a count histogram, or a density histogram weighted by r values, defaults to False mode : str, optional 'straight' selects straight line between bars. 'arc' makes the end of the bar an arc (truer representation), defaults to 'straight' **kwds are passed to the data plot call. 
        Returns
        -------
        tuple : (list of rHist, list of bins)
            The histogram that was plotted (use for statistical comparisons)
        """
        rs = np.array(r)
        thetas = np.array(theta)
        twopi = np.pi * 2.0
        for i, t in enumerate(thetas):  # restrict to positive half plane [0 ... 2*pi]
            while t < 0.0:
                t += twopi
            while t > twopi:
                t -= twopi
            thetas[i] = t
        bins = np.arange(0, np.pi * 2 + 1e-12, binwidth)
        # compute histogram
        (rhist, rbins) = np.histogram(thetas, bins=bins, weights=rs, density=density)
        # Transform to cartesian and plot
        if normalize:
            rhist = rhist / np.max(rhist)
        xo = rhist * np.cos(bins[:-1])  # get cartesian form
        xp = rhist * np.cos(bins[:-1] + binwidth)
        yo = rhist * np.sin(bins[:-1])
        yp = rhist * np.sin(bins[:-1] + binwidth)
        arcinc = np.pi / 100.  # arc increments
        for i in range(len(xp)):
            if mode == 'arc':
                self.plotItem.plot([xo[i], 0., xp[i]], [yo[i], 0., yp[i]], **kwds)  # "v" segment
                arcseg = np.arange(bins[i], bins[i + 1], arcinc)
                x = np.array(rhist[i] * np.cos(arcseg))
                y = np.array(rhist[i] * np.sin(arcseg))
                self.plotItem.plot(x, y, **kwds)
            else:
                self.plotItem.plot([0., xo[i], xp[i], 0.], [0., yo[i], yp[i], 0.], **kwds)
        self.data = {'x': xo, 'y': yo}
        self.rMax = np.max(yo)
        return (rhist, rbins)

    def circmean(self, alpha, axis=None):
        """
        Compute the circular mean of a set of angles along the axis.

        Parameters
        ----------
        alpha : numpy array
            the angles to compute the circular mean of
        axis : int
            The axis of alpha for the computation, defaults to None

        Returns
        -------
        float : the mean angle
        """
        mean_angle = np.arctan2(np.mean(np.sin(alpha), axis), np.mean(np.cos(alpha), axis))
        return mean_angle


def talbotTicks(axl, **kwds):
    """
    Adjust the tick marks using the Talbot et al. algorithm, on an existing plot.
""" if type(axl) is not list: axl = [axl] for ax in axl: do_talbotTicks(ax, **kwds) def do_talbotTicks(ax, ndec=3, density=(1.0, 1.0), insideMargin=0.05, pointSize=None, tickPlacesAdd=(0,0)): """ Change the axis ticks to use the talbot algorithm for ONE axis Paramerters control the ticks Parameters ---------- ax : pyqtgraph axis instance the axis to change the ticks on ndec : int Number of decimals (would be passed to talbotTicks if that was being called) density : tuple tick density (for talbotTicks), defaults to (1.0, 1.0) insideMargin : float Inside margin space for plot, defaults to 0.05 (5%) pointSize : int point size for tick text, defaults to 12 tickPlacesAdd : tuple number of decimal places to add in tickstrings for the ticks, pair for x and y axes, defaults to (0,0) """ # get axis limits aleft = ax.getAxis('left') abottom = ax.getAxis('bottom') yRange = aleft.range xRange = abottom.range # now create substitue tick marks and labels, using Talbot et al algorithm xr = np.diff(xRange)[0] yr = np.diff(yRange)[0] xmin, xmax = (np.min(xRange) - xr * insideMargin, np.max(xRange) + xr * insideMargin) ymin, ymax = (np.min(yRange) - yr * insideMargin, np.max(yRange) + yr * insideMargin) xtick = ticks.Extended(density=density[0], figure=None, range=(xmin, xmax), axis='x') ytick = ticks.Extended(density=density[1], figure=None, range=(ymin, ymax), axis='y') xt = xtick() yt = ytick() xts = tickStrings(xt, scale=1, spacing=None, tickPlacesAdd = tickPlacesAdd[0]) yts = tickStrings(yt, scale=1, spacing=None, tickPlacesAdd = tickPlacesAdd[1]) xtickl = [[(x, xts[i]) for i, x in enumerate(xt)] , []] # no minor ticks here ytickl = [[(y, yts[i]) for i, y in enumerate(yt)] , []] # no minor ticks here #ticks format: [ (majorTickValue1, majorTickString1), (majorTickValue2, majorTickString2), ... 
], aleft.setTicks(ytickl) abottom.setTicks(xtickl) # now set the point size (this may affect spacing from axis, and that would have to be adjusted - see the pyqtgraph google groups) if pointSize is not None: b = pg.QtGui.QFont() b.setPixelSize(pointSize) aleft.tickFont = b abottom.tickFont = b def violinPlotScatter(ax, data, symbolColor='k', symbolSize=4, symbol='o'): """ Plot data as violin plot with scatter and error bar Parameters ---------- ax : pyqtgraph plot instance is the axs to plot into data : dict dictionary containing {pos1: data1, pos2: data2}, where pos is the x position for the data in data. Each data set iis plotted as a separate column symcolor : string, optional color of the symbols, defaults to 'k' (black) symbolSize : int, optional Size of the symbols in the scatter plot, points, defaults to 4 symbol : string, optoinal The symbol to use, defaults to 'o' (circle) """ y = [] x = [] xb=np.arange(0,len(data.keys()), 1) ybm = [0]*len(data.keys()) # np.zeros(len(sdat.keys())) ybs = [0]*len(data.keys()) # np.zeros(len(sdat.keys())) for i, k in enumerate(data.keys()): yvals = np.array(data[k]) xvals = pg.pseudoScatter(yvals, spacing=0.4, bidir=True) * 0.2 ax.plot(x=xvals+i, y=yvals, pen=None, symbol=symbol, symbolSize=symbolSize, symbolBrush=pg.mkBrush(symbolColor)) y.append(yvals) x.append([i]*len(yvals)) ybm[i] = np.nanmean(yvals) ybs[i] = np.nanstd(yvals) mbar = pg.PlotDataItem(x=np.array([xb[i]-0.2, xb[i]+0.2]), y=np.array([ybm[i], ybm[i]]), pen={'color':'k', 'width':0.75}) ax.addItem(mbar) bar = pg.ErrorBarItem(x=xb, y=np.array(ybm), height=np.array(ybs), beam=0.2, pen={'color':'k', 'width':0.75}) violin_plot(ax, y, xb, bp=False) ax.addItem(bar) ticks = [[(v, k) for v, k in enumerate(data.keys())], []] ax.getAxis('bottom').setTicks(ticks) def violin_plot(ax, data, pos, dist=.0, bp=False): ''' create violin plots on an axis ''' if data is None or len(data) == 0: return # skip trying to do the plot dist = max(pos)-min(pos) w = 
min(0.15*max(dist,1.0),0.5) for i, d in enumerate(data): if d == [] or len(d) == 0: continue k = scipy.stats.gaussian_kde(d) #calculates the kernel density m = k.dataset.min() #lower bound of violin M = k.dataset.max() #upper bound of violin y = np.arange(m, M, (M-m)/100.) # support for violin v = k.evaluate(y) #violin profile (density curve) v = v / v.max() * w #scaling the violin to the available space c1 = pg.PlotDataItem(y=y, x=pos[i]+v, pen=pg.mkPen('k', width=0.5)) c2 = pg.PlotDataItem(y=y, x=pos[i]-v, pen=pg.mkPen('k', width=0.5)) #mean = k.dataset.mean() #vm = k.evaluate(mean) #vm = vm * w #ax.plot(x=np.array([pos[i]-vm[0], pos[i]+vm[0]]), y=np.array([mean, mean]), pen=pg.mkPen('k', width=1.0)) ax.addItem(c1) ax.addItem(c2) #ax.addItem(hbar) f = pg.FillBetweenItem(curve1=c1, curve2=c2, brush=pg.mkBrush((255, 255, 0, 96))) ax.addItem(f) if bp: pass # bpf = ax.boxplot(data, notch=0, positions=pos, vert=1) # pylab.setp(bpf['boxes'], color='black') # pylab.setp(bpf['whiskers'], color='black', linestyle='-') def labelAxes(plot, xtext, ytext, **kwargs): """ helper to label up the plot Parameters ----------- plot : plot item xtext : string text for x axis ytext : string text for y axis **kwargs : keywords additional arguments to pass to pyqtgraph setLabel """ plot.setLabel('bottom', xtext, **kwargs) plot.setLabel('left', ytext, **kwargs) def labelPanels(plot, label=None, **kwargs): r""" Helper to label up the plot Parameters ---------- plot : plot item label : plot panel label (for example, "A", "A1") \**kwargs : arguments for setPlotLabel """ if label is not None: setPlotLabel(plot, plotlabel="%s" % label, **kwargs) else: setPlotLabel(plot, plotlabel="") def labelTitles(plot, title=None, **kwargs): """ Set the title of a plotitem. Basic HTML formatting is allowed, along with "size", "bold", "italic", etc.. 
    If the title is not defined, then a blank label is used.
    A title is a text label that appears centered above the plot, in
    QGridLayout (position 0,2) of the plotitem.

    :param plot: The plot item to label
    :param title: The text string to use for the label
    :param kwargs: keywords to pass to the pg.LabelItem
    :return: None
    """
    if title is not None:
        plot.setTitle(title="<b><large>%s</large></b>" % title, visible=True, **kwargs)
    else:  # clear the plot title
        plot.setTitle(title=" ")


def setPlotLabel(plotitem, plotlabel='', **kwargs):
    """
    Set the plotlabel of a plotitem. Basic HTML formatting is allowed, along
    with "size", "bold", "italic", etc.
    If plotlabel is not defined, then a blank label is used.
    A plotlabel is a text label that appears in the upper left corner of the
    QGridLayout (position 0,0) of the plotitem.

    :param plotitem: The plot item to label
    :param plotlabel: The text string to use for the label
    :param kwargs: keywords to pass to the pg.LabelItem
    :return: None
    """
    plotitem.LabelItem = pg.LabelItem(plotlabel, **kwargs)
    plotitem.LabelItem.setMaximumHeight(30)
    # plotitem.layout.setRowFixedHeight(0, 30)
    try:
        plotitem.layout.addItem(plotitem.LabelItem, 0, 0)
        plotitem.LabelItem.setVisible(True)
    except AttributeError:
        pass  # not a valid thing to do with the plotitem...
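The `tickStrings` helper above derives the number of printed decimal places from the tick spacing. A minimal standalone sketch of that rule, using only the stdlib (no numpy or pyqtgraph); `tick_strings` is a hypothetical rename for illustration, not part of this module's API:

```python
import math

def tick_strings(values, scale=1, spacing=None, tick_places_add=1):
    # number of decimals comes from the tick spacing, as in tickStrings above
    if spacing is None:
        diffs = [b - a for a, b in zip(values, values[1:])]
        spacing = sum(diffs) / len(diffs)
    places = int(max(0, math.ceil(-math.log10(spacing * scale))) + tick_places_add)
    strings = []
    for v in values:
        vs = v * scale
        if abs(vs) < .001 or abs(vs) >= 10000:
            strings.append("%g" % vs)  # very small/large values: general format
        else:
            strings.append(("%%0.%df" % places) % vs)
    return strings
```

For ticks at 0, 0.5, 1.0 (spacing 0.5) this yields two decimal places; for ticks at 100, 200, 300 (spacing 100) it yields one, matching the module's behavior with the default `tickPlacesAdd=1`.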
class LayoutMaker():
    def __init__(self, win=None, cols=1, rows=1, letters=True, titles=False, labelEdges=True,
                 margins=4, spacing=4, ticks='default', items='plots'):
        self.sequential_letters = string.ascii_uppercase
        self.cols = cols
        self.rows = rows
        self.letters = letters
        self.titles = titles
        self.edges = labelEdges
        self.margins = margins
        self.spacing = spacing
        self.rcmap = [None] * cols * rows
        self.plots = None
        self.items = items
        self.win = win
        self.ticks = ticks
        self._makeLayout(letters=letters, titles=titles, margins=margins, spacing=spacing)
        # self.addLayout(win)

    # def addLayout(self, win=None):
    #     if win is not None:
    #         win.setLayout(self.gridLayout)

    def getCols(self):
        return self.cols

    def getRows(self):
        return self.rows

    def mapFromIndex(self, index):
        """
        For a given index, return the (row, col) tuple associated with the index.
        """
        return self.rcmap[index]

    def getPlot(self, index):
        """
        Return the plot item in the list corresponding to the index n.
        """
        if isinstance(index, tuple):
            r, c = index
        elif isinstance(index, int):
            r, c = self.rcmap[index]
        else:
            raise ValueError('pyqtgraphPlotHelpers, LayoutMaker plot: index must be int or tuple(r,c)')
        return self.plots[r][c]

    def plot(self, index, x=None, y=None, pen=None, **kwargs):
        if x is None or y is None or pen is None:
            return None
        p = self.getPlot(index).plot(x=x, y=y, pen=pen, **kwargs)
        if self.ticks == 'talbot':
            talbotTicks(self.getPlot(index))
        return p

    def _makeLayout(self, letters=True, titles=True, margins=4, spacing=4):
        """
        Create a multipanel plot.
        The pyqtgraph elements (widget, gridlayout, plots) are stored as
        class variables.
        The layout is always a rectangular grid with shape (cols, rows).
        If letters is true, then the plots are labeled "A, B, C..."
        Indices move horizontally first, then vertically.
        margins sets the margins around the outside of the plot.
        spacing sets the spacing between the elements of the grid.
        If a window was specified (self.win is not None) then the grid layout
        will derive from that window's central item; otherwise we just make a
        gridLayout that can be put into another container somewhere.
        """
        if self.win is None:
            self.app = pg.mkQApp()
            self.win = QtGui.QWidget()
        self.layout = QtGui.QGridLayout()
        self.win.setLayout(self.layout)
        # self.gridLayout = self.win.ci.layout  # the window's 'central item' is the main gridlayout.
        # else:
        #     self.gridLayout = pg.QtGui.QGridLayout()  # just create the grid layout to add to another item
        self.layout.setContentsMargins(margins, margins, margins, margins)
        self.layout.setSpacing(spacing)
        self.plots = [[0 for x in range(self.cols)] for x in range(self.rows)]
        self.gl = [[0 for x in range(self.cols)] for x in range(self.rows)]
        i = 0
        for r in range(self.rows):
            for c in range(self.cols):
                self.rcmap[i] = (r, c)
                if self.items == 'plots':
                    thisplot = pg.PlotWidget()
                    self.layout.addWidget(thisplot, r, c)
                    self.plots[r][c] = thisplot
                    if letters:
                        labelPanels(self.plots[r][c], label=self.sequential_letters[i],
                                    size='14pt', bold=True)
                    if titles:
                        labelTitles(self.plots[r][c], title=self.sequential_letters[i],
                                    size='14pt', bold=False)
                elif self.items == 'images':
                    imgview = pg.ImageView()
                    # imgview.ui.roiBtn.hide()
                    # imgview.ui.menuBtn.hide()
                    # imgview.ui.histogram.hide()
                    textlabel = pg.TextItem(f"t = 0", anchor=(0, 1.1))
                    self.layout.addWidget(imgview, r, c, 1, 1)
                    self.plots[r][c] = imgview
                    v = self.layout.itemAtPosition(r, c)  # imgview
                    # self.gridLayout.addItem(pg.ViewBox(), row=r, col=c)
                    self.gl[r][c] = v
                    # if letters:
                    #     labelPanels(self.plots[r][c], label=self.sequential_letters[i], size='14pt', bold=True)
                    if titles:
                        self.plots[r][c].getImageItem().getViewBox().setWindowTitle(self.sequential_letters[i], size='14pt')
                        # labelTitles(self.plots[r][c], title=self.sequential_letters[i], size='14pt', bold=False)
                i += 1
                if i > 25:
                    i = 0
        if self.items == 'plots':
            self.labelEdges('T(s)', 'Y', edgeOnly=self.edges)

    def labelEdges(self, xlabel='T(s)', ylabel='Y', edgeOnly=True, **kwargs):
        """
        Label the axes on the outer edges of the gridlayout, leaving the
        interior axes clean.
        """
        (lastrow, lastcol) = self.rcmap[-1]
        i = 0
        for (r, c) in self.rcmap:
            if c == 0:
                ylab = ylabel
            elif edgeOnly:
                ylab = ''
            else:
                ylab = ylabel
            if r == self.rows - 1:  # only the last row
                xlab = xlabel
            elif edgeOnly:  # but not other rows
                xlab = ''
            else:
                xlab = xlabel  # otherwise, label it
            labelAxes(self.plots[r][c], xlab, ylab, **kwargs)
            i += 1

    def axesEdges(self, edgeOnly=True):
        """
        Show text labels only on the axes on the outer edges of the
        gridlayout, leaving the interior axes clean.
        """
        (lastrow, lastcol) = self.rcmap[-1]
        i = 0
        for (r, c) in self.rcmap:
            xshow = True
            yshow = True
            if edgeOnly and c > 0:
                yshow = False
            if edgeOnly and r < self.rows - 1:  # x values only on the last row
                xshow = False
            ax = self.getPlot((r, c))
            leftaxis = ax.getAxis('left')
            bottomaxis = ax.getAxis('bottom')
            leftaxis.showValues = yshow
            bottomaxis.showValues = xshow
            i += 1

    def columnAutoScale(self, col, axis='left'):
        """
        Autoscale the plots in a column according to the max value in the column.
        Finds the outside range of the column data, then sets the scale of all
        plots in the column to that range.
        """
        atmax = None
        atmin = None
        for (r, c) in self.rcmap:
            if c != col:
                continue
            ax = self.getPlot((r, c))
            thisaxis = ax.getAxis(axis)
            amin, amax = thisaxis.range
            if atmax is None or amax > atmax:
                atmax = amax
            if atmin is None or amin < atmin:  # track the smallest minimum (the original compared with '>')
                atmin = amin
        self.columnSetScale(col, axis=axis, range=(atmin, atmax))
        return (atmin, atmax)

    def columnSetScale(self, col, axis='left', range=(0., 1.)):
        """
        Set the scale of all plots in a column.
        """
        for (r, c) in self.rcmap:
            if c != col:
                continue
            ax = self.getPlot((r, c))
            if axis == 'left':
                ax.setYRange(range[0], range[1])
            elif axis == 'bottom':
                ax.setXRange(range[0], range[1])
            if self.ticks == 'talbot':
                talbotTicks(ax)

    def title(self, index, title='', **kwargs):
        """
        Add a title to a specific plot (specified by index) in the layout.
        """
        labelTitles(self.getPlot(index), title=title, **kwargs)


def figure(title=None, background='w'):
    if background == 'w':
        pg.setConfigOption('background', 'w')  # set background to white
        pg.setConfigOption('foreground', 'k')
    pg.mkQApp()
    win = pg.GraphicsWindow(title=title)
    return win


def show():
    pg.QtGui.QApplication.instance().exec_()


def test_layout(win):
    """
    Test the various plot types and modifications provided by the helpers
    above, in the context of a layout with various kinds of plots.
    """
    layout = LayoutMaker(cols=4, rows=2, win=win, labelEdges=True, ticks='talbot')
    x = np.arange(0, 10., 0.1)
    y = np.sin(x * 3.)
    # make an interesting signal
    r = np.random.random(10)  # and a random signal
    theta = np.linspace(0, 2. * np.pi, 10, endpoint=False)  # r, theta for polar plots
    for n in range(4 * 2):
        if n not in [1, 2, 3, 4]:
            layout.plot(n, x, y)
        p = layout.getPlot(n)
        if n == 0:  # crossed axes plot
            crossAxes(p, xyzero=[5., 0.], density=(0.75, 1.5), tickPlacesAdd=(1, 0), pointSize=12)
            layout.title(n, 'Crossed Axes')
        if n in [1, 2, 3]:  # three different forms of polar plots
            if n == 1:
                po = polarPlot(p)
                po.setAxes(rMax=np.max(r))
                po.plot(r, theta, pen=pg.mkPen('r'))
                layout.title(n, 'Polar Path')
            if n == 2:
                po = polarPlot(p)
                po.plot(r, theta, vectors=True, pen=pg.mkPen('k', width=2.0))
                po.setAxes(rMax=np.max(r))
                po.plot([np.mean(r)], [po.circmean(theta)], vectors=True, pen=pg.mkPen('r', width=2.0))
                layout.title(n, 'Polar Arrows')
            if n == 3:
                po = polarPlot(p)
                po.hist(r, theta, binwidth=np.pi / 6., normalize=False, density=False, pen='r')
                po.hist(r, theta, binwidth=np.pi / 6., normalize=False, density=False, mode='arc', pen='b')
                po.setAxes(rMax=None)
                layout.title(n, 'Polar Histogram')
        if n == 4:  # violin plot with scatter plot data
            data = {2: [3, 5, 7, 9, 2, 4, 6, 8, 7, 2, 3, 1, 2.5],
                    3: [5, 6, 7, 9, 2, 8, 10, 9.5, 11]}
            violinPlotScatter(p, data, symbolColor='r')
            p.setYRange(0, 12)
            layout.title(n, 'Violin Plots with PseudoScatter')
        if n == 5:  # clean plot for physiology with baseline reference and a calibration bar
            calbar(p, calbar=[7.0, -1.5, 2.0, 0.5], axesoff=True, orient='left',
                   unitNames={'x': 'ms', 'y': 'nA'})
            refline(p, refline=0., color=[64, 64, 64], linestyle='--', linewidth=0.5)
            layout.title(n, 'Calbar and Refline')
    # talbotTicks(layout.getPlot(1))
    layout.columnAutoScale(col=3, axis='left')
    show()


def test_crossAxes(win):
    layout = LayoutMaker(cols=1, rows=1, win=win, labelEdges=True)
    x = np.arange(-1, 1., 0.01)
    y = np.sin(x * 10.)
    layout.plot(0, x, y)
    p = layout.getPlot(0)
    crossAxes(p, xyzero=[0., 0.], limits=[None, None, None, None], density=1.5,
              tickPlacesAdd=1, pointSize=12)
    show()


def test_polarPlot(win):
    layout = LayoutMaker(cols=1, rows=1, win=win, labelEdges=True)
    po = polarPlot(layout.getPlot((0, 0)))  # convert rectangular plot to polar
    po.setAxes(steps=4, rMax=100, makeGrid=True)  # build the axes
    nvecs = 50
    # th = np.linspace(-np.pi*2, np.pi*2 - np.pi*2/nvecs, nvecs)
    th = np.linspace(-np.pi * 4, 0, nvecs)
    r = np.linspace(10, 100, nvecs)
    po.plot(r, th, vectors=True, arrowhead=True, symbols='o', pen=pg.mkPen('k', width=1.5))  # plot with arrowheads
    nvecs = 8
    th = np.linspace(-np.pi * 2, np.pi * 2 - np.pi * 2 / nvecs, nvecs)
    r = np.linspace(10, 100, nvecs)
    # po.plot(r, th, vectors=True, arrowhead=False, symbols='o', pen=pg.mkPen('r', width=1.5))  # plot with just lines
    show()


if __name__ == '__main__':
    win = figure(title='testing')
    test_layout(win)
    # test_crossAxes(win)
    # test_polarPlot(win)
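The docstring above says layout indices "move horizontally first, then vertically", which is exactly the mapping built by the `self.rcmap[i] = (r, c)` loop. As a standalone sketch (not part of the module), the same index-to-cell mapping reduces to `divmod`:

```python
# Index -> (row, col) mapping for a grid filled left-to-right, top-to-bottom,
# matching the nested r/c loop that populates self.rcmap.
rows, cols = 2, 4
rcmap = {i: divmod(i, cols) for i in range(rows * cols)}
# index 0 is the top-left cell, index cols is the start of the second row
```

This can be handy when checking which panel a sequential letter label will land on.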
import numpy as np

from oolearning.splitters.StratifiedDataSplitter import StratifiedDataSplitter


class ClassificationStratifiedDataSplitter(StratifiedDataSplitter):
    """
    Splits the data into training/holdout sets while maintaining the
    categorical proportions of the target variable.
    """
    def labels_to_stratify(self, target_values: np.ndarray) -> np.ndarray:
        # For classification, use the target values as the stratification labels directly.
        return target_values
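The class above delegates the actual index bookkeeping to its `StratifiedDataSplitter` base class; the core idea of stratified splitting can be sketched standalone (the helper name `stratified_split_indices` is illustrative, not part of oolearning):

```python
import numpy as np

def stratified_split_indices(labels, holdout_fraction=0.25, seed=0):
    """Split indices into train/holdout sets, preserving per-class proportions."""
    rng = np.random.RandomState(seed)
    train_idx, holdout_idx = [], []
    for cls in np.unique(labels):
        cls_idx = np.where(labels == cls)[0]   # all indices belonging to this class
        rng.shuffle(cls_idx)
        n_holdout = int(round(len(cls_idx) * holdout_fraction))
        holdout_idx.extend(cls_idx[:n_holdout])
        train_idx.extend(cls_idx[n_holdout:])
    return np.sort(train_idx), np.sort(holdout_idx)

labels = np.array([0] * 80 + [1] * 20)  # 80/20 class imbalance
train, holdout = stratified_split_indices(labels, holdout_fraction=0.25)
# both splits keep the 80/20 class proportion: 60/15 in train, 20/5 in holdout
```

Splitting per class rather than globally is what guarantees a rare class is not accidentally absent from the holdout set.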
import warnings

import numpy as np

from thimbles.sqlaimports import *

__all__ = """
from_spectre_chebyshev
from_spectre_legendre
""".split()


# ########################################################################### #

def from_spectre_chebyshev(pixels, coefficients):
    # THIS VERSION TAKEN FROM SPECTRE
    # c20   p = (point - c(6))/c(7)
    # c     xpt = (2.*p-(c(9)+c(8)))/(c(9)-c(8))   !! is this right?
    # transforming coefficients
    # wvs = coeff[0] + xpts*coeff[1] + coeff[2]*(2.0*xpts**2.0-1.0) + coeff[3]*xpts*(4.0*xpts**2.0-3.0) + coeff[4]*(8.0*xpts**4.0-8.0*xpts**2.0+1.0)
    print("chebyshev with coefficients {}".format(coefficients))
    pixels = np.asarray(pixels)
    n = len(pixels)
    # map 1-based pixel indices 1..n onto the interval [-1, 1]
    xpts = (2.0 * pixels - float(n + 1)) / float(n - 1)
    return np.polynomial.chebyshev.chebval(xpts, coefficients)


def from_spectre_legendre(pixels, coefficients):
    print("generating wavelengths from legendre polynomial coefficients {}".format(coefficients))
    pixels = np.asarray(pixels)
    n = len(pixels)
    # map 1-based pixel indices 1..n onto the interval [-1, 1]
    xpts = (2.0 * pixels - float(n + 1)) / float(n - 1)
    # Evaluate in the Legendre basis; the original called np.polyval, which
    # treats the coefficients as an ordinary highest-order-first power series.
    return np.polynomial.legendre.legval(xpts, coefficients)
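A quick self-check of the pixel-to-domain mapping assumed above: 1-based pixel indices 1..n map linearly onto [-1, 1] before the polynomial is evaluated, and with only two coefficients both bases reduce to the line c0 + c1*x. The coefficient values here are made up for illustration:

```python
import numpy as np

n = 101
pixels = np.arange(1, n + 1)                       # 1-based pixel indices
xpts = (2.0 * pixels - float(n + 1)) / float(n - 1)
# endpoints land exactly on the canonical polynomial domain:
# xpts[0] == -1.0, xpts[-1] == 1.0, and the middle pixel maps to 0.0

# With coefficients [c0, c1], chebval gives c0*T0(x) + c1*T1(x) = c0 + c1*x,
# so a linear dispersion solution is easy to sanity-check.
coeffs = [5000.0, 250.0]   # hypothetical central wavelength and half-span
wvs = np.polynomial.chebyshev.chebval(xpts, coeffs)
# wvs runs linearly from 4750.0 at pixel 1 to 5250.0 at pixel n
```

Higher-order coefficients then add the usual Chebyshev (or Legendre) curvature terms on top of this linear trend.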
# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.

"""Compare average power spectra between real and generated images,
or between multiple generators."""

import os
import numpy as np
import torch
import torch.fft
import scipy.ndimage
import matplotlib.pyplot as plt
import click
import tqdm

import dnnlib
import legacy
from training import dataset

#----------------------------------------------------------------------------
# Setup an iterator for streaming images, in uint8 NCHW format, based on the
# respective command line options.

def stream_source_images(source, num, seed, device, data_loader_kwargs=None): # => num_images, image_size, image_iter
    ext = source.split('.')[-1].lower()
    if data_loader_kwargs is None:
        data_loader_kwargs = dict(pin_memory=True, num_workers=3, prefetch_factor=2)

    if ext == 'pkl':
        if num is None:
            raise click.ClickException('--num is required when --source points to network pickle')
        with dnnlib.util.open_url(source) as f:
            G = legacy.load_network_pkl(f)['G_ema'].to(device)
        def generate_image(seed):
            rnd = np.random.RandomState(seed)
            z = torch.from_numpy(rnd.randn(1, G.z_dim)).to(device)
            c = torch.zeros([1, G.c_dim], device=device)
            if G.c_dim > 0:
                c[:, rnd.randint(G.c_dim)] = 1
            return (G(z=z, c=c) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
        _ = generate_image(seed) # warm up
        image_iter = (generate_image(seed + idx) for idx in range(num))
        return num, G.img_resolution, image_iter

    elif ext == 'zip' or os.path.isdir(source):
        dataset_obj = dataset.ImageFolderDataset(path=source, max_size=num, random_seed=seed)
        if num is not None and num != len(dataset_obj):
            raise click.ClickException(f'--source contains fewer than {num} images')
        data_loader = torch.utils.data.DataLoader(dataset_obj, batch_size=1, **data_loader_kwargs)
        image_iter = (image.to(device) for image, _label in data_loader)
        return len(dataset_obj), dataset_obj.resolution, image_iter

    else:
        raise click.ClickException('--source must point to network pickle, dataset zip, or directory')

#----------------------------------------------------------------------------
# Load average power spectrum from the specified .npz file and construct
# the corresponding heatmap for visualization.

def construct_heatmap(npz_file, smooth):
    npz_data = np.load(npz_file)
    spectrum = npz_data['spectrum']
    image_size = npz_data['image_size']
    hmap = np.log10(spectrum) * 10 # dB
    hmap = np.fft.fftshift(hmap)
    hmap = np.concatenate([hmap, hmap[:1, :]], axis=0)
    hmap = np.concatenate([hmap, hmap[:, :1]], axis=1)
    if smooth > 0:
        sigma = spectrum.shape[0] / image_size * smooth
        hmap = scipy.ndimage.gaussian_filter(hmap, sigma=sigma, mode='nearest')
    return hmap, image_size

#----------------------------------------------------------------------------

@click.group()
def main():
    """Compare average power spectra between real and generated images,
    or between multiple generators.

    Example:

    \b
    # Calculate dataset mean and std, needed in subsequent steps.
    python avg_spectra.py stats --source=~/datasets/ffhq-1024x1024.zip

    \b
    # Calculate average spectrum for the training data.
    python avg_spectra.py calc --source=~/datasets/ffhq-1024x1024.zip \\
        --dest=tmp/training-data.npz --mean=112.684 --std=69.509

    \b
    # Calculate average spectrum for a pre-trained generator.
    python avg_spectra.py calc \\
        --source=https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-r-ffhq-1024x1024.pkl \\
        --dest=tmp/stylegan3-r.npz --mean=112.684 --std=69.509 --num=70000

    \b
    # Display results.
    python avg_spectra.py heatmap tmp/training-data.npz
    python avg_spectra.py heatmap tmp/stylegan3-r.npz
    python avg_spectra.py slices tmp/training-data.npz tmp/stylegan3-r.npz

    \b
    # Save as PNG.
    python avg_spectra.py heatmap tmp/training-data.npz --save=tmp/training-data.png --dpi=300
    python avg_spectra.py heatmap tmp/stylegan3-r.npz --save=tmp/stylegan3-r.png --dpi=300
    python avg_spectra.py slices tmp/training-data.npz tmp/stylegan3-r.npz --save=tmp/slices.png --dpi=300
    """

#----------------------------------------------------------------------------

@main.command()
@click.option('--source', help='Network pkl, dataset zip, or directory', metavar='[PKL|ZIP|DIR]', required=True)
@click.option('--num', help='Number of images to process  [default: all]', metavar='INT', type=click.IntRange(min=1))
@click.option('--seed', help='Random seed for selecting the images', metavar='INT', type=click.IntRange(min=0), default=0, show_default=True)
def stats(source, num, seed, device=torch.device('cuda')):
    """Calculate dataset mean and standard deviation needed by 'calc'."""
    torch.multiprocessing.set_start_method('spawn')
    num_images, _image_size, image_iter = stream_source_images(source=source, num=num, seed=seed, device=device)

    # Accumulate moments.
    moments = torch.zeros([3], dtype=torch.float64, device=device)
    for image in tqdm.tqdm(image_iter, total=num_images):
        image = image.to(torch.float64)
        moments += torch.stack([torch.ones_like(image).sum(), image.sum(), image.square().sum()])
    moments = moments / moments[0]

    # Compute mean and standard deviation.
    mean = moments[1]
    std = (moments[2] - moments[1].square()).sqrt()
    print(f'--mean={mean:g} --std={std:g}')

#----------------------------------------------------------------------------

@main.command()
@click.option('--source', help='Network pkl, dataset zip, or directory', metavar='[PKL|ZIP|DIR]', required=True)
@click.option('--dest', help='Where to store the result', metavar='NPZ', required=True)
@click.option('--mean', help='Dataset mean for whitening', metavar='FLOAT', type=float, required=True)
@click.option('--std', help='Dataset standard deviation for whitening', metavar='FLOAT', type=click.FloatRange(min=0), required=True)
@click.option('--num', help='Number of images to process  [default: all]', metavar='INT', type=click.IntRange(min=1))
@click.option('--seed', help='Random seed for selecting the images', metavar='INT', type=click.IntRange(min=0), default=0, show_default=True)
@click.option('--beta', help='Shape parameter for the Kaiser window', metavar='FLOAT', type=click.FloatRange(min=0), default=8, show_default=True)
@click.option('--interp', help='Frequency-domain interpolation factor', metavar='INT', type=click.IntRange(min=1), default=4, show_default=True)
def calc(source, dest, mean, std, num, seed, beta, interp, device=torch.device('cuda')):
    """Calculate average power spectrum and store it in .npz file."""
    torch.multiprocessing.set_start_method('spawn')
    num_images, image_size, image_iter = stream_source_images(source=source, num=num, seed=seed, device=device)
    spectrum_size = image_size * interp
    padding = spectrum_size - image_size

    # Setup window function.
    window = torch.kaiser_window(image_size, periodic=False, beta=beta, device=device)
    window *= window.square().sum().rsqrt()
    window = window.ger(window).unsqueeze(0).unsqueeze(1)

    # Accumulate power spectrum.
    spectrum = torch.zeros([spectrum_size, spectrum_size], dtype=torch.float64, device=device)
    for image in tqdm.tqdm(image_iter, total=num_images):
        image = (image.to(torch.float64) - mean) / std
        image = torch.nn.functional.pad(image * window, [0, padding, 0, padding])
        spectrum += torch.fft.fftn(image, dim=[2,3]).abs().square().mean(dim=[0,1])
    spectrum /= num_images

    # Save result.
    if os.path.dirname(dest):
        os.makedirs(os.path.dirname(dest), exist_ok=True)
    np.savez(dest, spectrum=spectrum.cpu().numpy(), image_size=image_size)

#----------------------------------------------------------------------------

@main.command()
@click.argument('npz-file', nargs=1)
@click.option('--save', help='Save the plot and exit', metavar='[PNG|PDF|...]')
@click.option('--dpi', help='Figure resolution', metavar='FLOAT', type=click.FloatRange(min=1), default=100, show_default=True)
@click.option('--smooth', help='Amount of smoothing', metavar='FLOAT', type=click.FloatRange(min=0), default=1.25, show_default=True)
def heatmap(npz_file, save, smooth, dpi):
    """Visualize 2D heatmap based on the given .npz file."""
    hmap, image_size = construct_heatmap(npz_file=npz_file, smooth=smooth)

    # Setup plot.
    plt.figure(figsize=[6, 4.8], dpi=dpi, tight_layout=True)
    freqs = np.linspace(-0.5, 0.5, num=hmap.shape[0], endpoint=True) * image_size
    ticks = np.linspace(freqs[0], freqs[-1], num=5, endpoint=True)
    levels = np.linspace(-40, 20, num=13, endpoint=True)

    # Draw heatmap.
    plt.xlim(ticks[0], ticks[-1])
    plt.ylim(ticks[0], ticks[-1])
    plt.xticks(ticks)
    plt.yticks(ticks)
    plt.contourf(freqs, freqs, hmap, levels=levels, extend='both', cmap='Blues')
    plt.gca().set_aspect('equal')
    plt.colorbar(ticks=levels)
    plt.contour(freqs, freqs, hmap, levels=levels, extend='both', linestyles='solid', linewidths=1, colors='midnightblue', alpha=0.2)

    # Display or save.
    if save is None:
        plt.show()
    else:
        if os.path.dirname(save):
            os.makedirs(os.path.dirname(save), exist_ok=True)
        plt.savefig(save)

#----------------------------------------------------------------------------

@main.command()
@click.argument('npz-files', nargs=-1, required=True)
@click.option('--save', help='Save the plot and exit', metavar='[PNG|PDF|...]')
@click.option('--dpi', help='Figure resolution', metavar='FLOAT', type=click.FloatRange(min=1), default=100, show_default=True)
@click.option('--smooth', help='Amount of smoothing', metavar='FLOAT', type=click.FloatRange(min=0), default=0, show_default=True)
def slices(npz_files, save, dpi, smooth):
    """Visualize 1D slices based on the given .npz files."""
    cases = [dnnlib.EasyDict(npz_file=npz_file) for npz_file in npz_files]
    for c in cases:
        c.hmap, c.image_size = construct_heatmap(npz_file=c.npz_file, smooth=smooth)
        c.label = os.path.splitext(os.path.basename(c.npz_file))[0]

    # Check consistency.
    image_size = cases[0].image_size
    hmap_size = cases[0].hmap.shape[0]
    if any(c.image_size != image_size or c.hmap.shape[0] != hmap_size for c in cases):
        raise click.ClickException('All .npz must have the same resolution')

    # Setup plot.
    plt.figure(figsize=[12, 4.6], dpi=dpi, tight_layout=True)
    hmap_center = hmap_size // 2
    hmap_range = np.arange(hmap_center, hmap_size)
    freqs0 = np.linspace(0, image_size / 2, num=(hmap_size // 2 + 1), endpoint=True)
    freqs45 = np.linspace(0, image_size / np.sqrt(2), num=(hmap_size // 2 + 1), endpoint=True)
    xticks0 = np.linspace(freqs0[0], freqs0[-1], num=9, endpoint=True)
    xticks45 = np.round(np.linspace(freqs45[0], freqs45[-1], num=9, endpoint=True))
    yticks = np.linspace(-50, 30, num=9, endpoint=True)

    # Draw 0 degree slice.
    plt.subplot(1, 2, 1)
    plt.title('0\u00b0 slice')
    plt.xlim(xticks0[0], xticks0[-1])
    plt.ylim(yticks[0], yticks[-1])
    plt.xticks(xticks0)
    plt.yticks(yticks)
    for c in cases:
        plt.plot(freqs0, c.hmap[hmap_center, hmap_range], label=c.label)
    plt.grid()
    plt.legend(loc='upper right')

    # Draw 45 degree slice.
    plt.subplot(1, 2, 2)
    plt.title('45\u00b0 slice')
    plt.xlim(xticks45[0], xticks45[-1])
    plt.ylim(yticks[0], yticks[-1])
    plt.xticks(xticks45)
    plt.yticks(yticks)
    for c in cases:
        plt.plot(freqs45, c.hmap[hmap_range, hmap_range], label=c.label)
    plt.grid()
    plt.legend(loc='upper right')

    # Display or save.
    if save is None:
        plt.show()
    else:
        if os.path.dirname(save):
            os.makedirs(os.path.dirname(save), exist_ok=True)
        plt.savefig(save)

#----------------------------------------------------------------------------

if __name__ == "__main__":
    main() # pylint: disable=no-value-for-parameter

#----------------------------------------------------------------------------
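The `calc` command above whitens each image, applies a separable Kaiser window, zero-pads by the interpolation factor, and accumulates |FFT|^2 over the dataset. A minimal NumPy sketch of that accumulation, for grayscale toy images (function and variable names here are illustrative, not from the NVIDIA code):

```python
import numpy as np

def avg_power_spectrum(images, mean, std, interp=4, beta=8.0):
    """Average |FFT|^2 of whitened, Kaiser-windowed, zero-padded HxW images."""
    size = images[0].shape[0]
    spec_size = size * interp                      # zero-padding interpolates the spectrum
    window = np.kaiser(size, beta)
    window /= np.sqrt(np.square(window).sum())     # normalize window energy
    window2d = np.outer(window, window)            # separable 2D window
    spectrum = np.zeros((spec_size, spec_size))
    for img in images:
        whitened = (img.astype(np.float64) - mean) / std
        padded = np.zeros((spec_size, spec_size))
        padded[:size, :size] = whitened * window2d
        spectrum += np.abs(np.fft.fft2(padded)) ** 2
    return spectrum / len(images)

rng = np.random.RandomState(0)
imgs = [rng.randint(0, 256, (16, 16)) for _ in range(4)]
spec = avg_power_spectrum(imgs, mean=127.5, std=74.0)
# spec has shape (64, 64); for white-noise inputs it is roughly flat
```

The real script does the same per-channel on the GPU and stores the result with `np.savez` for the `heatmap`/`slices` commands.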
from __future__ import annotations

import logging
from textwrap import dedent
from typing import Union
import warnings

import numpy as np
import param
import xarray as xr

from GSForge._singledispatchmethod import singledispatchmethod
from ._AnnotatedGEM import AnnotatedGEM
from ._GeneSetCollection import GeneSetCollection
from ..utils import transient_log_handler

logger = logging.getLogger("GSForge")


# TODO: Add .keys() and other dict-like functionality.
class Interface(param.Parameterized):
    """
    The Interface provides common API access for interacting with the
    ``AnnotatedGEM`` and ``GeneSetCollection`` objects.
    """

    gem = param.ClassSelector(class_=AnnotatedGEM, doc="""\
    An ``AnnotatedGEM`` object.""", default=None, precedence=-1.0)

    gene_set_collection = param.ClassSelector(class_=GeneSetCollection, doc=dedent("""\
    A ``GeneSetCollection`` object."""), default=None, precedence=-1.0)

    selected_gene_sets = param.ListSelector(default=[None], doc=dedent("""\
    A list of keys from the provided GeneSetCollection (stored in gene_set_collection)
    that are to be used for selecting sets of genes from the count matrix."""))

    selected_genes = param.Parameter(default=None, doc=dedent("""\
    A list of genes to use in indexing from the count matrix. This parameter takes
    priority over all other gene selecting methods. That means that selected
    GeneSets (or combinations thereof) will have no effect."""), precedence=-1.0)

    gene_set_mode = param.ObjectSelector(
        default="union",
        objects=["complete", "union", "intersection"],
        doc=dedent("""\
        Controls how any selected gene sets are returned by the interface.

        **complete**
            Returns the entire gene set of the ``AnnotatedGEM``.
        **union**
            Returns the union of the selected gene sets support.
        **intersection**
            Returns the intersection of the selected gene sets support.
        """))

    sample_subset = param.Parameter(default=None, precedence=-1.0, doc=dedent("""\
    A list of samples to use in a given operation.
    These can be supplied directly as a list of genes, or can be drawn from a
    given GeneSet."""))

    count_variable = param.ObjectSelector(default=None, precedence=1.0,
                                          doc="The name of the count matrix used.",
                                          objects=[None], check_on_set=False)

    annotation_variables = param.List(doc=dedent("""\
    The name of the active annotation variable(s). These are the annotation columns
    that will control the subset returned by ``y_annotation_data``."""),
                                      precedence=-1.0, default=[None])

    count_mask = param.ObjectSelector(doc=dedent("""\
    The type of mask to use for the count matrix.

    **complete**
        Returns the entire count matrix as numbers.
    **masked**
        Returns the entire count matrix with zero or missing values as NaN.
    **dropped**
        Returns the count matrix without genes that have zero or missing values.
    """), default='complete', objects=["complete", "masked", "dropped"], precedence=1.0)

    annotation_mask = param.ObjectSelector(doc=dedent("""\
    The type of mask to use for the target array.

    **complete**
        Returns the entire target array.
    **dropped**
        Returns the target array without samples that have zero or missing values.
    """), default='complete', objects=["complete", "dropped"], precedence=-1.0)

    count_transform = param.Callable(default=None, precedence=-1.0, doc=dedent("""\
    A transform that will be run on the `x_data` that is supplied by this Interface.
    The transform runs on the subset of the matrix that has been selected."""))

    @singledispatchmethod
    def _interface_dispatch(*args, **params):
        raise TypeError(f"Source of type: {type(args[0])} not supported.")

    def __init__(self, *args, **params):
        # If the user passes a string, place it as a single item within a list.
        if isinstance(params.get("annotation_variables"), str):
            params["annotation_variables"] = [params.get("annotation_variables")]

        if isinstance(params.get("selected_gene_sets"), str):
            params["selected_gene_sets"] = [params.get("selected_gene_sets")]

        if args:
            params = self._interface_dispatch(*args, **params)

        super().__init__(**params)

    @_interface_dispatch.register(AnnotatedGEM)
    @staticmethod
    def _parse_annotated_gem(annotated_gem: AnnotatedGEM, *_args, **params) -> dict:
        """
        Parse arguments for creation of a new `Interface` instance from an `AnnotatedGEM`.

        Parameters
        ----------
        annotated_gem : AnnotatedGEM
            A `GSForge.AnnotatedGEM` object.

        _args :
            Not used.

        params :
            Parameters to initialize this `Interface` with.

        Returns
        -------
        params : dict
            A parsed parameter dictionary.
        """
        params = {"gem": annotated_gem,
                  "count_variable": annotated_gem.count_array_name,
                  **params}
        return params

    @_interface_dispatch.register(GeneSetCollection)
    @staticmethod
    def _parse_gene_set_collection(gene_set_collection: GeneSetCollection, *_args, **params) -> dict:
        """
        Parse arguments for creation of a new `Interface` instance from a `GeneSetCollection`.

        Parameters
        ----------
        gene_set_collection : GeneSetCollection
            A `GSForge.GeneSetCollection` object.

        _args :
            Not used.

        params :
            Parameters to initialize this `Interface` with.

        Returns
        -------
        params : dict
            A parsed parameter dictionary.
""" if gene_set_collection.gem is not None: params = {"gem": gene_set_collection.gem, "count_variable": gene_set_collection.gem.count_array_name, **params} params = {"gene_set_collection": gene_set_collection, **params} return params @property def active_count_variable(self) -> str: """Returns the name of the currently active count matrix.""" if self.count_variable is not None: count_variable = self.count_variable else: count_variable = self.gem.count_array_name return count_variable @property def gene_index_name(self) -> str: """Returns the name of the gene index.""" return self.gem.gene_index_name @property def sample_index_name(self) -> str: """Returns the name of the sample index.""" return self.gem.sample_index_name def get_sample_index(self) -> np.ndarray: """ Get the currently selected sample index as a numpy array. Returns ------- np.ndarray An array of the currently selected samples. """ logger.info(f'Determining sample index.') if self.sample_subset is not None: # We need to load the data if a user-list is supplied to prevent some straneg issues # with nested numpy arrays being returned. 
            self.gem.data.load()
            subset = self.gem.data.sel({self.gem.sample_index_name: self.sample_subset})
        else:
            # (this log line originally sat in the if-branch, where it fired
            # whenever a subset *was* selected)
            logger.info('No sample subset selected.')
            subset = self.gem.data

        if self.annotation_mask == "complete":
            pass
        elif self.annotation_mask == "dropped":
            if self.annotation_variables != [None]:
                subset = subset[self.annotation_variables].dropna(dim=self.gem.sample_index_name)
            subset = subset.dropna(dim=self.gem.sample_index_name)
            logger.info(
                f'Dropped samples with missing labels, '
                f'{subset[self.gem.sample_index_name].shape[0]} samples remain.')

        selected_samples = subset[self.gem.sample_index_name].copy(deep=True).values
        return selected_samples

    @property
    def get_selection_indices(self) -> dict:
        """Returns the currently selected indexes as a dictionary."""
        return {self.gem.gene_index_name: self.get_gene_index(),
                self.gem.sample_index_name: self.get_sample_index()}

    @property
    def x_count_data(self) -> Union[xr.DataArray, None]:
        """
        Returns the currently selected 'x_data'. Usually this will be a subset
        of the active count array.

        Note: In constructing the gene index, the count data is constructed first
        in order to infer coordinate selection based on masking.

        Returns
        -------
        xarray.Dataset
            The selection of the currently active count data.
        """
        gene_set_combinations = {
            "union": lambda sel_gs: self.gene_set_collection.union(sel_gs),
            "intersection": lambda sel_gs: self.gene_set_collection.intersection(sel_gs),
            "joint_difference": lambda sel_gs: self.gene_set_collection.joint_difference(sel_gs),
            "complete": lambda sel_gs: self.gem.gene_index,
        }

        mask_modes = {
            "complete": lambda c: c.fillna(0),
            "masked": lambda c: c.where(c > 0.0),
            "dropped": lambda c: c.where(c > 0.0).dropna(dim=self.gem.gene_index_name),
        }

        # Ensure the correct count array is selected.
        count_variable = self.count_variable if self.count_variable is not None else self.gem.count_array_name
        logger.info(f'Preparing count data from the variable {count_variable}.')

        if self.selected_gene_sets == ['all']:
            self.param.set_param(selected_gene_sets=list(self.gene_set_collection.gene_sets.keys()))

        if self.selected_genes is not None:
            support = self.selected_genes
            logger.info(f'Gene selection of: {support.shape[0]} genes provided.')
        elif self.selected_gene_sets == [None]:
            support = self.gem.gene_index
        # Otherwise, some combination of GeneSet supports should be used.
        else:
            support = gene_set_combinations[self.gene_set_mode](self.selected_gene_sets)
            logger.info(
                f'Selected {len(self.selected_gene_sets)} GeneSets, '
                f'using mode: {self.gene_set_mode} for a support of size: {support.shape[0]}.')

        # Now the counts can be selected based on the gene support, and the selected samples.
        sample_support = self.get_sample_index()

        # Check the overlap of the selected genes with those in the GEM index.
        # Give the user a warning if any unavailable genes are requested, but still
        # return the available genes.
        avail_support = np.intersect1d(self.gem.gene_index, support)
        if len(avail_support) < len(support):
            diff = len(support) - len(avail_support)
            warnings.warn(
                f'{diff} Unavailable genes of the {len(support)} requested. '
                f'Using the {len(avail_support)} available.', UserWarning)

        counts = self.gem.data.sel({self.gem.gene_index_name: avail_support,
                                    self.gem.sample_index_name: sample_support})[count_variable]
        logger.info(f'Selected count array of shape: {counts.shape}')

        logger.info(f'Preparing count data using mask mode {self.count_mask}.')
        counts = mask_modes[self.count_mask](counts)

        # Optional transform.
        if self.count_transform is not None:
            logger.info('Applying given transform to counts...')
            counts = self.count_transform(counts.copy(deep=True))

        return counts

    def get_gene_index(self) -> np.array:
        """
        Get the currently selected gene index as a numpy array.
        Returns
        -------
        np.ndarray
            An array of the currently selected genes.
        """
        logger.info('Preparing the gene index; this requires determining x_data.')
        return self.x_count_data[self.gene_index_name].values.copy()

    @property
    def y_annotation_data(self) -> Union[xr.Dataset, xr.DataArray, None]:
        """
        Returns the currently selected 'y_data', or None, based on the
        `annotation_variables` parameter.

        Returns
        -------
        An ``xarray.Dataset`` of the currently selected y_data.
        """
        if (self.annotation_variables is None) or (self.annotation_variables == [None]):
            logger.info('No annotations selected.')
            return None

        logger.info(f'The following annotations were selected: {self.annotation_variables}.')

        sample_index = self.get_sample_index()

        # If only one label has been selected, return it as an xarray.DataArray.
        if len(self.annotation_variables) == 1:
            return self.gem.data[self.annotation_variables].sel(
                {self.gem.sample_index_name: sample_index})[self.annotation_variables[0]].copy(deep=True)

        return self.gem.data[self.annotation_variables].sel(
            {self.gem.sample_index_name: sample_index}).copy(deep=True)

    def get_gem_data(self, single_object=False, output_type='xarray', **params):
        """
        Returns count [and annotation] data based on the current parameters.
        Users should call ``gsf.get_gem_data``.
        """
        if params:
            logger.info(f'params {params}')
            self.param.set_param(**params)

        def xarray_single(self_):
            logger.info('Returning data as a single ``xarray.DataArray``.')
            if self_.y_annotation_data is not None:
                data = xr.merge([self_.x_count_data, self_.y_annotation_data])
            else:
                data = self_.x_count_data
            return data

        def xarray_tuple(self_):
            logger.info('Returning counts as an xarray.DataArray and annotations as an xarray.Dataset.')
            return self_.x_count_data, self_.y_annotation_data

        def pandas_single(self_):
            logger.info('Returning counts and annotations as a single pandas.DataFrame.')
            if self_.y_annotation_data is not None:
                data = xr.merge([self_.x_count_data, self_.y_annotation_data])
            else:
                data = self_.x_count_data
            return data.to_dataframe()

        def pandas_tuple(self_):
            # It may be faster to call .values and create a new dataframe instead of unstacking.
            logger.info('Returning a tuple of counts and annotations, each as a pandas.DataFrame.')
            if self_.y_annotation_data is not None:
                return self_.x_count_data.to_dataframe().unstack().droplevel(0, axis=1), \
                       self_.y_annotation_data.to_dataframe()
            else:
                return self_.x_count_data.to_dataframe().unstack().droplevel(0, axis=1), \
                       self_.y_annotation_data

        def numpy_single(self_):
            # np.dstack expects a sequence of arrays; the original passed two positional arguments.
            return np.dstack((self_.x_count_data.values, self_.y_annotation_data.values))

        def numpy_tuple(self_):
            return self_.x_count_data.values, self_.y_annotation_data.values

        modes = {
            ('xarray', True): xarray_single,
            ('xarray', False): xarray_tuple,
            ('pandas', True): pandas_single,
            ('pandas', False): pandas_tuple,
            ('numpy', True): numpy_single,
            ('numpy', False): numpy_tuple,
        }
        key = (output_type, single_object)

        # TODO: Clarify error message.
        if key not in modes.keys():
            raise ValueError(f'key given: {key} is not one of the available '
                             f'types: {list(modes.keys())}')

        return modes[key](self)


class CallableInterface(Interface, param.ParameterizedFunction):

    @transient_log_handler
    def __new__(cls, *args, **params):
        logger.debug('Creating a new GSForge.CallableInterface instance.')
        if args:
            params = Interface._interface_dispatch(*args, **params)

        # TODO: Convert to a helper function or consider the implementation
        #       in operations.core (commented out).
        if isinstance(params.get("annotation_variables"), str):
            params["annotation_variables"] = [params.get("annotation_variables")]

        if isinstance(params.get("selected_gene_sets"), str):
            params["selected_gene_sets"] = [params.get("selected_gene_sets")]

        inst = cls.instance(**params)  # See the param code for more on this `instance` function.
        return inst.__call__()

    def __call__(self):
        raise NotImplementedError
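The `gene_set_mode` parameter controls how the supports of the selected GeneSets are combined before indexing the count matrix. A minimal sketch of the 'union' and 'intersection' combinations on plain arrays of gene names (the set names and genes below are made up for illustration):

```python
import numpy as np

gene_sets = {
    "drought_response": np.array(["g1", "g2", "g3"]),
    "heat_response": np.array(["g2", "g3", "g4"]),
}

# union: every gene supported by at least one selected set
union = np.unique(np.concatenate(list(gene_sets.values())))

# intersection: only genes supported by every selected set
intersection = np.array(sorted(set.intersection(*[set(g) for g in gene_sets.values()])))
# union -> ['g1' 'g2' 'g3' 'g4'], intersection -> ['g2' 'g3']
```

The resulting support array plays the role of `support` in `x_count_data`, which is then intersected with the GEM's own gene index before selection.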
# -*- coding: utf-8 -*-
"""FinalProject_ocr

Automatically generated by Colaboratory.

Original file is located at
    https://colab.research.google.com/drive/1XBGkT-BmxEB1u9TxmfzaZelc33AByeke
"""

!sudo apt install tesseract-ocr
!pip install pytesseract

import pytesseract
import PIL
import pandas
import numpy as np
import cv2
from google.colab.patches import cv2_imshow
import regex as re
import glob
import matplotlib.pyplot as plt

# Create a new directory and set it as the place to look for images we want to analyze
!mkdir /images
image_dir = "/images"

# Create a list of all the image paths that are found in our image directory
images = glob.glob(image_dir + '/*.*')

# This function will use regex to search for specific criteria and mask the text if found
def searchText(data, x, y, w, h):
    # Search for phone numbers
    if re.search(r"(\d{3}[-\.\s]??\d{3}[-\.\s]??\d{4}|\(\d{3}\)\s*\d{3}[-\.\s]??\d{4}|\d{3}[-\.\s]??\d{4})", data):
        cv2.rectangle(image, (x, y), (x + w, y + h), (128, 128, 128), -1)
        cv2.putText(image, "Phone Number Hidden", (x, y), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 204), 2)
    # Search for email addresses
    if re.search(r"^(\w|\.|\_|\-)+[@](\w|\_|\-|\.)+[.]\w{2,3}", data):
        cv2.rectangle(image, (x, y), (x + w, y + h), (128, 128, 128), -1)
        cv2.putText(image, "Email Address Hidden", (x, y), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 204), 2)

# Loop through all the individual images and perform multiple functions
for file in images:
    # For every image we are going to store 3 copies so that each copy can be
    # recalled for a specific use.
#The 'image' copy will be the primary version that we will be working with original = cv2.imread(file) image = cv2.imread(file) image2 = cv2.imread(file) #Convert the 'image' file to grayscale, for improved processing gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) #Apply thresholding to the image in order to highlight pixels that meet criteria thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1] #Process the image to accentuate features/objects and to find the contours/edges of the features kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)) inverted_thresh = 255 - thresh dilate = cv2.dilate(inverted_thresh, kernel, iterations=4) cnts = cv2.findContours(dilate, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) #Use PyTesseract's 'image to data' function to extract the text found and store it in a dataframe text = pytesseract.image_to_data(image, output_type='data.frame') #Remove any objects where the confidence is -1 text = text[text.conf != -1] #Group features (i.e. 
text) that have the same block number (located near each other) lines = text.groupby('block_num')['text'].apply(list) conf = text.groupby(['block_num'])['conf'].mean() #Set the contours from the output of the findContours function cnts = cnts[0] if len(cnts) == 2 else cnts[1] #Loop through contour coordinates for c in cnts: x,y, w, h = cv2.boundingRect(c) #Filter selection of image based on contour coordinates ROI = thresh[y:y + h, x:x + w] #Extract text from selected area of image data = pytesseract.image_to_string(ROI, lang='eng', config='--psm 6').lower() #Place a rectangle around the coordinates and save the output as 'image2' cv2.rectangle(image2, (x, y), (x + w, y + h), (0, 255, 0), 0) #Call the searchText function to determine if the current data value contains information we wish to hide searchText(data,x,y,w,h) #For each image analyzed create a 3 image output #Original image, image with text identified, and then the image with data masked list_img = [original, image2, image] imgs_comb = np.hstack(list_img) cv2_imshow(imgs_comb)
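The two patterns used by `searchText` can be exercised without OpenCV or Tesseract. A standalone sketch (`classify` and the slightly simplified patterns are ours; the original anchors its email regex with `^`, which misses addresses that do not start the text block):

```python
import re

# Phone pattern mirrors the three alternatives used in searchText above.
PHONE_RE = re.compile(
    r"(\d{3}[-.\s]?\d{3}[-.\s]?\d{4}"   # 555-123-4567 style
    r"|\(\d{3}\)\s*\d{3}[-.\s]?\d{4}"   # (555) 123-4567 style
    r"|\d{3}[-.\s]?\d{4})"              # 123-4567 style
)
# Unanchored email pattern, so a match anywhere in the block counts.
EMAIL_RE = re.compile(r"[\w.\-]+@[\w.\-]+\.\w{2,}")

def classify(text):
    """Return which sensitive categories appear in a text block."""
    hits = []
    if PHONE_RE.search(text):
        hits.append("phone")
    if EMAIL_RE.search(text):
        hits.append("email")
    return hits
```

In the pipeline above, each hit would then trigger the grey-rectangle masking instead of just being reported.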
import os
import cv2
import matplotlib.pyplot as plt
import numpy as np
from matplotlib import interactive
import scipy.sparse  # the submodule must be imported explicitly for scipy.sparse.csr_matrix


def rgb_min_image(image):
    # extracts the per-pixel min of the RGB values and outputs a grayscale image
    rgb_image = np.amin(image, axis=2)
    return rgb_image


def min_filter(image):
    # performs the min filter on a 15 by 15 window, per channel
    i_image = image.copy()  # copy once, outside the loop, so all three channels are kept
    for k in range(3):
        temp_image = image[:, :, k].copy()
        [row, col] = temp_image.shape
        # pad by 7 on each side so every pixel has a full, centered 15x15 window
        temp_image = cv2.copyMakeBorder(temp_image, 7, 7, 7, 7, cv2.BORDER_REFLECT)
        for i in range(row):
            for j in range(col):
                i_image[i, j, k] = (temp_image[i:i + 15, j:j + 15]).min()
    return i_image


def dark_channel(image):
    # outputs the dark channel of the image
    new_image = image.copy()
    min_image = min_filter(new_image)
    dark_prior = rgb_min_image(min_image)
    return dark_prior


def transmition_map(image, A, w):
    # finds the transmission map t = 1 - w * dark_channel(I / A)
    image_new = np.divide(image, A).astype(float)
    new_dark = dark_channel(image_new)
    transmition = 1 - w * new_dark
    return transmition


def A_estimator(image, dark_prior):
    # Uses the brightest 0.1% of the dark prior to find a value for A
    image_copy = image.copy()
    [row, col, dem] = image_copy.shape
    dark_copy = dark_prior.copy()
    num = np.round(row * col * 0.001).astype(int)
    # take the flat *indices* of the `num` largest dark-channel entries
    # (sorting the values themselves and unraveling them would be wrong)
    top_idx = np.argsort(dark_copy.reshape(-1))[::-1][:num]
    ind = np.unravel_index(top_idx[0], dark_copy.shape)
    max_val = image_copy[ind[0], ind[1], :].copy()
    for element in top_idx:
        ind = np.unravel_index(element, dark_copy.shape)
        if sum(max_val[:]) < sum(image_copy[ind[0], ind[1], :]):
            max_val[:] = image_copy[ind[0], ind[1], :]
    A = image_copy
    A[:, :, :] = max_val[:]
    return A


def Radience_cal(image, A, Transmission_map, t_not):
    # Uses the transmission map to remove the haze from the image.
    image_copy = image.copy()
    A_copy = A.copy()
    Transmission_map_copy = (Transmission_map.copy()).astype(float)
    # clamp the transmission from below at t_not to avoid amplifying noise
    divisor = np.maximum(Transmission_map_copy, t_not)
    radience = (image.copy()).astype(float)
    for i in range(3):
        # scene radiance J = (I - A) / max(t, t_not) + A, per channel
        radience[:, :, i] = np.divide((image_copy[:, :, i]).astype(float) - A[0, 0, i], divisor) + A[0, 0, i]
    # radience = 255*(radience/np.max(radience))
    radience[radience > 255] = 255
    radience[radience < 0] = 0
    return radience.astype('uint8')


def L_calculator(image, Transmission_map):
    # builds the matting Laplacian used to fine tune the transmission map
    epsilon = 10**(-8)  # was misspelled `epsalon`, which left `epsilon` undefined below
    r = 1  # window radius; the loops below assume a 3x3 window
    h, w = image.shape[:2]
    window_area = (2 * r + 1)**2
    n_vals = (w - 2 * r) * (h - 2 * r) * window_area**2
    k = 0
    # data for the matting Laplacian in coordinate form
    i = np.empty(n_vals, dtype=np.int32)
    j = np.empty(n_vals, dtype=np.int32)
    v = np.empty(n_vals, dtype=np.float64)
    # for each pixel of the image
    for y in range(r, h - r):
        for x in range(r, w - r):
            # gather neighbors of the current pixel in a 3x3 window
            n = image[y - r:y + r + 1, x - r:x + r + 1]
            u = np.zeros(3)
            for p in range(3):
                u[p] = n[:, :, p].mean()
            c = n - u
            # calculate the covariance matrix over the color channels
            cov = np.zeros((3, 3))
            for p in range(3):
                for q in range(3):
                    cov[p, q] = np.mean(c[:, :, p] * c[:, :, q])
            # calculate the inverse covariance of the window
            inv_cov = np.linalg.inv(cov + epsilon / window_area * np.eye(3))
            # for each pair ((xi, yi), (xj, yj)) in a 3x3 window
            for dyi in range(2 * r + 1):
                for dxi in range(2 * r + 1):
                    for dyj in range(2 * r + 1):
                        for dxj in range(2 * r + 1):
                            i[k] = (x + dxi - r) + (y + dyi - r) * w
                            j[k] = (x + dxj - r) + (y + dyj - r) * w
                            temp = c[dyi, dxi].dot(inv_cov).dot(c[dyj, dxj])
                            v[k] = (1.0 if (i[k] == j[k]) else 0.0) - (1 + temp) / window_area
                            k += 1
    h, w = Transmission_map.shape
    L = scipy.sparse.csr_matrix((v, (i, j)), shape=(w * h, w * h))
    return L


def soft_matting(L, image, t_map):
    image_copy = image.copy()
    lamda = 10**(-4)
    U = np.identity(L.shape[0])
    t_map_mat = t_map * (L + lamda * U) / lamda
    return t_map_mat


def guided_filter(image, guide, diameter, epsilon):
    w_size = diameter + 1
    #
    # Extract the mean of the image and of the guide by blurring
    meanI = cv2.blur(image, (w_size, w_size))
    mean_Guide = cv2.blur(guide, (w_size, w_size))
    # Extract the auto-correlation
    II = image**2
    corrI = cv2.blur(II, (w_size, w_size))
    # Find the correlation between the image and the guide
    I_guide = image * guide
    corrIG = cv2.blur(I_guide, (w_size, w_size))
    # use the mean of the image to find the variance at each point
    varI = corrI - meanI**2
    covIG = corrIG - meanI * mean_Guide
    # covIG regularized with an epsilon factor
    a = covIG / (varI + epsilon)
    # a is used to find b
    b = mean_Guide - a * meanI
    meanA = cv2.blur(a, (w_size, w_size))
    meanB = cv2.blur(b, (w_size, w_size))
    transmission_rate = meanA * image + meanB
    # normalization of the transmission map
    transmission_rate = transmission_rate / np.max(transmission_rate)
    return transmission_rate

# ---------------------------------------------------------------------------------------
# ---------------------------------------------------------------------------------------
# reading in the input data
base_path = os.getcwd()
test_Haze = os.listdir(base_path + '/data_set/Training_Set/hazy')
test_GT = os.listdir(base_path + '/data_set/Training_Set/GT')
image = cv2.imread(base_path + "/data_set/Test_Set/Bridge.jpg", cv2.IMREAD_COLOR)

# extracting the minimum value from each 15 by 15 patch
min_image = min_filter(image)
# taking the per-pixel RGB minimum of the min-filtered image
dark_prior = rgb_min_image(min_image)

# displaying the results
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(20, 5))
plt.suptitle('Stages of Dark channel')
axes[0].imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
axes[0].set_title('original image')
axes[1].imshow(cv2.cvtColor(min_image, cv2.COLOR_BGR2RGB))
axes[1].set_title('The min 15 patch image')
axes[2].imshow(dark_prior, cmap='gray')
axes[2].set_title('The dark prior')
interactive(True)
plt.show()

A = A_estimator(image, dark_prior)
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(20, 5))
axes[0].imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
axes[0].set_title('original image')
axes[1].imshow(A, cmap='gray')
axes[1].set_title('The ambient light image')
Transmition_image = transmition_map(image, A, 0.95)
axes[2].imshow(Transmition_image, cmap='gray')
axes[2].set_title('The transmittance image')
plt.show()

fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(20, 5))
radience_image = Radience_cal(image, A, Transmition_image, 0.1)
axes[0].imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
axes[0].set_title('original image')
axes[1].imshow(Transmition_image, cmap='gray')
axes[1].set_title('The transmittance image')
axes[2].imshow(cv2.cvtColor(radience_image, cv2.COLOR_BGR2RGB))
axes[2].set_title('Haze Free image')
plt.show()

epsilon = 10**-8
img_gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# refine the transmission map using the guided filter
refine_Transmission_image = guided_filter(img_gray.astype(np.float32), Transmition_image.astype(np.float32), 100, epsilon)
refine_radience_image = Radience_cal(image, A, refine_Transmission_image, 0.1)

# displaying the refined results
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(20, 5))
radience_image = Radience_cal(image, A, Transmition_image, 0.1)
axes[0].imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
axes[0].set_title('original image')
axes[1].imshow(refine_Transmission_image, cmap='gray')
axes[1].set_title('The refined transmittance image')
axes[2].imshow(cv2.cvtColor(refine_radience_image, cv2.COLOR_BGR2RGB))
axes[2].set_title('Haze Free image')
interactive(False)
plt.show()
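The dark-channel computation above reduces to two min operations: a per-pixel minimum over the color channels, followed by a local spatial minimum. A minimal NumPy-only sketch (function and parameter names are ours, and edge padding stands in for the script's `BORDER_REFLECT`):

```python
import numpy as np

def dark_channel(image, patch=15):
    """Dark channel: per-pixel min over RGB, then a local min filter
    over a centered `patch` x `patch` window."""
    min_rgb = image.min(axis=2)            # min over the color channels
    r = patch // 2
    padded = np.pad(min_rgb, r, mode="edge")
    out = np.empty_like(min_rgb)
    h, w = min_rgb.shape
    for i in range(h):
        for j in range(w):
            # centered window around (i, j) in padded coordinates
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

Given the estimated atmospheric light A, the script above then forms the transmission map as t = 1 - w * dark_channel(I / A).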
import cv2
import numpy as np

roi = cv2.imread('rose_red.png')
hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)

target = cv2.imread('rose.png')
hsvt = cv2.cvtColor(target, cv2.COLOR_BGR2HSV)

# calculating the object histogram over hue and saturation
roihist = cv2.calcHist([hsv], [0, 1], None, [180, 256], [0, 180, 0, 256])

# normalize histogram and apply backprojection
cv2.normalize(roihist, roihist, 0, 255, cv2.NORM_MINMAX)
dst = cv2.calcBackProject([hsvt], [0, 1], roihist, [0, 180, 0, 256], 1)

# Now convolve with a circular disc to smooth the backprojection
disc = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
cv2.filter2D(dst, -1, disc, dst)

# threshold and binary AND
ret, thresh = cv2.threshold(dst, 50, 255, 0)
thresh = cv2.merge((thresh, thresh, thresh))
res = cv2.bitwise_and(target, thresh)

res = np.vstack((target, thresh, res))
cv2.imwrite('res.jpg', res)
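Histogram backprojection replaces each target pixel with the (normalized) frequency its value had in the ROI histogram. A 1-D NumPy sketch of that lookup (names are ours; OpenCV's `calcBackProject` above does the same over the 2-D hue-saturation histogram):

```python
import numpy as np

def backproject_1d(roi_vals, target_vals, bins=8, value_range=(0, 256)):
    """Each target value gets the normalized count its bin had in the
    ROI histogram: high where the value is common in the ROI, low elsewhere."""
    hist, edges = np.histogram(roi_vals, bins=bins, range=value_range)
    hist = hist / hist.max()                       # normalize to [0, 1]
    idx = np.clip(np.digitize(target_vals, edges) - 1, 0, bins - 1)
    return hist[idx]
```

The thresholding step in the script then keeps only pixels whose backprojected probability is high enough.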
# coding=utf-8
import time
from pathlib import Path
from typing import List, Dict, Union, Tuple

import pandas as pd
import numpy as np
from matplotlib import rcParams

rcParams["font.family"] = "monospace"
from matplotlib import pyplot as plt
import argparse
import scipy.stats

# plt.xkcd()
# set std params here because lazy
color_xa = "xkcd:pastel orange"
color_xb = "xkcd:pastel pink"
color_xc = "xkcd:forest green"

# lib functions
def nice_string_output(
    names: List[str],
    values: List[str],
    extra_spacing: int = 0,
):
    max_values = len(max(values, key=len))
    max_names = len(max(names, key=len))
    string = ""
    for name, value in zip(names, values):
        string += "{0:s} {1:>{spacing}} \n".format(
            name,
            value,
            spacing=extra_spacing + max_values + max_names - len(name),
        )
    return string[:-2]


def calculate_joint_rates_and_prob(*rates: Tuple[float]) -> Tuple[float, float]:
    """Returns joint rate and probability"""
    K = float(np.sum(rates))  # np.sum over the rates is a scalar, not indexable
    P = 1.0 / K
    return K, P


def run(args: Dict):
    TEST = args["test"]
    max_time = args["maxtime"]
    rate_production = args["alpha"]
    volume = args["volume"]
    rate_degradation = args["gamma"]
    plot_range = (0, 20)

    curr_time = 0
    curr_n_mrna = 0
    storage_mrna = [0]
    storage_time = [0]
    storage_steps = []
    while curr_time < max_time:
        curr_r_prod = rate_production * volume
        curr_r_deg = rate_degradation * curr_n_mrna
        # 1. calculate the total rate K
        K = sum((curr_r_deg, curr_r_prod))
        # 2. draw a random number a; the waiting time until the next event is tau = -ln(a)/K
        a = np.random.uniform(0, 1, size=1)[0]
        tau = -np.log(a) / K
        # 3. draw a new number b to pick which event occurs
        b = np.random.uniform(0, 1, size=1)[0]
        # TODO: make this argmin based for n rates
        if b < curr_r_prod / K:
            # produce
            curr_n_mrna += 1
            if TEST:
                print(f"Produced one MRNA! Count: {curr_n_mrna}")
        else:
            curr_n_mrna -= 1
            if TEST:
                print(f"Degraded one MRNA! 
Count: {curr_n_mrna}") curr_time += tau if TEST: print(f"Incremented time by: {tau:.2f} --> {curr_time:.2f}") storage_mrna.append(curr_n_mrna) storage_time.append(curr_time) storage_steps.append(tau) storage_mrna.pop() storage_time.pop() df = pd.DataFrame({"Time": storage_time, "N_MRNA": storage_mrna, "Steps":storage_steps}) print(df.tail()) print(f"N steps:{len(df)}") # plotting n_mean = sum(df["N_MRNA"] * df["Steps"]) / max_time param_str = nice_string_output( names=["alpha", "gamma", "volume", "max_time"], values=[ f"{v:.2f}" for v in (args["alpha"], args["gamma"], args["volume"], args["maxtime"]) ], ) poisson_lambda = args["alpha"] * args["volume"] / args["gamma"] result_str = nice_string_output( names=[r"Poisson param", "Mean", "Var", "Fano", "N steps", "Mean Step"], values=[ f"{v:.2f}" for v in ( poisson_lambda, n_mean, # df["N_MRNA"].mean(), df["N_MRNA"].var(), df["N_MRNA"].var() / n_mean, len(df), df["Time"].diff().mean(), ) ], extra_spacing=2, ) print(param_str) print(result_str) ax_dist: plt.Axes ax_traj: plt.Axes fig: plt.Figure fig, (ax_traj, ax_dist) = plt.subplots(nrows=2, figsize=(6, 8)) ax_traj.set_title("Time plot of Gillespie Algorithm") ax_traj.scatter( df["Time"], df["N_MRNA"], label="Simulated", c=color_xa, alpha=0.7, marker="2" ) ax_traj.axhline(poisson_lambda, ls="-.", label="Analytical Mean") ax_traj.axhspan( poisson_lambda - poisson_lambda ** 0.5, poisson_lambda + poisson_lambda ** 0.5, ls="-.", label="Analytical STD", alpha=0.2, color=color_xc, ) ax_traj.set_ylabel("Number of MRNA") ax_traj.set_xlabel("Time") ax_traj.legend(loc="upper left") ax_dist.hist( df["N_MRNA"], histtype="step", bins=max(plot_range), range=plot_range, label="MRNA Number", color=color_xa, density=True, ) ax_dist.set_xlabel("Number of MRNA") ax_dist.set_ylabel("Density") ax_dist.set_title( rf"Poisson with $\lambda = {poisson_lambda:.3f}$ vs. 
Empirical Distribution" )
    analytical_poisson = scipy.stats.poisson(poisson_lambda)
    analytical_poisson_x = np.arange(*plot_range)
    analytical_poisson_y = analytical_poisson.pmf(analytical_poisson_x)
    ax_dist.vlines(
        analytical_poisson_x + 0.5,
        0,
        analytical_poisson_y,
        colors=color_xc,
        label="Analytical Poisson PMF",
        alpha=0.5,
    )
    ax_dist.legend()
    ax_traj.text(s="PARAMS:\n" + param_str, x=max_time * 0.7, y=9)
    ax_dist.text(s="RESULTS:\n" + result_str, y=0.05, x=12)
    fig.tight_layout()
    basename = args["out_name"] + "_simulation." + args["filetype"]
    fig.savefig(args["outdir"] / basename)
    fig.clf()


def parse_arguments() -> Dict[str, Union[int, float, str, Path]]:
    parser = argparse.ArgumentParser(description="Chapter 7 exercise 8 code")
    parser.add_argument("-a", "--alpha", type=float, default=3.0)
    parser.add_argument("-g", "--gamma", type=float, default=0.5)
    parser.add_argument("-v", "--volume", type=float, default=1.0)
    parser.add_argument("-t", "--maxtime", type=int, default=1000)
    parser.add_argument("-o", "--outdir", type=str, default="./figs")
    parser.add_argument("--filetype", type=str, choices=["png", "pdf"], default="pdf")
    # `type=bool` maps any non-empty string to True, so use a flag instead
    parser.add_argument("--test", action="store_true")
    args = parser.parse_args()
    argdict = vars(args)  # returns a dict, easier to deal with
    if argdict["test"]:
        argdict["maxtime"] = 100
    po = Path(argdict["outdir"])
    if not po.exists():
        po.mkdir()
        print("Set output dir to: " + str(po.absolute()))
    argdict["outdir"] = po
    # Set output name here
    timestr = time.strftime("%Y%m%d_%H%M%S")
    out_str = "ch7_e8_" + timestr
    argdict["out_name"] = out_str
    return argdict


if __name__ == "__main__":
    args = parse_arguments()
    run(args)
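The loop in `run()` is the standard two-draw Gillespie step: an exponential waiting time drawn from the total rate K, then an event chosen with probability proportional to its rate. Isolated as a function (our own naming, not part of the script above):

```python
import numpy as np

def gillespie_step(n, alpha, gamma, volume, rng):
    """One birth-death Gillespie step: production at rate alpha*volume,
    degradation at rate gamma*n. Returns (tau, new_n)."""
    r_prod = alpha * volume
    r_deg = gamma * n
    K = r_prod + r_deg
    tau = -np.log(rng.uniform()) / K      # exponential waiting time
    if rng.uniform() < r_prod / K:        # pick event by rate share
        return tau, n + 1
    return tau, n - 1
```

At stationarity the copy number is Poisson with mean alpha*volume/gamma, which is what the script compares against its empirical histogram.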
import matplotlib.pyplot as plt import seaborn as sns import numpy as np import pandas as pd from IPython.display import display_html, HTML def data_set(): pd.set_option('display.max_columns', None) df = pd.read_stata("data/oreopoulos_resume_study_replication_data_file.dta") #fill_values = {'chinese': 0, 'indian': 0, "british":0, 'pakistani': 0, 'pakistani':0, "Chn_Cdn":0, "same_exp":0} #df.fillna(value=fill_values, inplace=True) return df def display_side_by_side(*args): html_str='' for df in args: html_str+=df.to_html() display_html(html_str.replace('table','table style="display:inline"'),raw=True) ############################################## Section 3 ############################################################## def second_tableA(): df = data_set() x1 = df[(df["type"] == 0) & (df["female"] == 0)].groupby(["name_ethnicity", "name"]).apply(len) x2 = df[(df["type"] == 1) & (df["female"] == 0)].groupby(["name_ethnicity", "name"]).apply(len) x3 = df[(df["type"] == 2) & (df["female"] == 0)].groupby(["name_ethnicity", "name"]).apply(len) x4 = df[(df["type"] == 3) & (df["female"] == 0)].groupby(["name_ethnicity", "name"]).apply(len) x5 = df[(df["type"] == 4) & (df["female"] == 0)].groupby(["name_ethnicity", "name"]).apply(len) y1 = df[(df["type"] == 0) & (df["female"] == 1)].groupby(["name_ethnicity", "name"]).apply(len) y2 = df[(df["type"] == 1) & (df["female"] == 1)].groupby(["name_ethnicity", "name"]).apply(len) y3 = df[(df["type"] == 2) & (df["female"] == 1)].groupby(["name_ethnicity", "name"]).apply(len) y4 = df[(df["type"] == 3) & (df["female"] == 1)].groupby(["name_ethnicity", "name"]).apply(len) y5 = df[(df["type"] == 4) & (df["female"] == 1)].groupby(["name_ethnicity", "name"]).apply(len) pd.options.display.float_format = '{:,.0f}'.format ma_names = np.transpose(pd.DataFrame([x1,x2,x3,x4,x5], index=["Type 0","Type 1", "Type 2", "Type 3", "Type 4"])).fillna(value=" ") ma_names = ma_names.rename_axis(["Name ethnicity", "Names"]) fe_names = 
np.transpose(pd.DataFrame([y1,y2,y3,y4,y5], index=["Type 0","Type 1", "Type 2", "Type 3", "Type 4"])).fillna(value=" ") fe_names = fe_names.rename_axis(["Name ethnicity", "Names"]) display_side_by_side(ma_names, fe_names) def second_tableB(): df = data_set() x1 = df[(df["type"] == 0) & (df["female"] == 0) & (df["callback"] == 1)].groupby(["name_ethnicity", "name"]).apply(len) x2 = df[(df["type"] == 1) & (df["female"] == 0) & (df["callback"] == 1)].groupby(["name_ethnicity", "name"]).apply(len) x3 = df[(df["type"] == 2) & (df["female"] == 0) & (df["callback"] == 1)].groupby(["name_ethnicity", "name"]).apply(len) x4 = df[(df["type"] == 3) & (df["female"] == 0) & (df["callback"] == 1)].groupby(["name_ethnicity", "name"]).apply(len) x5 = df[(df["type"] == 4) & (df["female"] == 0) & (df["callback"] == 1)].groupby(["name_ethnicity", "name"]).apply(len) y1 = df[(df["type"] == 0) & (df["female"] == 1) & (df["callback"] == 1)].groupby(["name_ethnicity", "name"]).apply(len) y2 = df[(df["type"] == 1) & (df["female"] == 1) & (df["callback"] == 1)].groupby(["name_ethnicity", "name"]).apply(len) y3 = df[(df["type"] == 2) & (df["female"] == 1) & (df["callback"] == 1)].groupby(["name_ethnicity", "name"]).apply(len) y4 = df[(df["type"] == 3) & (df["female"] == 1) & (df["callback"] == 1)].groupby(["name_ethnicity", "name"]).apply(len) y5 = df[(df["type"] == 4) & (df["female"] == 1) & (df["callback"] == 1)].groupby(["name_ethnicity", "name"]).apply(len) pd.options.display.float_format = '{:,.0f}'.format ma_names = np.transpose(pd.DataFrame([x1,x2,x3,x4,x5], index=["Type 0","Type 1", "Type 2", "Type 3", "Type 4"])).fillna(value=" ") ma_names = ma_names.rename_axis(["Name ethnicity", "Names"]) fe_names = np.transpose(pd.DataFrame([y1,y2,y3,y4,y5], index=["Type 0","Type 1", "Type 2", "Type 3", "Type 4"])).fillna(value=" ") fe_names = fe_names.rename_axis(["Name ethnicity", "Names"]) display_side_by_side(ma_names, fe_names) def third_table(df = data_set()): frame = [[], [], [], [], [], 
[], [], [], []] X = [] index=["Female", "Top 200 world ranking university", "Extra curricular activities listed", "Fluent in French and other languages", "Canadian master’s degree", "High quality work experience", "List Canadian references", "Accreditation of foreign education", "Permanent resident indicated"] columns = ["Type 0", "Type 1", "Type 2", "Type 3", "Type 4"] types = [0,1,2,3,4,5] for i in types: if i in [0,1,2,3,4]: frame[0].append(len(df[(df["type"] == i) & (df["female"] == 1)])/len(df[df["type"] == i])) frame[1].append(len(df[(df["type"] == i) & (df["ba_quality"] == 1)])/len(df[df["type"] == i])) frame[2].append(len(df[(df["type"] == i) & (df["extracurricular_skills"] == 1)])/len(df[df["type"] == i])) frame[3].append(len(df[(df["type"] == i) & (df["language_skills"] == 1)])/len(df[df["type"] == i])) frame[4].append(len(df[(df["type"] == i) & (df["ma"] == 1)])/len(df[df["type"] == i])) frame[5].append(len(df[(df["type"] == i) & (df["exp_highquality"] == 1)])/len(df[df["type"] == i])) frame[6].append(len(df[(df["type"] == i) & (df["reference"] == 1)])/len(df[df["type"] == i])) frame[7].append(len(df[(df["type"] == i) & (df["accreditation"] == 1)])/len(df[df["type"] == i])) frame[8].append(len(df[(df["type"] == i) & (df["legal"] == 1)])/len(df[df["type"] == i])) else: X.append(len(df[df["female"] == 1])/len(df)) X.append(len(df[df["ba_quality"] == 1])/len(df)) X.append(len(df[df["extracurricular_skills"] == 1])/len(df)) X.append(len(df[df["language_skills"] == 1])/len(df)) X.append(len(df[df["ma"] == 1])/len(df)) X.append(len(df[df["exp_highquality"] == 1])/len(df)) X.append(len(df[df["reference"] == 1])/len(df)) X.append(len(df[df["accreditation"] == 1])/len(df)) X.append(len(df[df["legal"] == 1])/len(df)) pd.options.display.float_format = '{:,.3f}'.format fr1 = pd.DataFrame(frame, columns=columns, index=index) fr2 = pd.DataFrame(X, columns=["Full sample"], index=index) third_table = pd.concat([fr2,fr1], axis=1) third_table = 
third_table.rename_axis("Characteristics of resume")
    dfp = pd.DataFrame(df["name_ethnicity"].value_counts() / len(df))
    dfp.columns = ["Full sample"]  # set_axis(..., inplace=True) was removed in pandas 2.0
    X = []
    for t in [0, 1, 2, 3, 4]:
        X.append(df[df["type"] == t].groupby("name_ethnicity").count()["firmid"] / len(df[df["type"] == t]))
    dfd = np.transpose(pd.DataFrame(X, index=("Type 0", "Type 1", "Type 2", "Type 3", "Type 4")))
    share_name = pd.concat([dfp, dfd], axis=1)
    share_name.style.set_caption("Test")
    share_name = share_name.rename_axis("Name ethnicity")
    display_side_by_side(third_table, share_name.fillna(0))


def count_name_frequency():
    df = data_set()
    sns.set_palette("cubehelix", 1)
    sns.set_style("whitegrid")
    plt.figure(num=None, figsize=(9, 6))
    fig = sns.countplot(data=df, x="name_ethnicity", hue="female")
    plt.xlabel("Name ethnicity")
    plt.ylabel("Frequency")
    plt.title("Frequency of Names")
    plt.show()
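The block-of-ten filter-then-groupby pattern in `second_tableA`/`second_tableB` can be collapsed into a single pivot table. A sketch on a toy frame (the data here is made up; only the column names mirror the study's):

```python
import pandas as pd

# Toy frame with the columns used above (hypothetical rows, not the study data).
df = pd.DataFrame({
    "type": [0, 0, 1, 1, 1],
    "female": [0, 1, 0, 0, 1],
    "callback": [1, 0, 1, 1, 0],
    "name": ["Alan", "Amy", "Alan", "Raj", "Amy"],
})

# Counts of callbacks per name and resume type in one step,
# instead of one filtered groupby per type.
counts = (df[df["callback"] == 1]
          .pivot_table(index="name", columns="type",
                       values="callback", aggfunc="count", fill_value=0))
```

A further `query("female == 0")` (or `== 1`) before the pivot reproduces the male/female split of the original tables.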
#!/usr/bin/env python # -*- coding: utf-8 -*- """Method Manager for Electrical Resistivity Tomography (ERT)""" import os.path import numpy as np import matplotlib.pyplot as plt import pygimli as pg from pygimli.frameworks import MeshMethodManager from .ertModelling import ERTModelling, ERTModellingReference from .ert import createInversionMesh, createGeometricFactors, estimateError from pygimli.utils import getSavePath class ERTManager(MeshMethodManager): """ERT Manager. Method Manager for Electrical Resistivity Tomography (ERT) Todo ---- * 3d * 3dtopo * complex on/off * closed geometry * transdim * singularity removal * ERT specific inversion options: * ... """ def __init__(self, data=None, **kwargs): """Create ERT Manager instance. Parameters ---------- data: :gimliapi:`GIMLI::DataContainerERT` | str You can initialize the Manager with data or give them a dataset when calling the inversion. Other Parameters ---------------- * useBert: bool [True] Use Bert forward operator instead of the reference implementation. * sr: bool [True] Calculate with singularity removal technique. Recommended but needs the primary potential. For flat earth cases the primary potential will be calculated analytical. For domains with topography the primary potential will be calculated numerical using a p2 refined mesh or you provide primary potentials with setPrimPot. 
""" self.useBert = kwargs.pop('useBert', True) self.sr = kwargs.pop('sr', True) super().__init__(data=data, **kwargs) self.inv.dataTrans = pg.trans.TransLogLU() def setSingularityRemoval(self, sr=True): """Turn singularity removal on or off.""" self.reinitForwardOperator(sr=True) def createForwardOperator(self, **kwargs): """Create and choose forward operator.""" verbose = kwargs.pop('verbose', False) self.useBert = kwargs.pop('useBert', self.useBert) self.sr = kwargs.pop('sr', self.sr) if self.useBert: pg.verbose('Create ERTModelling FOP') fop = ERTModelling(sr=self.sr, verbose=verbose) else: pg.verbose('Create ERTModellingReference FOP') fop = ERTModellingReference(**kwargs) return fop def load(self, fileName): """Load ERT data. Forwarded to :py:mod:`pygimli.physics.ert.load` Parameters ---------- fileName: str Filename for the data. Returns ------- data: :gimliapi:`GIMLI::DataContainerERT` """ self.data = pg.physics.ert.load(fileName) return self.data def createMesh(self, data=None, **kwargs): """Create default inversion mesh. Forwarded to :py:mod:`pygimli.physics.ert.createInversionMesh` """ d = data or self.data if d is None: pg.critical('Please provide a data file for mesh generation') return createInversionMesh(d, **kwargs) def setPrimPot(self, pot): """Set primary potential from external is not supported anymore.""" pg.critical("Not implemented.") def simulate(self, mesh, scheme, res, **kwargs): """Simulate an ERT measurement. Perform the forward task for a given mesh, resistivity distribution & measuring scheme and return data (apparent resistivity) or potentials. For complex resistivity, the apparent resistivities is complex as well. The forward operator itself only calculates potential values for the electrodes in the given data scheme. To calculate apparent resistivities, geometric factors (k) are needed. 
If there are no values k in the DataContainerERT scheme, the function tries to calculate them, either analytically or numerically by using a p2-refined version of the given mesh. TODO ---- * 2D + Complex + SR Args ---- mesh : :gimliapi:`GIMLI::Mesh` 2D or 3D Mesh to calculate for. res : float, array(mesh.cellCount()) | array(N, mesh.cellCount()) | list Resistivity distribution for the given mesh cells can be: . float for homogeneous resistivity (e.g. 1.0) . single array of length mesh.cellCount() . matrix of N resistivity distributions of length mesh.cellCount() . resistivity map as [[regionMarker0, res0], [regionMarker0, res1], ...] scheme : :gimliapi:`GIMLI::DataContainerERT` Data measurement scheme. Keyword Args ------------ verbose: bool[False] Be verbose. Will override class settings. calcOnly: bool [False] Use fop.calculate instead of fop.response. Useful if you want to force the calculation of impedances for homogeneous models. No noise handling. Solution is put as token 'u' in the returned DataContainerERT. noiseLevel: float [0.0] add normally distributed noise based on scheme['err'] or on noiseLevel if error>0 is not contained noiseAbs: float [0.0] Absolute voltage error in V returnArray: bool [False] Returns an array of apparent resistivities instead of a DataContainerERT returnFields: bool [False] Returns a matrix of all potential values (per mesh nodes) for each injection electrodes. Returns ------- DataContainerERT | array(data.size()) | array(N, data.size()) | array(N, mesh.nodeCount()): Data container with resulting apparent resistivity data and errors (if noiseLevel or noiseAbs is set). Optional returns a Matrix of rhoa values (for returnArray==True forces noiseLevel=0). In case of a complex valued resistivity model, phase values are returned in the DataContainerERT (see example below), or as an additionally returned array. 
Examples -------- # >>> from pygimli.physics import ert # >>> import pygimli as pg # >>> import pygimli.meshtools as mt # >>> world = mt.createWorld(start=[-50, 0], end=[50, -50], # ... layers=[-1, -5], worldMarker=True) # >>> scheme = ert.createData( # ... elecs=pg.utils.grange(start=-10, end=10, n=21), # ... schemeName='dd') # >>> for pos in scheme.sensorPositions(): # ... _= world.createNode(pos) # ... _= world.createNode(pos + [0.0, -0.1]) # >>> mesh = mt.createMesh(world, quality=34) # >>> rhomap = [ # ... [1, 100. + 0j], # ... [2, 50. + 0j], # ... [3, 10.+ 0j], # ... ] # >>> data = ert.simulate(mesh, res=rhomap, scheme=scheme, verbose=1) # >>> rhoa = data.get('rhoa').array() # >>> phia = data.get('phia').array() """ verbose = kwargs.pop('verbose', self.verbose) calcOnly = kwargs.pop('calcOnly', False) returnFields = kwargs.pop("returnFields", False) returnArray = kwargs.pop('returnArray', False) noiseLevel = kwargs.pop('noiseLevel', 0.0) noiseAbs = kwargs.pop('noiseAbs', 1e-4) seed = kwargs.pop('seed', None) sr = kwargs.pop('sr', self.sr) # segfaults with self.fop (test & fix) fop = self.createForwardOperator(useBert=self.useBert, sr=sr, verbose=verbose) fop.data = scheme fop.setMesh(mesh, ignoreRegionManager=True) rhoa = None phia = None isArrayData = False # parse the given res into mesh-cell-sized array if isinstance(res, int) or isinstance(res, float): res = np.ones(mesh.cellCount()) * float(res) elif isinstance(res, complex): res = np.ones(mesh.cellCount()) * res elif hasattr(res[0], '__iter__'): # ndim == 2 if len(res[0]) == 2: # res seems to be a res map # check if there are markers in the mesh that are not defined # the rhomap. better signal here before it results in errors meshMarkers = list(set(mesh.cellMarkers())) mapMarkers = [m[0] for m in res] if any([mark not in mapMarkers for mark in meshMarkers]): left = [m for m in meshMarkers if m not in mapMarkers] pg.critical("Mesh contains markers without assigned " "resistivities {}. 
Please fix given " "rhomap.".format(left)) res = pg.solver.parseArgToArray(res, mesh.cellCount(), mesh) else: # probably nData x nCells array # better check for array data here isArrayData = True if isinstance(res[0], np.complex) or isinstance(res, pg.CVector): pg.info("Complex resistivity values found.") fop.setComplex(True) else: fop.setComplex(False) if not scheme.allNonZero('k') and not calcOnly: if verbose: pg.info('Calculate geometric factors.') scheme.set('k', fop.calcGeometricFactor(scheme)) ret = pg.DataContainerERT(scheme) # just to be sure that we don't work with artifacts ret['u'] *= 0.0 ret['i'] *= 0.0 ret['r'] *= 0.0 if isArrayData: rhoa = np.zeros((len(res), scheme.size())) for i, r in enumerate(res): rhoa[i] = fop.response(r) if verbose: print(i, "/", len(res), " : ", pg.dur(), "s", "min r:", min(r), "max r:", max(r), "min r_a:", min(rhoa[i]), "max r_a:", max(rhoa[i])) else: # res is single resistivity array if len(res) == mesh.cellCount(): if calcOnly: fop.mapERTModel(res, 0) dMap = pg.core.DataMap() fop.calculate(dMap) if fop.complex(): pg.critical('Implement me') else: ret["u"] = dMap.data(scheme) ret["i"] = np.ones(ret.size()) if returnFields: return pg.Matrix(fop.solution()) return ret else: if fop.complex(): res = pg.utils.squeezeComplex(res) resp = fop.response(res) if fop.complex(): rhoa, phia = pg.utils.toPolar(resp) else: rhoa = resp else: print(mesh) print("res: ", res) raise BaseException( "Simulate called with wrong resistivity array.") if not isArrayData: ret['rhoa'] = rhoa if phia is not None: ret.set('phia', phia) else: ret.set('rhoa', rhoa[0]) if phia is not None: ret.set('phia', phia[0]) if returnFields: return pg.Matrix(fop.solution()) if noiseLevel > 0: # if errors in data noiseLevel=1 just triggers if not ret.allNonZero('err'): # 1A and #100µV ret.set('err', self.estimateError(ret, relativeError=noiseLevel, absoluteUError=noiseAbs, absoluteCurrent=1)) print("Data error estimate (min:max) ", min(ret('err')), ":", max(ret('err'))) 
rhoa *= 1. + pg.randn(ret.size(), seed=seed) * ret('err') ret.set('rhoa', rhoa) ipError = None if phia is not None: if scheme.allNonZero('iperr'): ipError = scheme('iperr') else: # np.abs(self.data("phia") +TOLERANCE) * 1e-4absoluteError if noiseLevel > 0.5: noiseLevel /= 100. if 'phiErr' in kwargs: ipError = np.ones(ret.size()) * kwargs.pop('phiErr') \ / 1000 else: ipError = abs(ret["phia"]) * noiseLevel if verbose: print("Data IP abs error estimate (min:max) ", min(ipError), ":", max(ipError)) phia += pg.randn(ret.size(), seed=seed) * ipError ret['iperr'] = ipError ret['phia'] = phia # check what needs to be setup and returned if returnArray: if phia is not None: return rhoa, phia else: return rhoa return ret def checkData(self, data=None): """Return data from container. THINKABOUT: Data will be changed, or should the manager keep a copy? """ data = data or pg.DataContainerERT(self.data) if isinstance(data, pg.DataContainer): if not data.allNonZero('k'): pg.warn("Data file contains no geometric factors (token='k').") data['k'] = createGeometricFactors(data, verbose=True) if self.fop.complex(): if not data.haveData('rhoa'): pg.critical('Datacontainer have no "rhoa" values.') if not data.haveData('ip'): pg.critical('Datacontainer have no "ip" values.') # pg.warn('check sign of phases') rhoa = data['rhoa'] phia = -data['ip']/1000 # 'ip' is defined for neg mrad. 
# we should think about some 'phia' in rad return pg.utils.squeezeComplex(pg.utils.toComplex(rhoa, phia)) else: if not data.haveData('rhoa'): if data.allNonZero('r'): pg.info("Creating apparent resistivies from " "impedences rhoa = r * k") data['rhoa'] = data['r'] * data['k'] elif data.allNonZero('u') and data.allNonZero('i'): pg.info("Creating apparent resistivies from " "voltage and currrent rhoa = u/i * k") data['rhoa'] = data['u']/data['i'] * data['k'] else: pg.critical("Datacontainer have neither: " "apparent resistivies 'rhoa', " "or impedances 'r', " "or voltage 'u' along with current 'i'.") if any(data['rhoa'] < 0) and \ isinstance(self.inv.dataTrans, pg.core.TransLog): print(pg.find(data['rhoa'] < 0)) print(data['rhoa'][data['rhoa'] < 0]) pg.critical("Found negative apparent resistivities. " "These can't be processed with logarithmic " "data transformation. You should consider to " "filter them out using " "data.remove(data['rhoa'] < 0).") return data['rhoa'] return data def checkErrors(self, err, dataVals): """Return relative error. Default we assume 'err' are relative vales. """ if isinstance(err, pg.DataContainer): rae = None if not err.allNonZero('err'): pg.warn("Datacontainer have no 'err' values. " "Fallback of 1mV + 3% using " "ERTManager.estimateError(...) ") rae = self.estimateError(err, absoluteError=0.001, relativeError=0.03) else: rae = err['err'] if self.fop.complex(): ipe = None if err.haveData('iperr'): amp, phi = pg.utils.toPolar(dataVals) # assuming ipErr are absolute dPhi in mrad ipe = err['iperr'] / abs((phi*1000)) else: pg.warn("Datacontainer have no 'iperr' values. " "Fallback set to 0.01") ipe = np.ones(err.size()) * 0.01 return pg.cat(rae, ipe) return rae # not set if err is no DataContainer (else missing) def estimateError(self, data=None, **kwargs): """Estimate error composed of an absolute and a relative part. Parameters ---------- absoluteError : float [0.001] Absolute data error in Ohm m. Need 'rhoa' values in data. 
relativeError : float [0.03] relative error level in %/100 absoluteUError : float [0.001] Absolute potential error in V. Need 'u' values in data. Or calculate them from 'rhoa', 'k' and absoluteCurrent if no 'i' is given absoluteCurrent : float [0.1] Current level in A for reconstruction for absolute potential V Returns ------- error : Array """ if data is None: # error = estimateError(self.data, **kwargs) self.data["err"] = error else: # the old way: better use ert.estimateError directly error = estimateError(data, **kwargs) return error def coverage(self): """Coverage vector considering the logarithmic transformation.""" covTrans = pg.core.coverageDCtrans(self.fop.jacobian(), 1.0 / self.inv.response, 1.0 / self.inv.model) paramSizes = np.zeros(len(self.inv.model)) for c in self.fop.paraDomain.cells(): paramSizes[c.marker()] += c.size() return np.log10(covTrans / paramSizes) def standardizedCoverage(self, threshhold=0.01): """Return standardized coverage vector (0|1) using thresholding.""" return 1.0*(abs(self.coverage()) > threshhold) def saveResult(self, folder=None, size=(16, 10), **kwargs): """Save all results in the specified folder. 
Saved items are: Inverted profile Resistivity vector Coverage vector Standardized coverage vector Mesh (bms and vtk with results) """ subfolder = self.__class__.__name__ path = getSavePath(folder, subfolder) pg.info('Saving resistivity data to: {}'.format(path)) np.savetxt(path + '/resistivity.vector', self.model) np.savetxt(path + '/resistivity-cov.vector', self.coverage()) np.savetxt(path + '/resistivity-scov.vector', self.standardizedCoverage()) m = pg.Mesh(self.paraDomain) m['Resistivity'] = self.paraModel(self.model) m['Resistivity (log10)'] = np.log10(m['Resistivity']) m['Coverage'] = self.coverage() m['S_Coverage'] = self.standardizedCoverage() m.exportVTK(os.path.join(path, 'resistivity')) m.saveBinaryV2(os.path.join(path, 'resistivity-pd')) self.fop.mesh().save(os.path.join(path, 'resistivity-mesh')) if self.paraDomain.dim() == 2: fig, ax = plt.subplots(figsize=size) self.showResult(ax=ax, coverage=self.coverage(), **kwargs) fig.savefig(path + '/resistivity.pdf', bbox_inches="tight") return path, fig, ax return path if __name__ == "__main__": pass
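The `estimateError` docstring above describes an error model composed of a relative part and an absolute voltage part, where the voltage is reconstructed from `rhoa`, `k`, and an assumed injection current. A minimal numpy sketch of that additive model (the function name and the exact combination are assumptions for illustration, not pyGIMLi's actual implementation):

```python
import numpy as np

def estimate_error(rhoa, k, relative_error=0.03,
                   absolute_u_error=100e-6, current=0.1):
    """Additive error model: relative part + absolute voltage part.

    The voltage is reconstructed as u = rhoa / k * i, and the absolute
    voltage error is expressed relative to that voltage.
    """
    u = rhoa / k * current
    return relative_error + absolute_u_error / np.abs(u)
```

For a 100 Ohm·m reading with geometric factor 1 and 0.1 A current, the reconstructed voltage is 10 V, so a 100 µV absolute error contributes only 1e-5 on top of the 3 % relative floor.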
import numpy as np
import matplotlib.pyplot as plt


def get_data(line, dateIndex=1, rainfallIndex=3):
    """Return the date as a string YYYYMMDDHHMM and the rainfall as float32."""
    a = line.split(";")
    return (a[dateIndex], np.float32(a[rainfallIndex]))


def convert_date_pretty(date):
    """Reformat the date string from the .txt file as DD.MM.YYYY HH:MM."""
    return "{}.{}.{} {}:{}".format(date[6:8], date[4:6], date[0:4],
                                   date[8:10], date[10:12])


def gen_statistic(path, bins=300, show=False):
    """Generate some statistics about the selected DWD .txt file."""
    assert "02712" in path  # check the station ID is valid!
    rainsum_month = np.zeros(12, dtype=np.float32)
    rainsum_reference = np.array([100, 20, 50, 35, 50, 50, 40, 40, 45, 28, 0, 0])
    all_data = []
    n_rain = 0
    max_value = 0
    date = "NEVER"

    with open(path, "r") as fp:
        fp.readline()  # skip the header line
        for line in fp:
            datum, value = get_data(line)
            all_data.append(value)
            if value > 0.0:
                n_rain += 1
            month = int(datum[4:6]) - 1
            rainsum_month[month] += value
            if value > max_value:
                max_value = value
                date = datum
            if value > 2.50:
                print(convert_date_pretty(datum), value)

    if show:
        plt.hist(all_data, bins=bins, log=True)
        plt.show()

    np_data = np.array(all_data, dtype=np.float32)
    print("Evaluated file:", path)
    # ToDo: 0.99 quantile?
    print("Maximum value: {:1.2f} reached on {}".format(max_value, date))
    print("Rain/total: {:d}%".format(int(n_rain * 100 / np_data.size)))
    print("Total rainfall: {:1.2f}".format(np_data.sum()))
    print("Average rainfall: {:1.2f}".format(np_data.sum() / np_data.size))
    print("Average rainfall (when raining): {:1.2f}".format(np_data.sum() / n_rain))
    print("Month\tReference\tMeasured\tDiff")
    for i in range(12):
        rel_err = -1
        if rainsum_reference[i] > 0:
            rel_err = abs((rainsum_reference[i] - rainsum_month[i])
                          / rainsum_reference[i])
        print("{}:\t{}\t{:1.2f}\t{:1.2f}".format(i, rainsum_reference[i],
                                                 rainsum_month[i], rel_err))


if __name__ == '__main__':
    # Path to the local copy of the DWD data file
    PATH = "produkt_ein_min_rr_20180101_20181125_02712"
    gen_statistic(PATH + ".txt")
    # Result: two values > 2.50
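The `# ToDo: 0.99 quantile?` note can be answered directly with `np.quantile`. A small sketch — restricting the quantile to wet minutes is an assumption; over all minutes the many zero readings would dominate the distribution:

```python
import numpy as np

def rain_quantile(values, q=0.99):
    # Quantile over the non-zero (rainy) intervals only; dry minutes
    # would otherwise pull every quantile toward zero.
    wet = values[values > 0]
    return float(np.quantile(wet, q))
```

With per-minute data this gives a natural threshold for flagging unusually heavy readings, analogous to the hard-coded `value > 2.50` check above.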
import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

# Load the data and impute missing values with the column means
df = pd.read_csv('maternal_mortality.csv')
df.fillna(df.mean(), inplace=True)
print(df.head())
df.to_csv('new.csv')

# print shape of the data
# print(df.shape)

np.random.seed(5)
data = df.values  # positional slicing needs the underlying array
X = data[:, 0:7]
Y = data[:, 7]

# Model layers
model = Sequential()
model.add(Dense(12, input_dim=7, activation='relu'))
model.add(Dense(8, activation='relu'))
# Sigmoid, not softmax: a softmax over a single unit is always 1.0
model.add(Dense(1, activation='sigmoid'))

# Compiling the model
model.compile(Adam(lr=0.0001), loss='binary_crossentropy',
              metrics=['accuracy'])

# Fitting the model
model.fit(X, Y, epochs=1000, batch_size=10)

# Evaluate the model
scores = model.evaluate(X, Y)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1] * 100))
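One pitfall worth spelling out: a softmax over a single output unit normalizes a one-element vector and therefore always returns 1.0, so such an output layer carries no useful gradient signal; for binary cross-entropy a sigmoid output is the standard choice. A plain numpy illustration:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax along the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

logits = np.array([[-3.0], [0.0], [5.0]])
print(softmax(logits))   # every row is [1.] regardless of the logit
print(sigmoid(logits))   # varies with the logit, as a binary output should
```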
import pandas as pd
import numpy as np
from sklearn.utils import shuffle

df = pd.read_csv("../allchest.csv")

# Keep the listed subject IDs and drop labels above 4
df_list = list(map(lambda x: df[df['ID'] == x],
                   [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17]))
df_list = list(map(lambda x: x[x['label'] <= 4], df_list))
del df

train = []
test = []
for df in df_list:
    df.reset_index(inplace=True, drop=True)
    df = shuffle(df, random_state=42)
    df.reset_index(inplace=True, drop=True)
    # 60/40 train/test split, done separately for each subject
    train.append(df.iloc[:int(0.6 * df.shape[0]), :])
    test.append(df.iloc[int(0.6 * df.shape[0]):, :])
del df_list

train = pd.concat(train, axis=0)
test = pd.concat(test, axis=0)
train.reset_index(inplace=True, drop=True)
test.reset_index(inplace=True, drop=True)

columns = ['label', 'ID', 'chestACCx', 'chestACCy', 'chestACCz', 'chestECG',
           'chestEMG', 'chestEDA', 'chestTemp', 'chestResp']
train = train[columns]
test = test[columns]

train = np.array(train, dtype=np.float32)
test = np.array(test, dtype=np.float32)
np.save("train.npy", train)
np.save("test.npy", test)
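The loop above shuffles with a fixed seed and cuts at 60 % separately for each subject, so both sets contain rows from every subject ID. The same split can be sketched at index level (the function name is illustrative, not part of the script):

```python
import numpy as np

def split_per_subject(n_rows, train_frac=0.6, seed=42):
    # Shuffle the row indices of one subject and cut at train_frac,
    # mirroring the per-ID shuffle-and-slice done above.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_rows)
    cut = int(train_frac * n_rows)
    return idx[:cut], idx[cut:]
```

Splitting per subject (rather than over the concatenated frame) keeps the subject proportions identical in train and test.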
import time

import numpy as np

from fmodeling.rock_physics.Mediums.DEM import DEM
from fmodeling.rock_physics.Mediums.DEMSlb import DEM as DEMSlb


def main():
    # Matrix properties
    Km = 77.0    # GPa
    Gm = 32.0    # GPa
    rhom = 2.71  # g/cm3

    # Fluid properties
    Kf = 3.0    # GPa
    rhof = 1.0  # g/cm3

    # Porosity
    phimax = 1
    phi = 0.1
    alpha = 0.1

    # Inclusion properties.
    # In this example a mixture of three inclusion types is used:
    #   - 30% of 0.01 aspect ratio
    #   - 50% of 0.15 aspect ratio
    #   - 20% of 0.80 aspect ratio
    alphas = np.array([0.01, 0.15, 0.8])
    volumes = np.array([0.3, 0.5, 0.2]) * phimax

    time_point_1 = time.time()
    Km_DEM, Gm_DEM, phi_DEM = DEM(Km, Gm, np.array([0]), np.array([0]),
                                  np.array([alpha]), np.array([phi * phimax]))
    time_point_2 = time.time()
    Km_DEMSlb, Gm_DEMSlb = DEMSlb(Km, Gm, 0.0, 0.0, phi, alpha, phimax)
    time_point_3 = time.time()

    print(Km_DEM, Gm_DEM, phi_DEM)
    print(Km_DEMSlb, Gm_DEMSlb)
    print(f"Old calculation time: {time_point_2 - time_point_1}")
    print(f"New calculation time: {time_point_3 - time_point_2}")


if __name__ == '__main__':
    main()
import threading import numpy as np import tensorflow as tf import pylab import time import gym import os from keras.layers import Dense, Input, Lambda from keras.models import Model from keras.optimizers import Adam from keras import backend as K # global variables for threading episode = 0 scores = [] EPISODES = 5000 model_path = os.path.join(os.getcwd(), 'save_model') graph_path = os.path.join(os.getcwd(), 'save_graph') if not os.path.isdir(model_path): os.mkdir(model_path) if not os.path.isdir(graph_path): os.mkdir(graph_path) # This is A3C(Asynchronous Advantage Actor Critic) agent(global) for the Cartpole # In this example, we use A3C algorithm class A3CAgent: def __init__(self, state_size, action_size, env_name): # get size of state and action self.state_size = state_size self.action_size = action_size # get gym environment name self.env_name = env_name # these are hyper parameters for the A3C self.actor_lr = 0.0001 self.critic_lr = 0.001 self.discount_factor = .9 self.hidden1, self.hidden2 = 24, 24 self.threads = 8 # create model for actor and critic network self.actor, self.critic = self.build_model() # method for training actor and critic network self.optimizer = [self.actor_optimizer(), self.critic_optimizer()] self.sess = tf.InteractiveSession() K.set_session(self.sess) self.sess.run(tf.global_variables_initializer()) # approximate policy and value using Neural Network # actor -> state is input and probability of each action is output of network # critic -> state is input and value of state is output of network # actor and critic network share first hidden layer def build_model(self): state = Input(batch_shape=(None, self.state_size)) actor_input = Dense(self.hidden1, input_dim=self.state_size, activation='relu')(state) actor_hidden = Dense(self.hidden2, activation='relu')(actor_input) mu_0 = Dense(self.action_size, activation='tanh')(actor_hidden) sigma_0 = Dense(self.action_size, activation='softplus')(actor_hidden) mu = Lambda(lambda x: x * 2)(mu_0) 
sigma = Lambda(lambda x: x + 0.0001)(sigma_0) critic_input = Dense(self.hidden1, input_dim=self.state_size, activation='relu')(state) value_hidden = Dense(self.hidden2, activation='relu', kernel_initializer='he_uniform')(critic_input) state_value = Dense(1, activation='linear', kernel_initializer='he_uniform')(value_hidden) actor = Model(inputs=state, outputs=(mu, sigma)) critic = Model(inputs=state, outputs=state_value) actor._make_predict_function() critic._make_predict_function() actor.summary() critic.summary() return actor, critic # make loss function for Policy Gradient # [log(action probability) * advantages] will be input for the back prop # we add entropy of action probability to loss def actor_optimizer(self): action = K.placeholder(shape=(None,1)) advantages = K.placeholder(shape=(None,1)) # mu = K.placeholder(shape=(None, self.action_size)) # sigma_sq = K.placeholder(shape=(None, self.action_size)) mu, sigma_sq = self.actor.output pdf = 1. / K.sqrt(2. * np.pi * sigma_sq) * K.exp(-K.square(action - mu) / (2. * sigma_sq)) log_pdf = K.log(pdf + K.epsilon()) entropy = K.sum(0.5 * (K.log(2. 
* np.pi * sigma_sq) + 1.)) exp_v = log_pdf * advantages exp_v = K.sum(exp_v + 0.01 * entropy) actor_loss = -exp_v optimizer = Adam(lr=self.actor_lr) updates = optimizer.get_updates(self.actor.trainable_weights, [], actor_loss) train = K.function([self.actor.input, action, advantages], [], updates=updates) return train # make loss function for Value approximation def critic_optimizer(self): discounted_reward = K.placeholder(shape=(None, 1)) value = self.critic.output loss = K.mean(K.square(discounted_reward - value)) optimizer = Adam(lr=self.critic_lr) updates = optimizer.get_updates(self.critic.trainable_weights, [], loss) train = K.function([self.critic.input, discounted_reward], [], updates=updates) return train # make agents(local) and start training def train(self): # self.load_model('./save_model/cartpole_a3c.h5') agents = [Agent(i, self.actor, self.critic, self.optimizer, self.env_name, self.discount_factor, self.action_size, self.state_size) for i in range(self.threads)] for agent in agents: agent.start() while True: time.sleep(20) plot = scores[:] pylab.plot(range(len(plot)), plot, 'b') pylab.savefig("./save_graph/cartpole_a3c.png") self.save_model('./save_model/cartpole_a3c.h5') def save_model(self, name): self.actor.save_weights(name + "_actor.h5") self.critic.save_weights(name + "_critic.h5") def load_model(self, name): self.actor.load_weights(name + "_actor.h5") self.critic.load_weights(name + "_critic.h5") # This is Agent(local) class for threading class Agent(threading.Thread): def __init__(self, index, actor, critic, optimizer, env_name, discount_factor, action_size, state_size): threading.Thread.__init__(self) self.states = [] self.rewards = [] self.actions = [] self.index = index self.actor = actor self.critic = critic self.optimizer = optimizer self.env_name = env_name self.discount_factor = discount_factor self.action_size = action_size self.state_size = state_size # Thread interactive with environment def run(self): global episode env = 
gym.make(self.env_name) while episode < EPISODES: state = env.reset() score = 0 step = 0 while True: action = self.get_action(state) next_state, reward, done, _ = env.step(action) reward /= 10 score += reward step += 1 state = list(state) action = action[0] self.memory(state, action, reward) state = next_state if done: episode += 1 print("episode: ", episode, "/ score : ", score, "/ step : ", step) scores.append(score) self.train_episode(score != 500) break # In Policy Gradient, Q function is not available. # Instead agent uses sample returns for evaluating policy def discount_rewards(self, rewards, done=True): discounted_rewards = np.zeros_like(rewards) running_add = 0 if not done: running_add = self.critic.predict(np.reshape(self.states[-1], (1, self.state_size)))[0] for t in reversed(range(0, len(rewards))): running_add = running_add * self.discount_factor + rewards[t] discounted_rewards[t] = running_add return discounted_rewards # save <s, a ,r> of each step # this is used for calculating discounted rewards def memory(self, state, action, reward): self.states.append(state) self.actions.append(action) self.rewards.append(reward) # update policy network and value network every episode def train_episode(self, done): discounted_rewards = self.discount_rewards(self.rewards, done) # print("states size : ",len(self.states)," ", len(self.states[0])) # print("actions_size : ",len(self.actions)) states = np.asarray(self.states, dtype='float32') values = self.critic.predict(states) # print("value : ", values.shape) # values = np.reshape(values, (len(values), 1)) # print("value2 : ", values.shape) advantages = discounted_rewards - values # action = np.array(self.actions) # print(action.shape) # print(advantages.shape) self.optimizer[0]([self.states, self.actions, advantages]) self.optimizer[1]([self.states, discounted_rewards]) self.states, self.actions, self.rewards = [], [], [] def get_action(self, state): mu, sigma_sq = self.actor.predict(np.reshape(state, [1, 
self.state_size])) # sigma_sq = np.log(np.exp(sigma_sq + 1)) epsilon = np.random.randn(self.action_size) action = mu + np.sqrt(sigma_sq) * epsilon action = np.clip(action, -2, 2) return action if __name__ == "__main__": # env_name = 'CartPole-v1' env_name = 'Pendulum-v0' env = gym.make(env_name) state_size = env.observation_space.shape[0] action_size = env.action_space.shape[0] print("action size : ", action_size) print("state size : ", state_size) action_bound = [env.action_space.low, env.action_space.high] action_gap = env.action_space.high - env.action_space.low env.close() global_agent = A3CAgent(state_size, action_size, env_name) global_agent.train()
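The actor loss in `actor_optimizer` is built from the log-density of a Gaussian policy weighted by the advantage, plus an entropy bonus. The same two quantities in plain numpy, useful for sanity-checking the Keras expressions (a sketch of the math, not the training code itself):

```python
import numpy as np

def gaussian_log_pdf(action, mu, sigma_sq):
    # log N(action | mu, sigma_sq), the term the actor loss
    # multiplies by the advantage.
    return -0.5 * np.log(2.0 * np.pi * sigma_sq) \
           - (action - mu) ** 2 / (2.0 * sigma_sq)

def gaussian_entropy(sigma_sq):
    # Differential entropy of a 1-D Gaussian, the exploration bonus
    # added (scaled by 0.01) to the actor objective.
    return 0.5 * (np.log(2.0 * np.pi * sigma_sq) + 1.0)
```

Note that the entropy depends only on the variance: widening `sigma_sq` increases the bonus, which is exactly what keeps the policy from collapsing too early.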
import pandas as pd from pprint import pprint from glob import glob import numpy as np import re import csv import sys followUp = {} ack = {} nonIntimate = {} intimate = {} featureList = {} questionType_DND = {} questionType_PN = {} discriminativeVectors = [] nonDiscriminativeVectors = [] def readHelperData(): global followUp, ack, nonIntimate, intimate, questionType_PN, questionType_DND utterrances = pd.read_csv('data/misc/IdentifyingFollowUps.csv') disc_nondisc = pd.read_csv('data/misc/DND_Annotations.csv') pos_neg = pd.read_csv('data/misc/PN_Annotations.csv') # Discriminative/Non-discriminative annotations for i in xrange(len(disc_nondisc)): question = disc_nondisc.iloc[i]['Questions'] qType = disc_nondisc.iloc[i]['Annotations'] questionType_DND[question] = qType # Positive/Negative annotations for i in xrange(len(pos_neg)): question = pos_neg.iloc[i]['Questions'] qType = pos_neg.iloc[i]['Annotations'] questionType_PN[question] = qType for item in utterrances.itertuples(): if item[3] == "#follow_up" and item[1] not in followUp: followUp[item[1]] = item[2] elif item[3] == "#ack" and item[1] not in ack: ack[item[1]] = item[2] elif item[3] == "#non_int" and item[1] not in nonIntimate: nonIntimate[item[1]] = item[2] elif item[3] == "#int" and item[1] not in intimate: intimate[item[1]] = item[2] def readTranscript(): global featureList transcriptFiles = glob(sys.argv[1] + '[0-9][0-9][0-9]_P/[0-9][0-9][0-9]_TRANSCRIPT.csv') for i in range(0, len(transcriptFiles)): t = pd.read_csv(transcriptFiles[i], delimiter='\t') t = t.fillna("") captureStarted = False startTime = 0.0 endTime = 0.0 prevQuestion = "" participantNo = transcriptFiles[i][-18:-15] for j in xrange(len(t)): question = re.search(".*\((.*)\)$", t.iloc[j]['value']) if question is not None: question = question.group(1) else: question = t.iloc[j]['value'] question = question.strip() if t.iloc[j]['speaker'] == 'Ellie': if question in nonIntimate and captureStarted: if (participantNo, prevQuestion) not in 
featureList: featureList[(participantNo, prevQuestion)] = [startTime, endTime] else: featureList[(participantNo, prevQuestion)][1] = endTime captureStarted = False elif question in intimate and question in questionType_DND and captureStarted: if (participantNo, prevQuestion) not in featureList: featureList[(participantNo, prevQuestion)] = [startTime, endTime] else: featureList[(participantNo, prevQuestion)][1] = endTime startTime = t.iloc[j]['start_time'] endTime = t.iloc[j]['stop_time'] prevQuestion = question elif question in intimate and question in questionType_DND and not captureStarted: startTime = t.iloc[j]['start_time'] endTime = t.iloc[j]['stop_time'] prevQuestion = question captureStarted = True elif question in intimate and question not in questionType_DND and captureStarted: if (participantNo, prevQuestion) not in featureList: featureList[(participantNo, prevQuestion)] = [startTime, endTime] else: featureList[(participantNo, prevQuestion)][1] = endTime captureStarted = False elif question in followUp or question in ack and captureStarted: endTime = t.iloc[j]['stop_time'] elif t.iloc[j]['speaker'] == 'Participant' and captureStarted: endTime = t.iloc[j]['stop_time'] def readCLM_DND(): groupByQuestion = {} dFile = open('data/disc_nondisc/discriminative_CLM.csv', 'w') ndFile = open('data/disc_nondisc/nondiscriminative_CLM.csv', 'w') dWriter = csv.writer(dFile) ndWriter = csv.writer(ndFile) header = ["video", "question", "starttime", "endtime","frame", "timestamp", "confidence", "success", "x0", "x1", "x2", "x3", "x4", "x5", "x6", "x7", "x8", "x9", "x10", "x11", "x12", "x13", "x14", "x15", "x16", "x17", "x18", "x19", "x20", "x21", "x22", "x23", "x24", "x25", "x26", "x27", "x28", "x29", "x30", "x31", "x32", "x33", "x34", "x35", "x36", "x37", "x38", "x39", "x40", "x41", "x42", "x43", "x44", "x45", "x46", "x47", "x48", "x49", "x50", "x51", "x52", "x53", "x54", "x55", "x56", "x57", "x58", "x59", "x60", "x61", "x62", "x63", "x64", "x65", "x66", "x67", "y0", 
"y1", "y2", "y3", "y4", "y5", "y6", "y7", "y8", "y9", "y10", "y11", "y12", "y13", "y14", "y15", "y16", "y17", "y18", "y19", "y20", "y21", "y22", "y23", "y24", "y25", "y26", "y27", "y28", "y29", "y30", "y31", "y32", "y33", "y34", "y35", "y36", "y37", "y38", "y39", "y40", "y41", "y42", "y43", "y44", "y45", "y46", "y47", "y48", "y49", "y50", "y51", "y52", "y53", "y54", "y55", "y56", "y57", "y58", "y59", "y60", "y61", "y62", "y63", "y64", "y65", "y66", "y67"] dWriter.writerow(header) ndWriter.writerow(header) for item in featureList: if item[0] not in groupByQuestion: groupByQuestion[item[0]] = [(item[1], featureList[item])] else: groupByQuestion[item[0]].append((item[1], featureList[item])) for item in groupByQuestion: fileName = sys.argv[1] + item + '_P/' + item + '_CLM_features.txt' f = pd.read_csv(fileName, delimiter=', ') for instance in groupByQuestion[item]: startTime = instance[1][0] endTime = instance[1][1] startFrame = f.ix[(f['timestamp'] - startTime).abs().argsort()[:1]].index.tolist()[0] endFrame = f.ix[(f['timestamp'] - endTime).abs().argsort()[:1]].index.tolist()[0] features = f.ix[startFrame:endFrame].mean(0).tolist() vector = instance[1][:] vector += features vector.insert(0, instance[0]) vector.insert(0, item) vector = np.asarray(vector) # print item, instance[0], startTime, endTime if questionType_DND[instance[0]] == 'D': dWriter.writerow(vector) else: ndWriter.writerow(vector) dFile.close() ndFile.close() def readCLM_PN(): groupByQuestion = {} pFile = open('data/pos_neg/positive_CLM.csv', 'w') nFile = open('data/pos_neg/negative_CLM.csv', 'w') pWriter = csv.writer(pFile) nWriter = csv.writer(nFile) header = ["video", "question", "starttime", "endtime","frame", "timestamp", "confidence", "success", "x0", "x1", "x2", "x3", "x4", "x5", "x6", "x7", "x8", "x9", "x10", "x11", "x12", "x13", "x14", "x15", "x16", "x17", "x18", "x19", "x20", "x21", "x22", "x23", "x24", "x25", "x26", "x27", "x28", "x29", "x30", "x31", "x32", "x33", "x34", "x35", "x36", "x37", 
"x38", "x39", "x40", "x41", "x42", "x43", "x44", "x45", "x46", "x47", "x48", "x49", "x50", "x51", "x52", "x53", "x54", "x55", "x56", "x57", "x58", "x59", "x60", "x61", "x62", "x63", "x64", "x65", "x66", "x67", "y0", "y1", "y2", "y3", "y4", "y5", "y6", "y7", "y8", "y9", "y10", "y11", "y12", "y13", "y14", "y15", "y16", "y17", "y18", "y19", "y20", "y21", "y22", "y23", "y24", "y25", "y26", "y27", "y28", "y29", "y30", "y31", "y32", "y33", "y34", "y35", "y36", "y37", "y38", "y39", "y40", "y41", "y42", "y43", "y44", "y45", "y46", "y47", "y48", "y49", "y50", "y51", "y52", "y53", "y54", "y55", "y56", "y57", "y58", "y59", "y60", "y61", "y62", "y63", "y64", "y65", "y66", "y67"] pWriter.writerow(header) nWriter.writerow(header) for item in featureList: if item[0] not in groupByQuestion: groupByQuestion[item[0]] = [(item[1], featureList[item])] else: groupByQuestion[item[0]].append((item[1], featureList[item])) for item in groupByQuestion: fileName = sys.argv[1] + item + '_P/' + item + '_CLM_features.txt' f = pd.read_csv(fileName, delimiter=', ') for instance in groupByQuestion[item]: startTime = instance[1][0] endTime = instance[1][1] startFrame = f.ix[(f['timestamp'] - startTime).abs().argsort()[:1]].index.tolist()[0] endFrame = f.ix[(f['timestamp'] - endTime).abs().argsort()[:1]].index.tolist()[0] features = f.ix[startFrame:endFrame].mean(0).tolist() vector = instance[1][:] vector += features vector.insert(0, instance[0]) vector.insert(0, item) vector = np.asarray(vector) # print item, instance[0], startTime, endTime if questionType_PN[instance[0]] == 'P': pWriter.writerow(vector) else: nWriter.writerow(vector) pFile.close() nFile.close() if __name__ == "__main__": readHelperData() readTranscript() readCLM_DND() readCLM_PN()
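Both `readCLM_DND` and `readCLM_PN` map a start/end time to frame rows with the `(f['timestamp'] - t).abs().argsort()[:1]` idiom, i.e. a nearest-timestamp lookup. The same lookup in a compact numpy form (function name is illustrative):

```python
import numpy as np

def nearest_frame(timestamps, t):
    # Index of the frame whose timestamp is closest to t, equivalent
    # to the .abs().argsort()[:1] pattern used above but O(n) instead
    # of O(n log n).
    return int(np.abs(np.asarray(timestamps) - t).argmin())
```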
import argparse
import os

import rosbag
import tf
import numpy as np

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("bag_file", help="input bag file")
    args = parser.parse_args()

    output_file = os.path.splitext(os.path.basename(args.bag_file))[0] + ".csv"
    bag = rosbag.Bag(args.bag_file)
    # matrix = np.empty([0, 10])
    last_est = None
    # last_pos = None
    start_time = None
    print("t, statex, goalx, viconx, statez, goalz, viconz")
    for topic, msg, t in bag.read_messages(topics=['/cf06/log1']):
        time = msg.header.stamp.secs + msg.header.stamp.nsecs / 1e9
        vals = [str(v) for v in msg.values]
        print("{}, {}".format(time, ", ".join(vals)))
        # print(time)
        # print(msg.values)
        # if start_time is None:
        #     start_time = t.to_sec()
        # print(msg)
        # if topic == "/cf06/log1":
        #     for m in msg.transforms:
        #         if m.child_frame_id == "/vicon/cf_config1/cf_config1":
        #             if last_est:
        #                 # row = np.array([])
        #                 row = np.append(row, t.to_sec() - start_time)
        #                 row = np.append(row, [m.transform.translation.x,
        #                                       m.transform.translation.y,
        #                                       m.transform.translation.z])
        #                 row = np.append(row, [last_est.translation.x,
        #                                       last_est.translation.y,
        #                                       last_est.translation.z])
        #                 quaternion = (m.transform.rotation.x,
        #                               m.transform.rotation.y,
        #                               m.transform.rotation.z,
        #                               m.transform.rotation.w)
        #                 euler = tf.transformations.euler_from_quaternion(quaternion)
        #                 row = np.append(row, [euler[0], euler[1], euler[2]])
        #                 matrix = np.append(matrix, [row], axis=0)
        #         if m.child_frame_id == "tracker_test1":
        #             last_est = m.transform
    bag.close()
    # np.savetxt(output_file, matrix, delimiter=",",
    #            header="t,x_vicon,y_vicon,z_vicon,x_ext,y_est,z_est,"
    #                   "roll_vicon,pitch_vicon,yaw_vicon")
import numpy as np import pandas as pd from random import random from sec_etl import * from reports.cash_flow.etl import wrapper_get_cashflow_df_and_col from reports.cash_flow.investing_activity import wrapper_get_ia_values_and_label from reports.cash_flow.operating_activity import wrapper_get_oa_values_and_label from reports.document_entity_information.total_outstanding_shares import get_total_outstanding_shares import time from utils import make_cashflow_colnames, update_dict_with_cashflow_vals, get_save_path, get_fname_from_sourcedata # For windows OSX = False # User selected file to process fname = get_fname_from_sourcedata(OSX) tickers = load_tickers(fname) # Create column headers investing_colnames = make_cashflow_colnames('investing') operating_colnames = make_cashflow_colnames('operating') # init dict to store results info_dict = {k: [] for k in ('ticker', 'company', 'industry', 'filing_type', 'filing_date', 'dler_info', 'total_outstanding_shares', 'investing_activity_label', *investing_colnames, 'operating_activity_label', *operating_colnames,)} for i, ticker in enumerate(tickers, 1): # Meta + Excel File ETL filing, soup = get_filing_soup(ticker) company = get_company(soup) industry = get_industry(soup) xl_url, fdate = get_xl_url(filing, soup) xl_file, dler_info = get_xl_file(xl_url) # Print ticker info in command line nl = '\n' print(f'#: {i}{nl}' f'TICKER: {ticker}{nl}' f'COMPANY: {company}{nl}' f'INDUSTRY: {industry}{nl}' f'FILING: {filing}{nl}' f'FDATE: {fdate}{nl}' f'EXCEL: {xl_url}') # random sleep ~0.5 second per ticker. 
~120 tickers per min # to avoid getting timed out time.sleep(random()/2) # change to results dict info_dict['ticker'].append(ticker) info_dict['company'].append(company) info_dict['industry'].append(industry) info_dict['filing_type'].append(filing) info_dict['filing_date'].append(fdate) info_dict['dler_info'].append(dler_info) if xl_file and filing == '10-K': # get & log TOTAL OUTSTANDING SHARES (possible to log nan) total_outstanding_shares = get_total_outstanding_shares(xl_file) info_dict['total_outstanding_shares'].append(total_outstanding_shares) # ETL cashflow data cashflow_df, cashflow_col = wrapper_get_cashflow_df_and_col(xl_file) if (cashflow_df is None) or (cashflow_col is None): # log INVESTING ACTIVITIES info_dict['investing_activity_label'].append(np.nan) info_dict = update_dict_with_cashflow_vals(info_dict, investing_colnames, len(investing_colnames) * [np.nan]) # log OPERATING ACTIVITIES info_dict['operating_activity_label'].append(np.nan) info_dict = update_dict_with_cashflow_vals(info_dict, operating_colnames, len(operating_colnames) * [np.nan]) else: # get & log INVESTING ACTIVITY ia_values, ia_label = wrapper_get_ia_values_and_label(cashflow_df, cashflow_col) if (ia_values is None) or (ia_label is None): info_dict['investing_activity_label'].append(np.nan) info_dict = update_dict_with_cashflow_vals(info_dict, investing_colnames, len(investing_colnames) * [np.nan]) else: # IA ACTUALY LOGGING print('Investing Activity: ', ia_label) info_dict['investing_activity_label'].append(ia_label) info_dict = update_dict_with_cashflow_vals(info_dict, investing_colnames, ia_values) # get & log OPERATING ACTIVITY oa_values, oa_label = wrapper_get_oa_values_and_label(cashflow_df, cashflow_col) if (oa_values is None) or (oa_label is None): info_dict['operating_activity_label'].append(np.nan) info_dict = update_dict_with_cashflow_vals(info_dict, operating_colnames, len(operating_colnames) * [np.nan]) else: # OA ACTUALY LOGGING print('Operating Activity: ', oa_label) 
info_dict['operating_activity_label'].append(oa_label) info_dict = update_dict_with_cashflow_vals(info_dict, operating_colnames, oa_values) else: # log total outstanding shares info_dict['total_outstanding_shares'].append(np.nan) # log INVESTING ACTIVITIES info_dict['investing_activity_label'].append(np.nan) info_dict = update_dict_with_cashflow_vals(info_dict, investing_colnames, len(investing_colnames) * [np.nan]) # log OPERATING ACTIVITIES info_dict['operating_activity_label'].append(np.nan) info_dict = update_dict_with_cashflow_vals(info_dict, operating_colnames, len(operating_colnames) * [np.nan]) print() # convert to datafame & save info_df = pd.DataFrame(data=info_dict) spath = get_save_path(fname, OSX) info_df.to_csv(spath, index=False) print('#'*50) print(f'FINISHED') print('#'*50)
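`update_dict_with_cashflow_vals` is imported from `utils` and not shown here. A hypothetical re-implementation consistent with how it is called above — one value appended per cash-flow column, with `np.nan` lists used as padding when a section is missing (this is an assumption about the helper, not its actual source):

```python
import numpy as np

def update_dict_with_cashflow_vals(info_dict, colnames, values):
    # Append one value per cash-flow column so every key in info_dict
    # stays the same length; callers pass len(colnames) * [np.nan]
    # when the section could not be parsed.
    for name, val in zip(colnames, values):
        info_dict[name].append(val)
    return info_dict
```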
""" Put script documenation here. """ import logging log = logging.getLogger(__name__) log.addHandler(logging.NullHandler()) from pymeasure.instruments.rohdeschwarz import RohdeSMB100A, RohdeFSQ from pymeasure.instruments.agilent import Agilent4156, AgilentU2040X from pymeasure.log import console_log, file_log import numpy as np import pandas as pd class HarmonicTest(object): """ Class to define all the calibration and measurement sequences to measure harmonics and power handling in RF switch and amplifier devices. """ def __init__(self, sweep_values, **kwargs): self.smu = Agilent4156( "GPIB0::25", read_termination='\n', write_termination='\n', timeout=None) log.info('Connected to {}'.format(self.smu.id)) self.pm = AgilentU2040X( 'USB0::0x2A8D::0x1E01::MY56360005::0::INSTR', timeout=None) log.info('Connected to {}'.format(self.pm.id)) self.sig = RohdeSMB100A( "GPIB0::29", read_termination='\n', write_termination='\n', timeout=None) log.info('Connected to {}'.format(self.sig.id)) self.sa = RohdeFSQ("GPIB0::20", read_termination='\n', write_termination='\n', timeout=None) log.info('Connected to {}'.format(self.sa.id)) self.DEV_TYPE = 'SWITCH' # or AMPLIFIER self.FUND_FREQ = 983 # MHz self.CAL_POWER = -50 # dBm self.RES_BW = 100 # Hz self.FREQ_SPAN = 1 # kHz self.SWP_AVG = 10 self.SWEEP_VALUES = sweep_values self.NHARMONICS = 3 def setup_instruments(self): """ Apply instrument settings """ # signal generator self.sig.reset() self.sig.freq_unit = 'MHz' self.sig.power_unit = 'DBM' self.sig.fixed_freq = self.FUND_FREQ self.sig.power_level = self.CAL_POWER log.info('Finished setting up signal generator.') # signal analyzer self.sa.reset() self.sa.freq_unit = 'MHz' self.sa.power_unit = 'DBM' self.sa.center_freq = self.FUND_FREQ self.sa.freq_unit = 'kHz' self.sa.freq_span = self.FREQ_SPAN self.sa.freq_unit = 'Hz' self.sa.res_bw = self.RES_BW self.sa.video_bw = self.RES_BW self.sa.sweep_count = self.SWP_AVG self.sa.continuous_mode = 'OFF' self.sa.all_markers_off() 
        self.sa.freq_counter = 'ON'
        self.sa.freq_unit = 'MHz'  # was self.freq_unit: the unit belongs to the analyzer
        log.info('Finished setting up signal analyzer.')

        # setup power meter
        self.pm.reset()
        self.pm.freq_unit = 'MHZ'
        self.pm.freq = self.FUND_FREQ
        self.pm.continuous_mode = 'OFF'
        log.info('Finished setting up power meter.')

    def input_calibration(self):
        """ Determine input loss offset via power calibration """
        input('Connect power meter to probe end of the input RF cable. '
              'Press Enter to continue...')
        self.sig.output = 'ON'
        self.sig.power_level = self.CAL_POWER
        self.pm.init()
        self.INPUT_OFFSET = self.pm.read
        self.sig.output = 'OFF'
def avi_to_fits(data_list_entry):
    import os
    import cv2
    import numpy as np
    from astropy.io import fits
    from PIL import Image
    import gi
    gi.require_version("Gtk", "3.0")
    from gi.repository import Gtk

    l = 1
    if data_list_entry.data_mode == "single":
        n = 1
        if data_list_entry.data_filedata.endswith(".avi"):
            filepath = os.path.dirname(os.path.realpath(data_list_entry.data_filedata))
            actual_file = os.path.basename(data_list_entry.data_filedata)
            video_capture = cv2.VideoCapture(data_list_entry.data_filedata)
            filename = actual_file.replace(".avi", "")
            folder = filepath + "/" + filename
            if not os.path.exists(folder):
                os.mkdir(folder)
            if data_list_entry.data_type == "raw":
                imtype = "frame"
            else:
                imtype = data_list_entry.data_type
            # find the first unused sequence number l
            while True:
                filename_initial = folder + "/" + str(imtype) + "_" + str(l) + "_" + str(n) + ".fits"
                if os.path.isfile(filename_initial):
                    l = l + 1
                else:
                    break
            # dump every frame to TIFF, then convert it to FITS
            while True:
                ret, frame = video_capture.read()
                if not ret:
                    break
                filename_tif = folder + "/" + str(imtype) + "_" + str(l) + "_" + str(n) + ".tif"
                filename_fits = filename_tif.replace(".tif", ".fits")
                cv2.imwrite(filename_tif, frame)
                hdu = fits.PrimaryHDU()
                im = Image.open(filename_tif)
                hdu.data = np.array(im)
                hdu.writeto(filename_fits, overwrite=True)
                if not data_list_entry.state:
                    os.remove(filename_tif)
                n += 1
        else:
            # was Gtk.Dialog(self, ...): there is no self in this module-level
            # function, and the positional arguments match Gtk.MessageDialog
            wrn_dialog = Gtk.MessageDialog(None, 0, Gtk.MessageType.WARNING,
                                           Gtk.ButtonsType.OK, "File not AVI")
            wrn_dialog.format_secondary_text(
                "The selected file is not an avi and cannot be split into FITS")
            wrn_dialog.run()
            wrn_dialog.destroy()
            return 1
    elif data_list_entry.data_mode == "group":
        for file in os.listdir(data_list_entry.data_filedata):
            n = 1
            if file.endswith(".avi"):
                filepath = data_list_entry.data_filedata
                actual_file = data_list_entry.data_filedata + "/" + file
                video_capture = cv2.VideoCapture(actual_file)
                filename = file.replace(".avi", "")
                folder = filepath + "/" + filename
                if not os.path.exists(folder):
                    os.mkdir(folder)
                if data_list_entry.data_type == "raw":
                    imtype = "frame"
                else:
                    imtype = data_list_entry.data_type
                while True:
                    filename_initial = folder + "/" + str(imtype) + "_" + str(l) + "_" + str(n) + ".fits"
                    if os.path.isfile(filename_initial):
                        l = l + 1
                    else:
                        break
                while True:
                    ret, frame = video_capture.read()
                    if not ret:
                        break
                    filename_tif = folder + "/" + str(imtype) + "_" + str(l) + "_" + str(n) + ".tif"
                    filename_fits = filename_tif.replace(".tif", ".fits")
                    cv2.imwrite(filename_tif, frame)
                    hdu = fits.PrimaryHDU()
                    im = Image.open(filename_tif)
                    hdu.data = np.array(im)
                    hdu.writeto(filename_fits, overwrite=True)
                    if not data_list_entry.state:
                        os.remove(filename_tif)
                    n += 1
                l += 1
    else:
        print("Data mode error")
        return 2
    return 0
# Matplotlib Plotting
# Author: Javier Arturo Hernández Sosa
# Date: 30/Sep/2017
# Description: Matplotlib Plotting exercises - Alexandre Devert - (17)
# Stacked bar chart
import numpy as np
import matplotlib.pyplot as plt

data = np.array([[5., 30., 45., 22.],
                 [5., 25., 50., 20.],
                 [1., 2., 1., 1.]])
color_list = ['b', 'g', 'r']

# .shape returns (rows, columns): .shape[0] is the number of rows,
# .shape[1] the number of columns.
X = np.arange(data.shape[1])  # equivalent to X = np.arange(4)
for i in range(data.shape[0]):  # equivalent to range(3)
    # np.sum() adds the given rows along axis=0, so each series is
    # drawn on top of the ones before it.
    plt.bar(X, data[i],
            bottom=np.sum(data[:i], axis=0),
            color=color_list[i % len(color_list)])
plt.show()
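The stacking works because series `i` is drawn with `bottom` set to the column-wise sum of all previous series, so each bar starts where the bars below it end. A quick check of that cumulative offset, using the same `data` array as the plot:

```python
import numpy as np

data = np.array([[5., 30., 45., 22.],
                 [5., 25., 50., 20.],
                 [1., 2., 1., 1.]])

# Offset for series i is the element-wise sum of rows 0..i-1:
# series 0 starts at 0, series 1 starts on top of series 0, and so on.
offsets = [np.sum(data[:i], axis=0) for i in range(data.shape[0])]
print(offsets[2])  # [10. 55. 95. 42.]
```

Note that `np.sum` over the empty slice `data[:0]` with `axis=0` yields a zero vector, which is why the first series conveniently starts at the x-axis without a special case.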